brain activity log
14.04.2007 - Saturday - 21:53 - Black
A question is how one blogs death. The very complex mental activity,
the tons of information recorded, the interaction with the outside world...
...all turn from reality into memories. Should one?
I feel like my brain log should record this anyway.
She survived a world war. She survived the death
of her son. She survived a heart attack.
She survived the loneliness and the distance.
All with a great will to live and dedication.
Her brain never gave up. Her body did, today.
Love you Grandma.
26.03.2007 - Monday - 02:22 - Zarabeth
Tonight I've written down the main theme of this
incredibly beautiful love song by Allan Holdsworth. It's
track no. 5 on the album "Wardenclyffe Tower".
Zarabeth was an inhabitant of the planet Sarpeidon, exiled
to her planet's past by means of the atavachron time portal.
Captain Kirk, Spock and Leonard McCoy tried to investigate
the unique technology of the atavachron and found themselves
teleported to Sarpeidon's past. Spock increasingly found himself
attracted to Zarabeth, and disturbingly more emotional and irrational,
and even went against Vulcan custom by eating meat.
Spock's emotions were released because in this time period Vulcans
had not yet learned to control their emotions. Eventually, Spock found
a door to the portal so they could return to their own period, but Zarabeth
could not leave, as she would die if she left her time period.
The above story is in Star Trek episode 3x23 "All Our Yesterdays".
Please note that this is my personal interpretation of the
theme. I've arranged the guitar, bass and keyboard parts
to be played in a "solo" fashion on a single seven string guitar.
In this tune Allan plays a six-string baritone guitar that
has a wider range than a standard one. For these reasons
some of the positions might be quite different from the ones
Allan plays, and there might be missing notes.
Some of the positions are very difficult to play because
I've tried to keep the notes ringing as long as possible.
You might try to arrange them in a somewhat different way.
I'll be playing with it over the next few days, trying to
find better positions and to write down the solo.
23.03.2007 - Friday - 10:12 - LOL
I just got this junk in my mbox. A couple of antispam
systems, including my own SpamAssassin (which
has a very low false-negative rate on this machine),
failed to catch it. Couldn't resist posting it :D
X-Spam-Checker-Version: SpamAssassin 3.1.3-gr0 (2006-06-01) on etherea
X-Spam-Status: No, score=2.6 required=3.5 tests=BAYES_80,HTML_90_100,
HTML_MESSAGE autolearn=no version=3.1.3-gr0
X-Scanned: with antispam and antivirus automated system at libero.it
Delivered-To: pragma at firenze dot linux dot it
Received: from 22.214.171.124 (HELO mx1.punkass.com)
by siena.linux.it with esmtp (B8=9B6:>7'( ;3U*)
for pragma at siena dot linux dot it; Fri, 23 Mar 2007 08:45:13 -0200
From: "Rory Combs" <firstname.lastname@example.org>
To: <pragma at siena dot linux dot it>
Subject: anti-spammers are lamers
Date: Fri, 23 Mar 2007 08:45:13 -0200
X-Mailer: Microsoft Office Outlook, Build 11.0.6353
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1158
X-Virus-Scanned: by amavisd-new-20030616-p10 (Debian) at firenze.linux.it
This is a multi-part message in MIME format.
<META HTTP-EQUIV=3D"Content-Type" CONTENT=3D"text/html; charset=3Dus-ascii">
<meta name=3DGenerator content=3D"Microsoft Word 11 (filtered medium)">
20.03.2007 - Tuesday - 18:10 - Matrices for dummies
A matrix is an ordered, two-dimensional collection
of mathematical expressions, usually represented as a rectangular table.
The horizontal lines in a matrix are called rows and the vertical
lines are called columns. A matrix with m rows and
n columns is called an m-by-n matrix (written mxn)
and m and n are called its dimensions. The dimensions of a matrix
are always given with the number of rows first, then the number of columns.
If the number of rows of a matrix equals the number of columns (m = n) then
the matrix is said to be square otherwise it's just rectangular.
Square matrices have several interesting properties that we'll talk about later.
The entry of a matrix A that lies in the i-th row and the j-th column is called
the i,j entry or (i,j)-th entry of A. This is written as a_(i,j),
a_ij or A[i,j]. The row is always noted first, then the column.
If the entries of a matrix are all real numbers then the matrix is said
to be real. If the entries are complex numbers then
the matrix is likewise said to be complex. If the entries
are polynomials then (guess what?) the matrix is said to be polynomial too.
The entries of a matrix usually have some associated meaning but we don't
care about that in this article. Now let's just say they are mathematical
expressions (maybe numbers) and concentrate on matrix manipulation.
Let's play with it
We define the matrix sum as an operation that, given
two mxn matrices A and B, returns an mxn matrix C with entries that are sums
of the corresponding entries in A and B. Please note that the sum
is defined only for matrices of exactly the same dimensions: we say
that such matrices are sum-compatible.
For sum-compatible matrices it's obvious that the sum is commutative and associative:
A + B = B + A
(A + B) + C = A + (B + C)
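The definition above can be sketched in a few lines of Python, storing a matrix as a list of rows. The helper names (dims, mat_add) are my own, not from any library:

```python
# Minimal sketch of the matrix sum; matrices are stored as lists of rows.
# dims and mat_add are hypothetical helper names, not library functions.

def dims(a):
    """Return (rows, columns) of a matrix stored as a list of rows."""
    return len(a), len(a[0])

def mat_add(a, b):
    """Entry-wise sum of two sum-compatible (same-dimension) matrices."""
    if dims(a) != dims(b):
        raise ValueError("matrices are not sum-compatible")
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
print(mat_add(A, B))  # [[11, 22], [33, 44]]
```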
We define the scalar multiplication as an operation
that, given an mxn matrix A and a scalar expression k, returns an mxn matrix B
with each entry made of the corresponding entry of A multiplied by k.
It's again obvious that for sum-compatible matrices A, B and any scalar expression k
k(A + B) = kA + kB
and for any matrix A and any couple of scalar expressions h, k
(h + k)A = hA + kA
h(kA) = (hk)A
Food for thought: multiplication by a scalar is commutative
if the underlying ring (of expressions) is commutative. This is true when the expressions
are (real or complex) numbers or polynomials, which covers most real-world cases in which
matrices are used. However, matrix algebra can also be applied to non-commutative rings
(for example quaternions), where the multiplication by a scalar must be split into two
different operations: left multiplication and right multiplication.
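Scalar multiplication is equally short in the same list-of-rows convention; scalar_mul is again a hypothetical helper name:

```python
# Sketch of scalar multiplication; scalar_mul is a hypothetical helper name.

def scalar_mul(k, a):
    """Multiply every entry of matrix a by the scalar k."""
    return [[k * x for x in row] for row in a]

A = [[1, 2], [3, 4]]
print(scalar_mul(3, A))  # [[3, 6], [9, 12]]
```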
Not that obvious
We define the matrix multiplication as an operation
that, given an mxp matrix A and a pxn matrix B,
returns an mxn matrix C with element i,j computed
as the scalar product of the i-th row of A and the j-th column of B:
c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_ip*b_pj
Note that the matrix multiplication is well defined only for
pairs in which the left matrix has a number of columns equal to the
number of rows of the right matrix. We say such two matrices
are multiplication-compatible.
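The row-by-column rule translates directly to Python, again with matrices as lists of rows; mat_mul is my own helper name:

```python
# Sketch of the row-by-column matrix product; mat_mul is a
# hypothetical helper, not a library function.

def mat_mul(a, b):
    """Product of an mxp matrix a and a pxn matrix b."""
    m, p = len(a), len(a[0])
    if len(b) != p:
        raise ValueError("matrices are not multiplication-compatible")
    n = len(b[0])
    # c[i][j] is the scalar product of row i of a and column j of b
    return [[sum(a[i][k] * b[k][j] for k in range(p))
             for j in range(n)]
            for i in range(m)]

A = [[1, 2, 3], [4, 5, 6]]        # 2x3
B = [[7, 8], [9, 10], [11, 12]]   # 3x2
print(mat_mul(A, B))  # [[58, 64], [139, 154]]
```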
Food for thought: the multiplication
of two nxn matrices processes 2n^2
entries. However there is no known algorithm with a computational
cost of O(n^2). Most algorithms
are O(n^3) and the most clever
implementations are O(n^2.8).
An O(n^2.376) algorithm has been proposed
by Coppersmith and Winograd, but the implicit factor hidden by the O() notation
is so big that its implementation is worthwhile only if we're going to
multiply matrices with an n that is out of our current computing possibilities.
It's very easy to show that (and here comes the non-obvious part) the matrix
multiplication is generally not commutative, that is
AB != BA
except for very few special cases. The (square) matrices for which AB = BA holds
are said to commute and must satisfy strict rules
on their elements.
The non-commutativity of the matrix multiplication makes
algebraic manipulation non-trivial and causes
infinite headaches to engineering students.
However, we're lucky, since the associative and distributive properties
still apply, and it can be proven that the following equations are all true
(given that the matrices involved are multiplication-compatible and
the underlying ring is commutative):
A(BC) = (AB)C
A(B + C) = AB + AC
(A + B)C = AC + BC
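Non-commutativity is easy to see numerically with a pair of 2x2 matrices; mat_mul below is the same hypothetical helper sketched for the product definition:

```python
# Quick numeric check that matrix multiplication is not commutative.
# mat_mul is the same hypothetical list-of-rows helper as before.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(mat_mul(A, B))  # [[2, 1], [1, 1]]
print(mat_mul(B, A))  # [[1, 1], [1, 2]]
print(mat_mul(A, B) == mat_mul(B, A))  # False
```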
We define the transpose of an mxn matrix A
as the nxm matrix B obtained from A by swapping rows with columns.
The transpose of a matrix A is often written as A^T
or as A'.
Note that swapping rows with columns effectively means swapping
the order of the indices of each element: the element a_ij
of the matrix A becomes the element a_ji of its transpose.
Food for thought: this property is interesting in computer
matrix processing. To apply an algorithm to the transpose of a matrix instead of the original
one, we can simply swap the parameters of all the matrix element access functions...
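In Python the column swap is a one-liner, since zip(*a) pairs up the i-th entries of all rows; transpose is my own helper name:

```python
# Sketch of transposition: zip(*a) groups the i-th entries of all rows,
# which is exactly the row/column swap described above.

def transpose(a):
    return [list(col) for col in zip(*a)]

A = [[1, 2, 3], [4, 5, 6]]
T = transpose(A)
print(T)  # [[1, 4], [2, 5], [3, 6]]
# The index-swap property: T[i][j] equals A[j][i]
print(T[2][1] == A[1][2])  # True
```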
A matrix whose transpose is equal to itself is called a symmetric matrix;
that is, A is symmetric if A^T = A. Note that A must be
square to be symmetric and internally the elements must satisfy the relation
a_ij = a_ji.
It's easy to show that
(A^T)^T = A
for any matrix A, thus the transposition is a self-inverse operation.
Also, for two matrices with the same dimensions,
(A + B)^T = A^T + B^T
If the matrices A and B are multiplication-compatible then
(AB)^T = B^T A^T
Note that the order of multiplication is inverted.
And finally, taking the transpose of a scalar (a 1x1 matrix) is a null operation:
k^T = k
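The reversal in (AB)^T = B^T A^T is worth checking numerically; the sketch below reuses the same hypothetical mat_mul and transpose helpers from the earlier examples:

```python
# Numeric check of (AB)^T = B^T A^T with the hypothetical helpers
# used in the earlier sketches.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
lhs = transpose(mat_mul(A, B))
rhs = mat_mul(transpose(B), transpose(A))
print(lhs == rhs)  # True
```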
A particular square matrix that commutes with all other
matrices of the same size is the identity matrix, written I.
The identity matrix has ones on
its main diagonal and zeros everywhere else.
It's easy to prove that
AI = IA = A
and thus the identity matrix is the "unity" element of the matrix algebra:
multiplication by the identity matrix leaves a matrix unchanged.
Obviously the transpose of an identity matrix is still an identity matrix.
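Building the identity matrix and checking AI = IA = A takes only a few lines; identity and mat_mul are, as before, my own helper names:

```python
# Sketch of the identity matrix plus a numeric check of AI = IA = A.
# identity and mat_mul are hypothetical helpers, not library functions.

def identity(n):
    """nxn matrix with ones on the main diagonal, zeros elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[2, 5], [1, 3]]
I = identity(2)
print(mat_mul(A, I) == A and mat_mul(I, A) == A)  # True
```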
Given a square matrix A we define the inverse matrix of A
as the matrix that, when multiplied by A, gives the identity matrix as result.
The inverse matrix is usually written as A^-1.
The inverse matrix does not necessarily exist. A matrix that has no inverse
is said to be non-invertible and later we will discover
that it is also called singular.
Note that A and its inverse (when it exists) do commute.
Food for thought: for non-square matrices
we can define the left inverse (A^-1 A = I) and the
right inverse (A A^-1 = I). Such inverses
have few real-world applications...
It can be shown that the inverse of a matrix is again invertible and that
(A^-1)^-1 = A
for any invertible matrix A, and that
(kA)^-1 = (1/k) A^-1
for any invertible matrix A and any non-null scalar k.
It can also be proven that
(AB)^-1 = B^-1 A^-1
for invertible matrices A and B of the same size. Note that the order
of factors is inverted and the formula is very similar to the one
that involves transposition.
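For 2x2 matrices there is a well-known closed form built on the determinant ad - bc (general matrices need an algorithm such as Gaussian elimination, which belongs to a later lesson). A sketch, using Fraction to keep the arithmetic exact; inverse2 and mat_mul are my own helper names:

```python
# Sketch of the closed-form 2x2 inverse; inverse2 and mat_mul are
# hypothetical helpers. Fraction keeps the arithmetic exact.

from fractions import Fraction

def inverse2(a):
    """Inverse of a 2x2 matrix [[p, q], [r, s]] via the ps - qr formula."""
    (p, q), (r, s) = a
    det = p * s - q * r
    if det == 0:
        raise ValueError("matrix is singular (non-invertible)")
    d = Fraction(1, det)
    return [[ s * d, -q * d],
            [-r * d,  p * d]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[4, 7], [2, 6]]
# Multiplying A by its inverse gives back the 2x2 identity matrix.
print(mat_mul(A, inverse2(A)) == [[1, 0], [0, 1]])  # True
```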
Finding the inverse of a matrix is a very common and computationally intensive task.
There are several algorithms that implement this operation, and many of
them perform better on matrices whose elements satisfy certain properties
or conformations. The task of finding the inverse is strictly related
to the computation of the determinant, which is the subject
of the next lesson. Stay tuned :)
16.03.2007 - Friday - 01:10 - TopCoder
Warning: self-glorification follows: don't read this post.
I've been looking at my past bookmarks tonight and in some deep
folder I've found the link to topcoder.com. I did some
competitions just for fun in 2004, including a Google Code Jam
(hmm... they promised to send me a T-shirt but never
delivered... bad, bad Google!). Anyway, after only those 3 matches
I found myself still in the "outsiders" part of the graph.
Well, ok, Petr,
the top rated member, has a score of 3426, which is roughly double mine, but he also did nearly 300 matches,
which means he has been playing this thingie weekly for several years now. It's a nice sensation :)
Curious note: the Russian Federation and Poland are the topmost rated countries.
Besides this and after all, the topcoder arena is really fun:
give it a try.
So you've read the post anyway, heh?
want more ?
... really ? :D
Browse around then.