brain activity log
23.03.2007 - Friday - 10:12 - LOL
I just got this junk in my mbox. A couple of antispam
systems including my own spamassassin (which
has a very low false-negative rate on this machine)
failed to catch it. Couldn't resist posting it :D
X-Spam-Checker-Version: SpamAssassin 3.1.3-gr0 (2006-06-01) on etherea
X-Spam-Status: No, score=2.6 required=3.5 tests=BAYES_80,HTML_90_100,
HTML_MESSAGE autolearn=no version=3.1.3-gr0
X-Scanned: with antispam and antivirus automated system at libero.it
Delivered-To: pragma at firenze dot linux dot it
Received: from 22.214.171.124 (HELO mx1.punkass.com)
by siena.linux.it with esmtp (B8=9B6:>7'( ;3U*)
for pragma at siena dot linux dot it; Fri, 23 Mar 2007 08:45:13 -0200
From: "Rory Combs" <firstname.lastname@example.org>
To: <pragma at siena dot linux dot it>
Subject: anti-spammers are lamers
Date: Fri, 23 Mar 2007 08:45:13 -0200
X-Mailer: Microsoft Office Outlook, Build 11.0.6353
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1158
X-Virus-Scanned: by amavisd-new-20030616-p10 (Debian) at firenze.linux.it
This is a multi-part message in MIME format.
<META HTTP-EQUIV=3D"Content-Type" CONTENT=3D"text/html; charset=3Dus-ascii">
<meta name=3DGenerator content=3D"Microsoft Word 11 (filtered medium)">
20.03.2007 - Tuesday - 18:10 - Matrices for dummies
A matrix is an ordered, bidimensional collection
of mathematical expressions usually represented as a rectangular table.
The horizontal lines in a matrix are called rows and the vertical
lines are called columns. A matrix with m rows and
n columns is called an m-by-n matrix (written mxn)
and m and n are called its dimensions. The dimensions of a matrix
are always given with the number of rows first, then the number of columns.
If the number of rows of a matrix equals the number of columns (m = n) then
the matrix is said to be square, otherwise it's just rectangular.
Square matrices have several interesting properties that we'll talk about later.
The entry of a matrix A that lies in the i-th row and the j-th column is called
the i,j entry or (i,j)-th entry of A. This is written as a_(i,j),
a_ij or A[i,j]. The row is always noted first, then the column.
If the entries of a matrix are all real numbers then the matrix is said
to be real. If the entries are complex numbers then
the matrix is said to be complex too. If the entries
are polynomials then (guess what?) the matrix is said to be polynomial too.
The entries of a matrix usually have some associated meaning but we don't
care about that in this article. Now let's just say they are mathematical
expressions (maybe numbers) and concentrate on matrix manipulation.
Let's play with it
We define the matrix sum as an operation that given
two mxn matrices A,B returns a mxn matrix C with entries that are sums
of the corresponding entries in A and B. Please note that the sum
is defined only for matrices of exactly the same dimensions: we say
that such matrices are sum-compatible.
For sum-compatible matrices it's obvious that the sum is commutative and associative: A + B = B + A and (A + B) + C = A + (B + C).
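The matrix sum can be sketched in a few lines of plain Python, representing a matrix as a list of rows (the function name and example values are mine, just for illustration):

```python
def mat_sum(A, B):
    # the sum is defined only for matrices of exactly the same dimensions
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "not sum-compatible"
    # each entry of C is the sum of the corresponding entries of A and B
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_sum(A, B))  # [[6, 8], [10, 12]]
```

Swapping the arguments gives the same result, as expected from commutativity.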
We define the scalar multiplication as an operation
that given a mxn matrix A and a scalar expression K returns a mxn matrix B
with each entry made of the corresponding entry of A multiplied by K.
It's again obvious that for sum-compatible matrices A,B and any scalar expression k we have k(A + B) = kA + kB,
and for any matrix A and any couple of scalar expressions h,k we have (h + k)A = hA + kA and (hk)A = h(kA).
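Scalar multiplication is even simpler to sketch (again in plain Python, with an illustrative function name of my own):

```python
def scalar_mul(k, A):
    # multiply every entry of A by the scalar k
    return [[k * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
print(scalar_mul(3, A))  # [[3, 6], [9, 12]]
```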
Food for thought: multiplication by a scalar is commutative
if the underlying ring (of expressions) is commutative. This is true when the expressions
are (real or complex) numbers or polynomials, which covers most real-world cases in which matrices
are applicable. However, matrix algebra can also be applied to non commutative rings
(for example quaternions) where the multiplication by a scalar must be split into two
different operations: left multiplication and right multiplication.
Not that obvious
We define the matrix multiplication as an operation
that given a mxp matrix A and a pxn matrix B
returns a mxn matrix C with element i,j computed
as the scalar product of the i-th row of A and the j-th column of B:
c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_ip*b_pj.
Note that the matrix multiplication is well defined only for
pairs in which the left matrix has a number of columns equal to the
number of rows of the right matrix. We say that such matrices
are multiplication-compatible.
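The definition translates directly into the classic triple loop; here's a minimal Python sketch (names and example matrices are mine):

```python
def mat_mul(A, B):
    m, p, n = len(A), len(B), len(B[0])
    # A must have as many columns as B has rows
    assert len(A[0]) == p, "not multiplication-compatible"
    # c[i][j] is the scalar product of row i of A and column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

A = [[1, 2, 3], [4, 5, 6]]        # a 2x3 matrix
B = [[7, 8], [9, 10], [11, 12]]   # a 3x2 matrix
print(mat_mul(A, B))  # [[58, 64], [139, 154]] -- a 2x2 result
```

Note the three nested iterations over m, n and p: for square nxn matrices this is exactly the O(n^3) cost mentioned below.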
Food for thought: the multiplication
of two nxn matrices processes 2n^2
entries. However there is no known algorithm with computational
cost of O(n^2). Most algorithms
are O(n^3) and the most clever
implementations are O(n^2.8).
An O(n^2.376) algorithm has been proposed
by Coppersmith and Winograd but the implicit factor hidden by the O() notation
is so big that its implementation is worthwhile only if we're going to
multiply matrices with n that is out of our current computing possibilities.
It's very easy to show that (and here comes the non obvious part) the matrix
multiplication is generally not commutative, that is AB != BA
except for very few special cases. The (square) matrices for which AB = BA
are said to commute and must satisfy strict rules
on their elements.
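A tiny concrete counterexample (matrices chosen by me) shows the non commutativity immediately:

```python
def mat_mul(A, B):
    p = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(mat_mul(A, B))  # [[2, 1], [1, 1]]
print(mat_mul(B, A))  # [[1, 1], [1, 2]] -- AB != BA
```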
The non commutativity of the matrix multiplication makes
algebraic manipulation non trivial and causes
endless headaches to engineering students.
However, we're lucky since the associative and distributive properties
still apply and it can be proven that the following equations are all true:
(AB)C = A(BC), A(B+C) = AB + AC, (A+B)C = AC + BC and k(AB) = (kA)B = A(kB)
(given that the matrices involved are multiplication-compatible and
the underlying ring is commutative).
We define the transpose of a mxn matrix A
as a nxm matrix B obtained from A by swapping rows with columns.
The transpose of a matrix A is often written as A^T
or as A'.
Note that swapping rows with columns effectively means swapping
the order of indices of each element. The element a_ij
of the matrix A becomes the element a_ji of the transpose A^T.
Food for thought: this property is interesting in computer
matrix processing. To apply an algorithm to the transpose of a matrix instead of the original
one we can simply swap the parameters of all the matrix element access functions...
A matrix whose transpose is equal to itself is called a symmetric matrix;
that is, A is symmetric if A^T = A. Note that A must be
square to be symmetric and internally the elements must satisfy the relation
a_ij = a_ji.
It's easy to show that (A^T)^T = A
for any matrix A, thus the transposition is a self-inverse operation.
Also, for two matrices with the same dimensions, (A + B)^T = A^T + B^T.
If the matrices A and B are multiplication-compatible then (AB)^T = B^T A^T.
Note that the order of multiplication is inverted.
And finally, taking the transpose of a scalar (a 1x1 matrix) is a null operation: k^T = k.
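In Python the row/column swap falls out of the built-in zip (a sketch with my own function name):

```python
def transpose(A):
    # zip(*A) groups the j-th element of every row, i.e. column j of A
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [4, 5, 6]]          # 2x3
print(transpose(A))                  # [[1, 4], [2, 5], [3, 6]] -- 3x2
print(transpose(transpose(A)) == A)  # True: transposition is self-inverse
```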
A particular square matrix that commutes with all other
matrices of the same size is the identity matrix.
The identity matrix has unit elements on
its main diagonal and zero elements everywhere else.
It's easy to prove that AI = IA = A
and thus the identity matrix is the "unity" element of the matrix algebra
and the multiplication by the identity matrix is an idempotent operation.
Obviously the transpose of an identity matrix is still an identity matrix.
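Building the identity and checking AI = IA = A is a one-liner each (sketch, names are mine):

```python
def identity(n):
    # ones on the main diagonal, zeros elsewhere
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    p = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
I = identity(2)
print(mat_mul(A, I) == A and mat_mul(I, A) == A)  # True
```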
Given a square matrix A we define the inverse matrix of A
as the matrix that when multiplied by A gives the identity matrix as result.
The inverse matrix is usually written as A^-1.
The inverse matrix does not necessarily exist. A matrix that has no inverse
is said to be non invertible and later we will discover
that it is also singular.
Note that A and its inverse (when it exists) do commute: AA^-1 = A^-1A = I.
Food for thought: for non square matrices
we can define the left inverse (A^-1A = I) and the
right inverse (AA^-1 = I). Such inverses
have few real world applications...
It can be shown that the inverse of a matrix is again invertible, with (A^-1)^-1 = A
for any invertible matrix A, and that (kA)^-1 = (1/k)A^-1
for any invertible matrix A and any non null scalar k.
It can also be proven that (AB)^-1 = B^-1A^-1
for invertible matrices A and B of the same size. Note that the order
of factors is inverted and the formula is very similar to the one
that involves transposition.
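For the 2x2 case the inverse has a well-known closed form (the divisor ad - bc is the determinant we'll meet in the next lesson); here's a sketch, with function name and example values of my own:

```python
def inverse_2x2(A):
    # closed-form inverse of a 2x2 matrix: swap a and d,
    # negate b and c, divide everything by the determinant
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (non invertible)")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4.0, 7.0], [2.0, 6.0]]   # det = 4*6 - 7*2 = 10
print(inverse_2x2(A))          # [[0.6, -0.7], [-0.2, 0.4]]
```

Multiplying the result back by A gives the identity matrix, as the definition requires.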
Finding the inverse of a matrix is a very common and computationally intensive task.
There are several algorithms that implement this operation and many of
them operate better on matrices with elements that satisfy certain properties
or conformations. The task of finding the inverse is strictly related
to the computation of the determinant, which is the topic
of the next lesson. Stay tuned :)
16.03.2007 - Friday - 01:10 - TopCoder
Warning: self-glorification follows: don't read this post.
I've been looking at my past bookmarks tonight and in some deep
folder I've found the link to topcoder.com. I did some
competitions just for fun in 2004, including a Google Code Jam
(hmm... they have promised to send me a T-Shirt but never
fulfilled... bad, bad Google!). Anyway, after only those 3 matches
I found myself still in the "outsiders" part of the graph.
Well, ok, Petr,
the top rated member has a score of 3426, which roughly doubles mine, but he also did nearly 300 matches,
which means he's been playing this thing weekly for several years now. It's a nice sensation :)
Curious note: the Russian Federation and Poland are the top rated countries.
All in all, the topcoder arena is really fun:
give it a try.
So you've read the post anyway, heh?
12.03.2007 - Monday - 21:43 - Log (all in one)
I've submitted a kernel qconf patch to kbuild-devel@sf but the list seems to have been quite
silent for a couple of weeks. I'll resubmit to the kbuild maintainer if I don't get
feedback in a day or two.
I've tested XGL and beryl recently. You can see some screenshots
on the right. Yes, it's running inside an X window and yes, I have
tried it as a fullscreen server :P
It basically works like a charm. The window movements are very
cool and the 3d cube really adds a new dimension to the desktop.
I've found only some really minor bugs like the tooltip trick
that you can see in the third photo (but this might be tweakable).
The fact is that with the ati proprietary drivers the internal
part of the windows seems to slow down a lot. Most browser pages
flicker and jump a bit when scrolling. With the open source radeon
drivers things should be better but at the moment the RV580 chip
is not supported (X1950 Pro) because ATI is not releasing specs.
Will have to wait a bit more for this to become really usable.
Gabriele is using beryl with the nvidia native support and
he didn't notice any slowdown.
Ethy is preparing
the graphic layout for the new KVirc
web site. It's very nice! You can take a look here.
Next song is "Word Up", as performed by Korn.
Yo, pretty ladies around the world
Very cool because it's powerful and makes you wanna dance :)
Gotta a weird thing to show you
So tell all the boys and girls
Tell your brother, your sister and your momma too
We're about to go down
And you know just what to do
Wave your hands in the air like you don't care
Glide by the people as they start to look and stare
Do your dance, Do your dance
Do your dance quick, mom
C'mon baby tell me what's the word
I'm prolly going to start a collaboration with truelite.
EBS offices definitively moved. Frantz is a bit upset for this reason.
Nearly completed the fourth eTraveler milestone.
Maybe I've found the root cause of a very annoying
disease that has been following me for months or maybe even years. Don't worry: nothing deadly,
just extremely disturbing. This would be really cool since this time
I would kill the beast forever instead of constantly fighting the symptoms.
It looks like eutelia is going to
bring wireless connectivity to my home town. I've been waiting for this for years.