brain activity log

01.04.2006 Saturday - April Fools

1st of April. In many countries this is the day of jokes: "Prima Aprilis: uwazaj bo sie pomylisz", which is Polish and means "1st of April, be careful because you might be wrong"; "April Fools"; "Pesce d'Aprile". The (net-)jokes have already started.

Heh.. after you read some of them, all of the web starts to look like a joke :D

An insightful post on Slashdot stated:

Try date -u and you will see that it is in fact April Fool's Day.

Note that April Fool's Day, as defined by the International April Pranksters Association, goes by UTC, not by local time zones. Because the IAPA is not widely recognized as an international standards body, and because many people have trouble understanding time zones, this has led to some problems since its introduction in 2002. It is especially tricky for regional publications, which are reluctant to adopt the new standard because they fear irritating their audience. One example is the Hubsborough Gazette, which famously spread confusion on the evening of March 31st, 2004 (EST) when an article claiming that aliens had attacked the White House appeared on their website. Despite the seemingly obvious nature of the hoax, many believed it and called the authorities or local clergy for guidance. One family is even reported to have spent two weeks in their backyard bunker. Since then, the editor has announced that they will only publish April Fool's articles during the hours when April 1st in their local time zone and UTC overlap, and take the articles down afterwards. Many publications have followed their example in the years since.

The guys from Centrica fooled me about bex2 emitting all zero-size output... that would be a problem. It is a 1st of April joke... isn't it? :D



31.03.2006 Friday - diary and other goodies

Hum.. a lot of images (in random order):

  • The Diophantine Equation
  • A nearly black cat
  • The Youla-Kucera theorem (kinda hard proof for the multivariable case)
  • A pair of sweet lips :)
  • The missed Frantz sister's party: mea culpa :(
  • Francesca in the lecture room
  • Silvia pressing for messaging
  • Samuel spending up to the last cent for a drum set
  • The two girls near Skile tripping three times around the block
  • A sensation close to deja-vu
  • Iakko laughing hard after drinking 4 Hoegarden glasses :D
  • ...
Nice four days :)

The promised LU decomposition code is here. Pretty straightforward once the formula is known. The file also contains the algorithms for the matrix determinant (computed either by LU or by a recursive adaptation of the Leibniz formula) and for matrix inversion. Enjoy :)

"I have a monster in me and if I wake him up he is probably going to kill me"



27.03.2006 Monday - The LU decomposition of a matrix

Back to my fast determinant, computed using the LU decomposition of a matrix.

The idea is to decompose the matrix A into a product of two matrices L and U such that L is lower triangular and U is upper triangular. We then know that det(A) = det(L) * det(U), and the determinants of triangular matrices are trivial to compute. The LU decomposition is not unique, so we are free to fix some elements of L or U. We choose the diagonal of L to be made of all ones, so the determinant of L is 1 by construction.

Determining L and U such that A = LU is a system of m^2 equations (where m is the size of A) in 2 * m^2 unknown variables. The constraints that L is lower triangular with a diagonal of all ones and that U is upper triangular reduce the number of unknowns to m^2. The system looks like the following one (3 x 3 case):
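(writing $l_{ij}$ and $u_{ij}$ for the unknown entries of L and U, with the unit diagonal on L)

$$
\begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 \\
l_{21} & 1 & 0 \\
l_{31} & l_{32} & 1
\end{pmatrix}
\begin{pmatrix}
u_{11} & u_{12} & u_{13} \\
0 & u_{22} & u_{23} \\
0 & 0 & u_{33}
\end{pmatrix}
$$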

Which can be expanded to:
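(one equation per entry of A)

$$
\begin{aligned}
a_{11} &= u_{11} & a_{12} &= u_{12} & a_{13} &= u_{13} \\
a_{21} &= l_{21} u_{11} & a_{22} &= l_{21} u_{12} + u_{22} & a_{23} &= l_{21} u_{13} + u_{23} \\
a_{31} &= l_{31} u_{11} & a_{32} &= l_{31} u_{12} + l_{32} u_{22} & a_{33} &= l_{31} u_{13} + l_{32} u_{23} + u_{33}
\end{aligned}
$$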

This shows us that we can compute L and U one row at a time, each row requiring the solution of an m-equation, m-variable system. In the 3x3 case the three systems look like:
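(row 1 has unknowns $u_{11}, u_{12}, u_{13}$; row 2 has $l_{21}, u_{22}, u_{23}$; row 3 has $l_{31}, l_{32}, u_{33}$; everything coming from the previous rows is already known)

$$
\begin{aligned}
\text{row 1: } & u_{11} = a_{11}, \quad u_{12} = a_{12}, \quad u_{13} = a_{13} \\
\text{row 2: } & l_{21} u_{11} = a_{21}, \quad l_{21} u_{12} + u_{22} = a_{22}, \quad l_{21} u_{13} + u_{23} = a_{23} \\
\text{row 3: } & l_{31} u_{11} = a_{31}, \quad l_{31} u_{12} + l_{32} u_{22} = a_{32}, \quad l_{31} u_{13} + l_{32} u_{23} + u_{33} = a_{33}
\end{aligned}
$$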

Which can be solved by inverting the coefficient matrix or by Gauss-Jordan manipulation. The Gauss-Jordan solution for the 3x3 case looks like:
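$$
\begin{aligned}
u_{11} &= a_{11}, & u_{12} &= a_{12}, & u_{13} &= a_{13} \\
l_{21} &= a_{21} / u_{11}, & u_{22} &= a_{22} - l_{21} u_{12}, & u_{23} &= a_{23} - l_{21} u_{13} \\
l_{31} &= a_{31} / u_{11}, & l_{32} &= (a_{32} - l_{31} u_{12}) / u_{22}, & u_{33} &= a_{33} - l_{31} u_{13} - l_{32} u_{23}
\end{aligned}
$$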

There's obviously a pattern in the solution vector and it can be written down as follows:
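(sweeping the rows $i = 1 \dots m$ in order)

$$
l_{ij} = \frac{a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj}}{u_{jj}} \quad (j < i),
\qquad
u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik} u_{kj} \quad (j \ge i)
$$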

Now we have an LU decomposition and can compute the determinant just by multiplying together the diagonal elements of U. Nice, eh?

This is pretty fast. While the Leibniz method costs like n!, the LU method costs like n^3, which is faaaaaar better :)
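To make the recipe concrete, here is a rough sketch of how it could go in code (plain row-major arrays, no pivoting, and definitely not a finished implementation; the real thing will live in the template matrix class):

    #include <cstdio>
    #include <limits>
    #include <vector>

    // Doolittle-style LU decomposition (no pivoting) of an n x n matrix stored
    // row-major. L has an implicit unit diagonal; on return 'a' holds U on and
    // above the diagonal and the strict lower part of L below it.
    // Returns false when a zero pivot shows up (without pivoting the
    // decomposition can fail even for some non-singular matrices).
    bool lu_decompose(std::vector<double>& a, int n)
    {
        for (int i = 0; i < n; ++i)
        {
            // l_ij = (a_ij - sum_{k<j} l_ik * u_kj) / u_jj   for j < i
            for (int j = 0; j < i; ++j)
            {
                double s = a[i * n + j];
                for (int k = 0; k < j; ++k)
                    s -= a[i * n + k] * a[k * n + j];
                if (a[j * n + j] == 0.0)
                    return false;
                a[i * n + j] = s / a[j * n + j];
            }
            // u_ij = a_ij - sum_{k<i} l_ik * u_kj   for j >= i
            for (int j = i; j < n; ++j)
            {
                double s = a[i * n + j];
                for (int k = 0; k < i; ++k)
                    s -= a[i * n + k] * a[k * n + j];
                a[i * n + j] = s;
            }
        }
        return true;
    }

    // det(A) = product of the diagonal of U (det(L) = 1 by construction).
    double lu_determinant(std::vector<double> a, int n)
    {
        if (!lu_decompose(a, n))
            return std::numeric_limits<double>::quiet_NaN(); // would need pivoting
        double det = 1.0;
        for (int i = 0; i < n; ++i)
            det *= a[i * n + i];
        return det;
    }

    int main()
    {
        std::vector<double> a = { 4, 3, 0,
                                  6, 3, 2,
                                  0, 1, 5 };
        std::printf("det = %g\n", lu_determinant(a, 3)); // prints det = -38
        return 0;
    }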

Now one should work out for which non-singular matrices the LU decomposition actually exists (without pivoting it can fail for some of them) and verify the numerical stability of the method... I'll do it in the next few days while implementing this stuff and report back later.



26.03.2006 Sunday - 1 is a number

  • 1 is a number
  • Every number has a successor (next one) that itself is a number

  • The number "before" the successor is the predecessor (of the successor)
This allows us to define the set N of natural numbers. We give each number a "name" (one, two, three...) that is defined by rules beyond the scope of this document.

  • the predecessor of 1 is 0
We include 0 in the set N to allow the following definitions.
  • We define the operation of sum as follows:
    The sum of number a and number b is computed by applying the "successor" rule to a and the "predecessor" rule to b until b reaches 0.
For example, the sum of four and three goes through the following steps (see the sketch right after this list):
  • successor of four is five, predecessor of three is two
  • successor of five is six, predecessor of two is one
  • successor of six is seven, predecessor of one is zero
  • the result is seven
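A toy sketch of the sum rule in C++ (the machine's unsigned integers just stand in for the named numbers; succ and pred are the only primitives we allow ourselves):

    #include <cstdio>

    unsigned succ(unsigned n) { return n + 1; } // the "successor" rule
    unsigned pred(unsigned n) { return n - 1; } // the "predecessor" rule (not defined for 0, just like in the text)

    // Sum: apply "successor" to a and "predecessor" to b until b reaches 0.
    unsigned sum(unsigned a, unsigned b)
    {
        while (b != 0)
        {
            a = succ(a);
            b = pred(b);
        }
        return a;
    }

    int main()
    {
        std::printf("4 + 3 = %u\n", sum(4, 3)); // walks exactly the steps listed above
        return 0;
    }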

  • We define the operation of subtraction as the inverse of sum. Being pedantic, one can define it in terms of successor and predecessor as well: subtracting b from a means applying the predecessor rule to both a and b until b reaches zero.
Oops... we can't subtract b from a when b is in the successor chain of a: we haven't defined the predecessor of zero. We define it now, along with all the other predecessors. This leads us to the definition of the set Z of integers. Z obviously includes N.

  • We define multiplication of two numbers a and b as summing b to zero a times. Being pedantic: we start with an accumulator of 0 and sum b to it while applying the predecessor rule to a until it reaches zero (see the sketch a few lines below).
  • We define division as the inverse operation of multiplication. If a multiplied by b gives c, then c divided by b gives a and c divided by a gives b.
Oops... we can now attempt to divide b by a when there is no number in Z that multiplied by a gives b.
  • Such a "thing" is still a number and is exactly b/a: a rational number.
This leads us to the definition of the set Q of rational numbers: the ones that can be represented by a fraction (ratio). Q obviously includes Z and thus N.
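Continuing the toy C++ sketch from above (same stand-ins), multiplication as repeated sum:

    #include <cstdio>

    unsigned succ(unsigned n) { return n + 1; }
    unsigned pred(unsigned n) { return n - 1; }

    // Sum, as sketched earlier: move b down to 0 while moving a up.
    unsigned sum(unsigned a, unsigned b)
    {
        while (b != 0) { a = succ(a); b = pred(b); }
        return a;
    }

    // Multiplication: the accumulator starts at 0 and b is summed to it
    // while the predecessor rule is applied to a until a reaches 0.
    unsigned mul(unsigned a, unsigned b)
    {
        unsigned acc = 0;
        while (a != 0) { acc = sum(acc, b); a = pred(a); }
        return acc;
    }

    int main()
    {
        std::printf("4 * 3 = %u\n", mul(4, 3));
        return 0;
    }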

  • We define the "power" operation of numbers a and b as muliplying 1 for a b times. Thus a to the power of b is computed as multiplying 1 for a and applying the predecessor rule to b until zero is reached.
  • We define the extraction of the N-th root as the inverse of the power operation. If a to the power of b gives c, then the b-th root of c gives a.
Oops... the N-th root of certain numbers is not inside Q: you can attempt to extract the N-th root of a number x when there is no number y in Q such that y^n = x. The square root of 2 is such a number. This leads us to the definition of the set R of real numbers. Obviously R contains Q and thus Z and N. R is dense: between any two numbers in R there is always another number.

Oops... this still does not allow the extraction of certain N-th roots of negative numbers. The square root of -1 doesn't lie in R. In other words: there is no number x in R such that x^2 = -1. We define such a number as the imaginary unit j (engineers use j :) and extend the set R with it. This leads us to the definition of the set C of complex numbers: the ones that have a real part a (which lies in R) and an imaginary part jb (which doesn't). We represent such numbers as a+jb.

...

Just to make sure you know :)

Now go and conquer the world!



25.03.2006 Saturday - matrices

Yesterday and today my fundamental problem was inverting a matrix and computing its determinant. I have written a nice template matrix class (I'll publish it on this site sooner or later).

Computing the inverse of a matrix is pretty straightforward. It can be done by Gauss elimination and has a computational cost on the order of n^3, where n is the size of the matrix. Writing the Gauss elimination algo took a couple of hours last night, but afterwards I was able to invert a 1000x1000 matrix in a matter of seconds (Athlon64 3500).
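Not the template class itself (that will come later), but a rough sketch of the idea on plain arrays, in the Gauss-Jordan flavour of the elimination with partial pivoting:

    #include <cmath>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Invert an n x n matrix (row-major) by Gauss-Jordan elimination with
    // partial pivoting. Returns false if the matrix is singular.
    bool invert(std::vector<double>& a, int n)
    {
        // Start from the identity; it will become the inverse.
        std::vector<double> inv(n * n, 0.0);
        for (int i = 0; i < n; ++i)
            inv[i * n + i] = 1.0;

        for (int col = 0; col < n; ++col)
        {
            // Partial pivoting: pick the row with the largest entry in this column.
            int pivot = col;
            for (int r = col + 1; r < n; ++r)
                if (std::fabs(a[r * n + col]) > std::fabs(a[pivot * n + col]))
                    pivot = r;
            if (a[pivot * n + col] == 0.0)
                return false; // singular
            if (pivot != col)
            {
                for (int c = 0; c < n; ++c)
                {
                    std::swap(a[pivot * n + c], a[col * n + c]);
                    std::swap(inv[pivot * n + c], inv[col * n + c]);
                }
            }

            // Normalize the pivot row.
            double p = a[col * n + col];
            for (int c = 0; c < n; ++c)
            {
                a[col * n + c] /= p;
                inv[col * n + c] /= p;
            }

            // Eliminate this column from all the other rows.
            for (int r = 0; r < n; ++r)
            {
                if (r == col) continue;
                double f = a[r * n + col];
                for (int c = 0; c < n; ++c)
                {
                    a[r * n + c] -= f * a[col * n + c];
                    inv[r * n + c] -= f * inv[col * n + c];
                }
            }
        }
        a = inv;
        return true;
    }

    int main()
    {
        std::vector<double> m = { 2, 0, 0,
                                  0, 4, 0,
                                  0, 0, 8 };
        if (invert(m, 3)) // inverse of diag(2,4,8) is diag(0.5, 0.25, 0.125)
            std::printf("inverse diagonal: %g %g %g\n", m[0], m[4], m[8]);
        return 0;
    }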

A bigger problem is computing the determinant. The Leibniz formula is too expensive to use directly since it involves enumerating all the permutations of the matrix columns. There are n! permutations of n elements, and such a complexity is obviously out of reach.
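For reference, the formula sums over all permutations of the column indices:

$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}$$

and S_n has n! elements, hence the factorial cost.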

Using the recursive determinant formula is also a heavy approach since it requires a lot of time and a really huge amount of memory. You have to compute ALL the minors of the matrix... which is too much.

I've written down a hybrid method that uses a sort of Leibniz-style expansion along the first row and a recursive method for the minors. It's still too much. Computing a 20x20 determinant is not feasible in terms of time (it requires several hours) and it also eats a lot of memory, since you need to hold 20x20 + 19x19 + 18x18 + 17x17 + ... + 1x1 matrices in memory at once.

I've read about the LU decomposition approach, which should be comparable to the inversion in terms of complexity and memory usage. The idea is to decompose our matrix into a product of two matrices, L and U. L should be lower triangular and U upper triangular. Once such a decomposition is known, the determinant of the matrix is det(L) * det(U), and both are just products of the elements on the diagonal. There is a variant in which L has all 1s on the diagonal, so det(L) is simply 1 (no need to compute it at all).

The problem, now, is to find the LU decomposition. I still have to study the algorithm better. This will be a task for tomorrow though. Tonight I'll also ask Valeria about it. She's a mathematician, so she will probably be able to give me some hints.


