MonoBoss Project is born

Today I uploaded the MonoBoss Project's web page (

As we say on the page: "MonoBoss is a technology for building distributed cluster systems. It can be used to solve large-scale computation problems and to deploy reliable and scalable services with high availability".

MonoBoss is based on Mono for the infrastructure and cluster administration. MonoBoss services must be developed in C#, but in the future it will also support C++ and other languages.

00:00 | Comments | MonoBoss

Microsoft way of life

Yesterday I witnessed a short conversation about Internet Explorer 7:


a. - I'm using the new Internet Explorer now, have you seen it?

b. - No.

a. - Well, it's really cool, it's just like FireFox.


After that I saw how an ordinary Windows user prefers to keep using Internet Explorer even while claiming it is just like FireFox. It's really sad that people can be so oblivious.

00:00 | Comments | Reflections, Internet Explorer

Uncompressed VS Compressed File Write

I've got the first results of my "Compressed File Writing" test.

Both graphs plot the speedup (%) of compressed writing over uncompressed writing. I used [((UncompressedTime / CompressedTime) - 1) * 100], so values fall below 0 when compression+writing is slower than uncompressed writing. The X axis shows the file size, ranging from 256 KB to 128 MB.
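The speedup formula above can be sketched as a small helper (a minimal illustration; the function name and sample timings are my own, not from the post):

```python
def speedup_percent(uncompressed_time: float, compressed_time: float) -> float:
    """Speedup (%) of compressed writing over uncompressed writing.

    Returns a negative value when compression+writing is slower
    than plain uncompressed writing, matching the post's convention.
    """
    return ((uncompressed_time / compressed_time) - 1) * 100

# Examples with hypothetical timings (seconds):
print(speedup_percent(2.0, 1.0))  # compressed twice as fast -> 100.0
print(speedup_percent(1.0, 2.0))  # compressed twice as slow -> -50.0
```

With this definition a value of 100 means compressed writing finished in half the time, and 0 means both approaches took equally long.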

I compared many block sizes for writing and graphed the best times for uncompressed writing (the least speedup).

In the first graph we can see that compressing binary data before writing gives a large speedup (70% to 140% faster) from 1 MB to 32 MB.

In the second graph we can see that compressing source code always yields a large speedup, at least from 0 to 128 MB of file size.

These tests were run once on a partition of my laptop (4200 rpm HD) mounted with "-o sync,dirsync". I want to run more tests to improve accuracy. A teacher and friend (Diego Sevilla Ruiz) pointed out that I should test random reads and writes, but that was not my purpose for now: I'm trying to find the speedup limits of compressing before writing to disk. To measure random read/write speeds I would need to simulate the behavior of a file system, which usually works with small block sizes that make compression+write much faster than uncompressed writing.
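The measurement itself can be sketched roughly as follows. This is my own minimal reconstruction of the idea, not the post's actual benchmark code: it times block-wise writes of the same data with and without zlib compression, using O_SYNC to approximate the "-o sync" mount option (the file names, block size, and compression library are all assumptions):

```python
import os
import time
import zlib


def write_bench(path, data, block_size=64 * 1024, compress=False):
    """Time writing `data` to `path` in fixed-size blocks.

    When `compress` is True, each block is zlib-compressed before being
    written. O_SYNC makes every write hit the disk, roughly mimicking a
    partition mounted with "-o sync".
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_SYNC)
    start = time.perf_counter()
    try:
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            if compress:
                block = zlib.compress(block)
            os.write(fd, block)
    finally:
        os.close(fd)
    return time.perf_counter() - start
```

A usage sketch, computing the same speedup figure as the graphs:

```python
# data = open("sample_input.bin", "rb").read()   # hypothetical input file
# t_plain = write_bench("/tmp/plain.bin", data)
# t_comp = write_bench("/tmp/comp.bin", data, compress=True)
# print(((t_plain / t_comp) - 1) * 100, "% speedup")
```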

00:00 | Comments | Ideas