I've got the first results of my "Compressed File Writing" test.

Both graphs plot the speedup (%) of compressed writing over uncompressed writing. I used [((UncompressedTime / CompressedTime) - 1) * 100], so the value drops below 0 when compression+writing is slower than plain uncompressed writing. The X axis shows the file size, from 256 KB to 128 MB.
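In code form the metric is simply this (a trivial helper; the function name is mine):

    /* Speedup of compress+write over plain write, in percent.
       100 means twice as fast; values below 0 mean it was slower. */
    double speedup_percent(double uncompressed_time, double compressed_time)
    {
        return (uncompressed_time / compressed_time - 1.0) * 100.0;
    }

For example, if the plain write takes 2.0 s and compress+write takes 1.0 s, the speedup is 100%; if the times are swapped, it is -50%.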

For each file size I compared a lot of block sizes for writing and used the best (fastest) uncompressed time as the baseline, which gives the least speedup, i.e. the most conservative numbers for compression; the sweep is sketched below.
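Something like this is what I mean by the sweep (a minimal sketch; the helper name, the gettimeofday() timing and the particular block sizes are my assumptions, not necessarily what the actual harness does):

    #include <fcntl.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* Write `size` bytes of `data` to `path` in `block`-sized chunks
       through a synchronous descriptor; return the elapsed seconds. */
    static double write_timed(const char *path, const char *data,
                              size_t size, size_t block)
    {
        struct timeval t0, t1;
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
        if (fd < 0)
            return -1.0;

        gettimeofday(&t0, NULL);
        for (size_t off = 0; off < size; off += block) {
            size_t n = size - off < block ? size - off : block;
            if (write(fd, data + off, n) != (ssize_t)n) {
                close(fd);
                return -1.0;
            }
        }
        gettimeofday(&t1, NULL);
        close(fd);

        return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    }

    /* Keep the fastest (best) uncompressed time over the sweep. */
    static double best_uncompressed(const char *path, const char *data,
                                    size_t size)
    {
        static const size_t blocks[] = { 4096, 16384, 65536, 262144 };
        double best = -1.0;
        for (size_t i = 0; i < sizeof blocks / sizeof blocks[0]; i++) {
            double t = write_timed(path, data, size, blocks[i]);
            if (t >= 0.0 && (best < 0.0 || t < best))
                best = t;
        }
        return best;
    }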

In the first graph we can see that compressing binary data before writing gives a big speedup (70% to 140% faster) for file sizes from 1 MB to 32 MB.

In the second graph we can see that compressing source code always gives a big speedup, at every tested file size up to 128 MB.

These tests were run once each on a partition of my laptop (4200 rpm hard disk) mounted with "-o sync,dirsync". I want to run more tests to get better accuracy. A teacher and friend (Diego Sevilla Ruiz) pointed out that I should also test random reads and writes. That was not my purpose for now: I'm trying to find the speedup limits of compressing before writing to the disk. To measure random read/write speeds I would need to simulate the actions of a filesystem, and a filesystem usually works with small block sizes, which makes compression+write much faster than uncompressed writing.
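For reference, the compress+write side of the measurement can look like this (a sketch assuming zlib's one-shot compress() at its default level; the real test may use another compressor, level or a streaming API). Note that the compression time sits inside the measured interval on purpose, because the question is whether compress+write beats the plain write in total:

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <unistd.h>
    #include <zlib.h>

    /* Compress `data`, then write the result to `path` in `block`-sized
       chunks through a synchronous descriptor; return elapsed seconds. */
    static double compressed_write_timed(const char *path,
                                         const unsigned char *data,
                                         size_t size, size_t block)
    {
        struct timeval t0, t1;
        uLongf cap = compressBound(size);
        unsigned char *buf = malloc(cap);
        if (buf == NULL)
            return -1.0;

        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
        if (fd < 0) {
            free(buf);
            return -1.0;
        }

        gettimeofday(&t0, NULL);
        uLongf clen = cap;
        if (compress(buf, &clen, data, size) != Z_OK) {
            close(fd);
            free(buf);
            return -1.0;
        }
        for (size_t off = 0; off < clen; off += block) {
            size_t n = clen - off < block ? (size_t)(clen - off) : block;
            if (write(fd, buf + off, n) != (ssize_t)n) {
                close(fd);
                free(buf);
                return -1.0;
            }
        }
        gettimeofday(&t1, NULL);

        close(fd);
        free(buf);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    }

The plain-write path is the same minus the compress() call, and feeding both times to speedup_percent() gives the points in the graphs.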