File Compression Voodoo

bloodge1

I'm trying to duplicate a piece of text as many times as possible and then put it onto a DVD. I first created an .xlsx file with the text written in about 100k cells; this file is about 369 KB. Then I duplicated the file 1,000 times and compressed the copies into an archive. Then I duplicated that archive 1,000 times and compressed those compressed files into a new archive. I have repeated this process to the point that I now have a 75 KB file that contains (I think) 1,000,000,000,000,000,000,000 copies of the original 369 KB file. The file is decompressing correctly, and I just don't get how this is possible.

Here's a link to the 75 KB file I've created: https://www.dropbox.com/s/4xvr2x4ax3kfhpb/1%20octillion.7z?dl=0

Can someone explain how this works? How am I able to duplicate compressed files, keep re-compressing them, continue making them smaller and smaller, and still have them decompress correctly? It seems like I'm squeezing an elephant into a thimble.
 
Solution
Because the same data is only there once, as far as the compressor is concerned. When an archiver sees 1,000 identical files, it doesn't store 1,000 copies; it effectively stores one copy plus an instruction that says "take this data and repeat it 1,000 times." Each round of your process adds only that tiny repeat instruction, so the archive stays small no matter how many nominal copies it contains.

If you want a large file that doesn't compress well, use randomness.
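Here's a minimal sketch of both effects in Python, using zlib as a stand-in for 7-Zip's LZMA (the byte string is a made-up placeholder, not your actual xlsx contents): a buffer built from 1,000 identical copies compresses down to almost nothing, while random bytes of the same length barely shrink at all.

```python
import os
import zlib

# One "file" worth of highly repetitive text (a stand-in for the xlsx payload).
block = b"the same text in every cell " * 1000

# 1,000 identical copies concatenated: what the archiver sees after duplication.
repeated = block * 1000

# Random bytes of the same total length: no patterns for the compressor to exploit.
random_data = os.urandom(len(repeated))

print("uncompressed:    ", len(repeated))
print("repeated, zlib:  ", len(zlib.compress(repeated, 9)))     # tiny: the pattern is stored once
print("random, zlib:    ", len(zlib.compress(random_data, 9)))  # roughly the original size
```

Run it and the repeated buffer collapses to a few kilobytes while the random one stays essentially full size. That's the whole trick behind your 75 KB archive: the elephant was 10^21 copies of the same thimble-sized thing.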