500 million files
Today I got to wondering what it takes to create a directory structure with 500 million files, on Linux.
First, you need 500 million inodes and 500 million blocks, plus whatever is required for the directories themselves. Since a block on ext2 is at least 1k, that implies it will take at least 0.5TB of disk space. I don't have that much space free on my laptop for such a silly experiment, so I decided to set myself a more interesting goal, and see how much I could cram into 4GB instead.
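The 0.5TB figure is just 500 million blocks times 1k each; a quick shell arithmetic check (illustrative only, ignoring inode tables and directory blocks):

```shell
# 500 million 1k data blocks, in bytes and GiB.
# Data blocks only; inodes and directory blocks come on top.
blocks=500000000
bytes=$((blocks * 1024))
echo "$bytes bytes"                         # 512000000000
echo "$((bytes / 1024 / 1024 / 1024)) GiB"  # 476 GiB, i.e. roughly 0.5TB
```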
By tweaking the inode-to-block ratio, I was able to create a 4GB file system with 2 million inodes and 3.6 million 1k blocks, like so:
$ dd if=/dev/zero of=testfile.ext2 bs=1M count=4096
$ mke2fs -N 2000000 -b 1024 testfile.ext2
This I mounted using the loopback driver:
$ mkdir mountpoint
$ sudo mount testfile.ext2 mountpoint -o loop,async,noatime
Then I started creating directories and just experimenting.
One of the things I discovered, which I had forgotten, is that a hard link takes very little space because it is contained entirely inside the directory entry - it has neither a block nor an inode of its own.
However, I also discovered that a single file can only have 32000 hard links to it, due to the filesystem's cap on an inode's link count (LINK_MAX is 32000 on ext2).
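Both properties are easy to see with stat: a hard link shares the original's inode, and the link count ticks up with each one. A quick sketch, with made-up filenames in a throwaway directory:

```shell
# Hard links share one inode: stat reports the same inode number
# for both names, and a link count of 2 after one extra link.
dir=$(mktemp -d)
echo "same contents" > "$dir/original"
ln "$dir/original" "$dir/alias"    # hard link: no new inode, no new block
stat -c '%i %h' "$dir/original"    # inode number and link count (now 2)
stat -c '%i %h' "$dir/alias"       # same inode, same link count
rm -r "$dir"
```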
The upshot is that I can actually create what looks like millions of files in this 4GB filesystem with relative ease. Assuming a 36-byte file name, each hard link costs about 42 bytes per "file". After counting the overhead of creating half a million directories, this leaves enough space for... 71 million files. Give or take.
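The arithmetic behind that estimate can be sketched in the shell, using the ~42 bytes-per-entry figure from above; the numbers are illustrative, and the directories' own blocks and inodes still have to be subtracted to land near the 71 million figure:

```shell
# Back-of-envelope: how many ~42-byte directory entries fit in the
# data blocks of the 4GB image, before directory overhead.
blocks=3600000              # 1k blocks in the filesystem
bytes=$((blocks * 1024))
per_entry=42                # approx. cost of one hard link in a directory
echo $((bytes / per_entry)) # ~87 million raw entries; overhead brings it down
```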
So, assuming a similar structure, I should be able to create 500 million files in roughly 30 GB of space.
Of course, they will all have the same unexciting contents, but hey, at least the filenames can be entertaining or insightful! What this might be useful for, if anything, is left as an exercise for the reader...
Note: The mke2fs options above are not optimal for this exercise, as it turns out most of my directory entries ended up being relatively large, and I didn't need nearly that many inodes. Better values would have been more efficient, lowered overhead and ultimately reduced the number of disk seeks.