It appears that blk0001.dat, the file where bitcoin stores block chain data, is compatible across Windows and Linux, and across 32-bit and 64-bit builds.
Therefore, why not save new users some time by shipping blocks 1-74000 with each release?
Presumably, indexing and verifying a local file would be faster, and use fewer network resources, than downloading all those blocks via P2P.
It’s not the downloading that takes the time; it’s the verifying and indexing.
Bandwidth-wise, it’s more efficient than downloading an archive: bitcoin only downloads the data in blk0001.dat, which is currently 55MB, and builds the 47MB blkindex.dat itself. Building blkindex.dat is what causes all the disk activity.
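For context, blk0001.dat is just a flat sequence of records, each prefixed by the 4-byte network magic and a 4-byte little-endian length, followed by the serialized block. A minimal sketch of scanning that layout, assuming the main-net magic bytes (the counting logic here is illustrative, not bitcoin's own code):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    const unsigned char MAGIC[4] = {0xF9, 0xBE, 0xB4, 0xD9}; // main-net magic bytes
    FILE* f = fopen("blk0001.dat", "rb");
    if (!f) { perror("blk0001.dat"); return 1; }
    unsigned char hdr[8];
    long nBlocks = 0;
    while (fread(hdr, 1, 8, f) == 8) {
        if (memcmp(hdr, MAGIC, 4) != 0)
            break;                        // lost sync or trailing garbage
        // 4-byte little-endian length of the serialized block that follows
        uint32_t nSize = hdr[4] | hdr[5] << 8 | hdr[6] << 16 | (uint32_t)hdr[7] << 24;
        fseek(f, nSize, SEEK_CUR);        // skip the block itself
        ++nBlocks;
    }
    printf("blocks found: %ld\n", nBlocks);
    fclose(f);
    return 0;
}
```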
During the block download, it only flushes the database to disk every 500 blocks. You may see the block count pause at ??499 and ??999. That’s when it’s flushing.
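A toy illustration of that cadence (the names here are made up, not bitcoin's; it just shows why the visible count sticks at ...499 and ...999 while a flush runs):

```cpp
#include <cstdio>

// Stand-in for the expensive Berkeley DB write that causes the visible pause.
void FlushToDisk(int nHeight) {
    printf("flushing index up to block %d\n", nHeight);
}

int main() {
    const int nTotal = 2000;
    for (int nHeight = 0; nHeight < nTotal; ++nHeight) {
        // ... verify block nHeight and update the index in memory ...
        if (nHeight % 500 == 499)   // heights 499, 999, 1499, ...: batch commit
            FlushToDisk(nHeight);
    }
    return 0;
}
```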
Doing your own verifying and indexing is the only way to be sure your index data is secure. If you copy blk0001.dat and blkindex.dat from an untrusted source, there’s no way to know whether you can trust their contents.
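As a sketch of what “doing your own verifying” means at the lowest level, here is a proof-of-work check on a raw 80-byte block header: double-SHA256 it and compare the result against the target encoded in its nBits field. This is only one of the checks bitcoin performs; it assumes OpenSSL for SHA-256 and a little-endian host:

```cpp
#include <openssl/sha.h>
#include <cstdint>
#include <cstring>

// hash = SHA256(SHA256(data)); bitcoin treats the 32 bytes as little-endian.
static void DoubleSha256(const unsigned char* data, size_t len, unsigned char out[32]) {
    unsigned char once[32];
    SHA256(data, len, once);
    SHA256(once, sizeof(once), out);
}

// Expand the compact nBits encoding into a big-endian 256-bit target.
static void TargetFromBits(uint32_t nBits, unsigned char target[32]) {
    memset(target, 0, 32);
    int nSize = nBits >> 24;               // exponent: target length in bytes
    uint32_t nWord = nBits & 0x007fffff;   // 3-byte mantissa
    for (int i = 0; i < 3; ++i) {
        int idx = 32 - nSize + i;
        if (idx >= 0 && idx < 32)
            target[idx] = (nWord >> (8 * (2 - i))) & 0xff;
    }
}

// True if the header's double-SHA256 is numerically <= its difficulty target.
bool CheckProofOfWork(const unsigned char header[80]) {
    unsigned char hash[32], target[32];
    DoubleSha256(header, 80, hash);
    uint32_t nBits;
    memcpy(&nBits, header + 72, 4);        // nBits: bytes 72..75, little-endian
    TargetFromBits(nBits, target);
    for (int i = 0; i < 32; ++i) {         // compare most significant byte first
        unsigned char h = hash[31 - i];    // reversed: hash bytes are little-endian
        if (h != target[i])
            return h < target[i];
    }
    return true;                           // equal to target counts as valid
}
```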
Maybe Berkeley DB has some tweaks we can make to enable or increase cache memory.
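One such tweak does exist: Berkeley DB’s DbEnv::set_cachesize sizes the page cache before the environment is opened, which should cut disk activity while blkindex.dat is being built. A minimal sketch (the directory, flags, and 25MB figure are illustrative, not a recommendation from this thread):

```cpp
#include <db_cxx.h>

int main() {
    DbEnv dbenv(0);
    // Ask for 0 GB + 25 MB of cache in one contiguous region before opening.
    dbenv.set_cachesize(0, 25 * 1024 * 1024, 1);
    dbenv.open(".", DB_CREATE | DB_INIT_MPOOL, 0);  // environment with a memory pool
    // ... open blkindex.dat against this environment and build the index ...
    dbenv.close(0);
    return 0;
}
```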
https://bitcointalk.org/index.php?topic=1931.msg24438#msg24438