Re: latency and locality

What I’m suggesting doesn’t exist yet. There was related discussion of similar issues, on the thermodynamic perversity of generating blocks. With just one central node, the system could generate a transaction block in a fraction of a second. If you wanted, it could still do so only once every ten minutes, but it would never need more than a fraction of a second of CPU time on a single processor.

The reason the real system needs so much more than that is purely consensus.

So if you had two nodes, you could have them both redundantly capture all transactions, and then in a fraction of a second each could generate a block. They could then exchange block hashes, and if the hashes matched, they would have consensus. If they didn’t match, you could reach consensus in one of two ways: 1) compare the transactions of each block one by one and create a block consisting of the union of all transactions, or 2) just pick one of the blocks and go with it. The second is what bitcoin actually does now.
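A minimal sketch of that two-node reconciliation, assuming transactions are plain strings and ignoring the networking entirely (the function names are mine for illustration, not anything from the bitcoin codebase):

```python
import hashlib
import random

def block_hash(transactions):
    """Digest of a canonically ordered transaction set."""
    digest = hashlib.sha256()
    for tx in sorted(transactions):
        digest.update(tx.encode())
    return digest.hexdigest()

def reconcile_by_union(block_a, block_b):
    """Option 1: the consensus block is the union of both blocks."""
    return sorted(set(block_a) | set(block_b))

def reconcile_by_coin_flip(block_a, block_b):
    """Option 2: just pick one block and go with it. This is what
    bitcoin does, except its coin flipper is proof-of-work."""
    return random.choice([block_a, block_b])

block_a = ["alice->bob:5", "carol->dave:2"]
block_b = ["alice->bob:5", "erin->frank:1"]
if block_hash(block_a) != block_hash(block_b):
    consensus = reconcile_by_union(block_a, block_b)
```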

It just picks using the world’s most expensive coin flipper. You could accomplish the same task with a much cheaper coin flipper, and nothing in bitcoin would change one bit.

That is what I mean by “design decision”: it doesn’t change any of the requirements for the system’s features or behavior. It just implements things differently.

----

So when I talk about distributing the transaction graph throughout a distributed hash table, it is just a different possible (but currently non-coded) way of implementing bitcoin’s key feature: verifying the transfer of bitcoins from one address to another in a way that makes it impossible to double spend coins.

Optimally, if you had x,000 transactions per minute and 100 nodes, each node would only have to do 1/100th of the work of a single node handling x,000 transactions per minute. Each transaction only needs to be recorded once.

However, with the current bitcoin implementation, if you have 100 nodes, they all run at 100% CPU to process those x,000 transactions. If you add another 100 nodes, they all still run at 100% CPU doing exactly the same job. We used to call it “government work”: you could do the job with 5 guys, but instead they used 50, because more people working is better for the economy!

So the new design constraint I’m proposing is that, for a given number of nodes n, it should take O(1/n), or maybe O(log n), work per CPU to handle a given number of transactions. In this case, work means CPU time and network communication. My design modifications FAIL if they give up any of the transaction protections of the existing system.
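To make the constraint concrete, here is a toy back-of-the-envelope comparison; the transaction rate T and redundancy factor r are illustrative numbers I picked, not anything measured:

```python
def work_per_node_broadcast(T, n):
    """Current bitcoin: every node processes every transaction, so
    per-node work is O(T) no matter how many nodes you add."""
    return T

def work_per_node_dht(T, n, r=10):
    """Proposed: transactions are spread across n nodes with a fixed
    redundancy factor r, so per-node work is O(T * r / n)."""
    return T * r / n

T = 10_000  # transactions per minute (illustrative)
for n in (100, 200, 1_000):
    print(n, work_per_node_broadcast(T, n), work_per_node_dht(T, n))
```

Adding nodes leaves the broadcast column flat while the DHT column keeps shrinking, which is the whole point.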

----

It turns out that the block list itself is not required to validate transactions; only the transactions are required. So to understand what I’m suggesting, you have to think about an equivalent bitcoin implementation without a monolithic block list containing every transaction in the history of the system.

Instead, the transactions are redundantly scattered across all the nodes. No node need keep a complete list of all transactions, but every node must be able to quickly retrieve any transaction on demand: optimally in O(1) time, but O(log n) time would probably suffice. That reduces the storage requirement of any given node to O(1/n).

Transactions are mapped to nodes by their out-points. For a transaction with two out-points, you generate two unique out-point identifiers by hashing the transaction+out-point information. You then store the transaction redundantly on the five “closest” nodes to each out-point ID. That means that in a system with 10,000 nodes, each transaction is stored with 10X redundancy instead of 10,000X redundancy. (The 10X was an arbitrary choice; the real redundancy would be based on the DHT algorithm and the characteristics of the node population.)
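A sketch of that mapping, assuming a Kademlia-style XOR distance between 256-bit IDs (the helper names are mine, and the node IDs are random stand-ins):

```python
import hashlib
import random

def outpoint_id(tx_hash: str, index: int) -> int:
    """Hash the transaction + out-point information into a 256-bit DHT key."""
    data = f"{tx_hash}:{index}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def closest_nodes(key: int, node_ids, k=5):
    """The k node IDs nearest the key under XOR distance (Kademlia-style)."""
    return sorted(node_ids, key=lambda nid: nid ^ key)[:k]

node_ids = [random.getrandbits(256) for _ in range(10_000)]
tx_hash = "ab" * 32            # hypothetical transaction hash
storage_set = set()
for index in (0, 1):           # the transaction's two out-points
    storage_set.update(closest_nodes(outpoint_id(tx_hash, index), node_ids))
# ~10 copies total: 5 closest nodes per out-point, 2 out-points.
```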

Now each of those 10 nodes holds that out-point data completely privately, unless another node can show a “need to know”. In this case, a need to know is demonstrated by submitting a signed transaction that includes the given out-point as an in-point. The storage node then stores the new transaction and returns any transactions it already knows that reference the same out-point. If a previous transaction in-point was already associated with the out-point, the second transaction is a double spend.
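Here is how a storage node’s need-to-know bookkeeping might look; `StorageNode` and `submit` are hypothetical names for this sketch:

```python
class StorageNode:
    """One of the ~5 nodes responsible for a given out-point ID.
    Records spends and answers need-to-know queries."""

    def __init__(self):
        # out-point ID -> list of signed transactions spending it
        self.spends = {}

    def submit(self, outpoint, spending_tx):
        """A signed transaction naming this out-point as an in-point
        demonstrates need to know. Store it, and return every
        previously seen spend of the same out-point."""
        previous = list(self.spends.get(outpoint, []))
        self.spends.setdefault(outpoint, []).append(spending_tx)
        return previous  # non-empty => the new transaction is a double spend
```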

So to verify any new transaction, you send it to the five closest nodes for each of the transaction’s in-points (that is, the nodes storing the out-points those in-points reference). They record the transaction and immediately tell you if they’ve seen a double spend. If any have, it’s a bogus transaction, which gets broadcast to the other close nodes.
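And the client-side verification loop, continuing the sketches above (it reuses the hypothetical `outpoint_id` and `closest_nodes` helpers and the `StorageNode` class):

```python
def verify_transaction(tx, node_ids, nodes):
    """Send the new transaction to the five closest nodes for each
    out-point its in-points reference; any previously recorded spend
    means this (or the earlier) transaction is a double spend."""
    for referenced_outpoint in tx["inpoints"]:  # (tx_hash, index) pairs
        key = outpoint_id(*referenced_outpoint)
        for nid in closest_nodes(key, node_ids):
            prior = nodes[nid].submit(referenced_outpoint, tx)
            if prior:
                return False  # bogus: broadcast to the other close nodes
    return True
```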

What I’m suggesting also increases anonymity, because you no longer have to broadcast every transaction you make to the world, and random individuals can’t go poking around in your business.

There are many ways to map bitcoin transactions to DHTs; I just chose an example that is easy to explain. There may be other possibilities that offer improvements, but this one is sufficient to get the point across.

There is also a fun technology called a “dining cryptographers network” that could further improve some of the anonymity aspects of bitcoin.
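For the curious, a dining-cryptographers round is simple enough to sketch in a few lines. This toy version transmits one bit among n parties without revealing which party sent it, assuming honest participants and a secret coin shared by each adjacent pair:

```python
import secrets

def dc_net_round(num_parties, sender_index, message_bit):
    """One round of a dining-cryptographers net: coin i is shared by
    parties i and i+1; each party announces the XOR of its two coins,
    and the sender additionally XORs in the message bit. The XOR of
    all announcements reveals the bit but not who sent it."""
    coins = [secrets.randbits(1) for _ in range(num_parties)]
    announcements = []
    for i in range(num_parties):
        a = coins[i] ^ coins[(i - 1) % num_parties]
        if i == sender_index:
            a ^= message_bit
        announcements.append(a)
    result = 0
    for a in announcements:
        result ^= a  # every coin appears twice and cancels out
    return result

assert dc_net_round(3, sender_index=1, message_bit=1) == 1
```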

Once you get away from a system where each node’s influence is proportional to its CPU power, what else do you use to determine who is (approximately) one person?


https://bitcointalk.org/index.php?topic=723.msg8103#msg8103