Re: Not a suggestion


Yes, I am talking about the hypothetical system.

The way I proposed the system, each time a block gets generated every validating node must accept or reject that block by validating the transactions and confirming the hashes in the block. In effect, the same work that is being done with the current system, plus the out-point hash checks. Since the other validators were already competing to generate the block, they already have (at least most of) the transactions.

As with the current system, if the transactions don’t validate (plus match included out-point hashes) the other nodes will reject the block. If the block doesn’t get acceptance by at least 50% of the CPU power, it doesn’t make the block list.

So the presence of the hashes in the block list signifies that at least 50% of the existing validators at that time saw and validated all the contained transactions and out-point hashes.
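The acceptance rule described above can be sketched in a few lines. This is an illustrative toy only; the names (`validate_block`, `unspent_outpoints`, the dict layout) are hypothetical, not from any real implementation, and real validation would also check signatures and amounts:

```python
# Toy sketch of the per-block validation step: a node accepts a block
# only if every transaction's inputs name out-point hashes that the
# node currently considers unspent. All structure names are hypothetical.

def validate_block(block, unspent_outpoints):
    """Return True if every input in every transaction of the block
    refers to an existing, unspent out-point hash."""
    for tx in block["transactions"]:
        for inp in tx["inputs"]:
            # The out-point hash must match something still unspent;
            # otherwise the whole block is rejected.
            if inp["outpoint_hash"] not in unspent_outpoints:
                return False
    return True
```

Under the proposal, a block only enters the block list once validators representing a majority of CPU power have run the equivalent of this check and accepted it.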

Therefore (barring hash collisions) if someone submits an antecedent transaction that matches an unspent out-point, it must be valid.

That antecedent’s antecedent must have been valid as well, otherwise the antecedent would have been rejected. And so on and so on.

For that not to be the case, you have to postulate that there was a period in time where blocks weren’t being validated against out-point hashes. But that’s implausible with the CPU competition system.


If a client joined the network recently, it did so presuming that prior validators followed the rules and all pre-existing blocks are valid. (No one would join a known corrupt network)

Sure, in the current system, if transactions were never purged, a new node could validate all prior blocks for self-consistency. But they still couldn’t prove absolute truth. A botnet could have taken over and erased some transactions, leaving “a new truth” and unhappy users. Equivalent to case 1) above.

In the current system, if transactions were Merkle tree purged then you have case 2) above. Newcomers must trust in the process. Anything missing, they don’t need to worry about. Everyone must presume it was valid.

The unique thing I’m saying is that, if you have confidence in the bitcoin validation competition process (and we do!), then you really don’t need “a 2) thoroughly deep block” to be very deep at all. Someone said in another thread that clients reject any changes to blocks more than two hours old. So we can have absolute confidence in all blocks buried 12 deep.

So if a transaction is unspent and buried 12 deep, we can purge all its ancestors. They add warm fuzzies but no additional validation. We have to rely on them. There is simply no way to back up and change course.
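The pruning rule above can be expressed as a small depth check. A minimal sketch, assuming a hypothetical node that tracks each out-point’s block height; the threshold of 12 comes from the two-hour finality argument in the text, and the function names are illustrative:

```python
# Sketch of the "purge ancestors once 12 deep" rule described above.
# All names are hypothetical.

PRUNE_DEPTH = 12  # blocks of burial after which history is considered final

def prunable_ancestors(chain_height, outpoint_height, ancestors):
    """If the out-point is buried at least PRUNE_DEPTH blocks deep,
    return its ancestor set as safe to purge; otherwise return an
    empty set, meaning keep the history for now."""
    depth = chain_height - outpoint_height + 1
    if depth >= PRUNE_DEPTH:
        return set(ancestors)  # final: nothing can reorg this out-point
    return set()               # only 'relatively' unspent; keep ancestors
```

The design point is that pruning is purely a local storage decision: no other node needs to agree on when you purge, because validation never looks past the unspent out-point itself.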

After that, every succeeding block presumes all the preceding blocks are true. Otherwise it would be a fork and not a succeeding block. So for any transaction validated against out-points in a preceding block, if those out-points exist and are unspent, they must be presumed valid. If those are presumed valid, their ancestors must be presumed valid even if purged.

In the proposed system, exactly the same things are true.

If an antecedent out-point hash is unspent and buried 12 blocks deep, then it is absolutely unspent. Nothing can change that fact. No point in checking its ancestors. You can finish validating the transaction, cancel the in-point hashes and create new out-point hashes.

Interestingly, if an antecedent out-point hash is unspent and buried LESS THAN 12 blocks deep, then it is RELATIVELY unspent. Curiously, there is still no point in checking its ancestors. The only thing that could change the antecedent’s validity is a branch swap to a longer chain. If an ancestor of the antecedent you are validating this transaction against was swapped out, this transaction would be swapped out as well.

It’s one of those cheesy time machine movie plots. Someone went back in time and spent my ancestor. Now I don’t exist!


So what I’m saying is that in BOTH systems (existing and proposed) the only thing validators need to do is to validate that the antecedent out-points exist and are unspent (for the current block chain). The process assures that everything else remains relatively or absolutely valid.

The rest is just warm fuzzies.

— PS —

I know this is too long and redundant, but I’m too tired to edit. 🙂

I’m not grasping your idea yet.  Does it hide any information from the public network?  What is the advantage?

If at least 50% of nodes validated transactions enough that old transactions can be discarded, then everyone saw everything and could keep a record of it.

Can public nodes see the values of transactions?  Can they see which previous transaction the value came from?  If they can, then they know everything.  If they can’t, then they couldn’t verify that the value came from a valid source, so you couldn’t take their generated chain as verification of it.

Does it hide the bitcoin addresses?  Is that it?  OK, maybe now I see, if that’s it.

Crypto may offer a way to do “key blinding”.  I did some research and it was obscure, but there may be something there.  “group signatures” may be related.

There’s something here in the general area:

What we need is a way to generate additional blinded variations of a public key.  The blinded variations would have the same properties as the root public key, such that the private key could generate a signature for any one of them.  Others could not tell if a blinded key is related to the root key, or other blinded keys from the same root key.  These are the properties of blinding.  Blinding, in a nutshell, is x = (x * large_random_int) mod m.
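The arithmetic named at the end of that paragraph can be shown with a toy example. This is a numeric sketch of multiplicative blinding only, under assumed parameters (the modulus `M` and function names are mine, chosen for illustration); a real scheme would blind elliptic-curve points or use a proper blind-signature protocol, not bare integers:

```python
# Toy sketch of the blinding idea: blinded = (key * large_random_int) mod m.
# Illustrates the arithmetic only; not a secure key-blinding scheme.
import secrets

M = 2**127 - 1  # a Mersenne prime modulus, picked purely for illustration

def blind(key, r=None):
    """Multiply the key by a random factor mod M. An observer who
    doesn't know r can't link the blinded value back to key."""
    if r is None:
        r = secrets.randbelow(M - 2) + 2  # random r in [2, M-1]
    return (key * r) % M, r

def unblind(blinded, r):
    """The holder of r inverts the blinding by multiplying with
    the modular inverse of r."""
    return (blinded * pow(r, -1, M)) % M
```

Because M is prime, every nonzero r has an inverse mod M, so the owner can always recover the original value, while each fresh r yields an unlinkable blinded variation. Extending this so the one private key can also *sign* for every blinded variation is the open part the paragraph is asking for.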

When paying to a bitcoin address, you would generate a new blinded key for each use.

Then you need to be able to produce signatures such that you can’t tell that two signatures came from the same private key.  I’m not sure if always signing a different blinded public key would already give you this property.  If not, I think that’s where group signatures come in.  With group signatures, it is possible for something to be signed but not know who signed it.

As an example, say some unpopular military attack has to be ordered, but nobody wants to go down in history as the one who ordered it.  If 10 leaders have private keys, one of them could sign the order and you wouldn’t know who did it.
