What is the logic behind the way block proof is calculated (in GetBlockProof)?


Bitcoin Core uses the GetBlockProof() function to determine a block’s contribution to the total work of the current chain, i.e. nChainWork in CBlockIndex. I’m having trouble understanding the logic in this function.

Let’s ignore the arithmetic gymnastics done there to avoid overflowing uint256. The only part we need is the beginning of the comment:

// We need to compute 2**256 / (bnTarget+1)
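
For context, the rest of the function, as I recall it from Bitcoin Core’s src/chain.cpp (paraphrased from memory, so the exact code may differ between versions), goes roughly like this:

    arith_uint256 GetBlockProof(const CBlockIndex& block)
    {
        arith_uint256 bnTarget;
        bool fNegative;
        bool fOverflow;
        // Decode the compact nBits representation into the full 256-bit target.
        bnTarget.SetCompact(block.nBits, &fNegative, &fOverflow);
        if (fNegative || fOverflow || bnTarget == 0)
            return 0;
        // We need to compute 2**256 / (bnTarget+1), but we can't represent 2**256
        // as it's too large for an arith_uint256. However, as 2**256 is at least
        // as large as bnTarget+1, it is equal to
        // ((2**256 - bnTarget - 1) / (bnTarget+1)) + 1, or ~bnTarget / (bnTarget+1) + 1.
        return (~bnTarget / (bnTarget + 1)) + 1;
    }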

The question is simply: why?

If I wanted to calculate a block’s difficulty contribution to the chain, my (perhaps naive) reasoning would be: the lower the block’s hash value, the higher its contribution. While the equation mentioned in the comment achieves this, it does not do so linearly, but rather in f(x) = 1/x fashion, and I don’t understand why. Again, my (perhaps naive) implementation would simply be:

~arith_uint256(0) - UintToArith256(block.GetBlockHash())

Which basically means: the lower the block’s hash, the greater its contribution to the difficulty. And if for some reason it’s bad to use the block hash (I’d appreciate an explanation of why, if that’s the case), then a similar calculation can be done with just the target:

~arith_uint256(0) - bnTarget
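
Put together as complete functions (hypothetical code, just to make the question concrete, not anything that exists in Bitcoin Core; the function names are mine), the two variants would look something like:

    // Hypothetical alternative, NOT what Bitcoin Core does: score a block
    // linearly by how far its hash falls below the maximum 256-bit value.
    arith_uint256 GetBlockProofLinearFromHash(const CBlockIndex& block)
    {
        // 2**256 - 1 - hash: the lower the hash, the larger the score.
        return ~arith_uint256(0) - UintToArith256(block.GetBlockHash());
    }

    // Same idea applied to the target instead of the block hash.
    arith_uint256 GetBlockProofLinearFromTarget(const CBlockIndex& block)
    {
        arith_uint256 bnTarget;
        bool fNegative;
        bool fOverflow;
        bnTarget.SetCompact(block.nBits, &fNegative, &fOverflow);
        if (fNegative || fOverflow || bnTarget == 0)
            return 0;
        // 2**256 - 1 - target: the lower (harder) the target, the larger the score.
        return ~arith_uint256(0) - bnTarget;
    }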

Can someone explain why this particular division was chosen?


