I still don't understand how this makes sense in realistic, logical terms 😅🙏
Here's the breakdown.
import os
# 32 random bits, drop the low 2 to keep a 30-bit value
high_precision_estimated_range = int.from_bytes(os.urandom(4), 'big') >> 2
# force bit 30, so the prefix always lands in [2**30, 2**31)
high_precision_estimated_range |= 1 << 30
# append 40 bits of zeros / ones to get the lower and upper bounds
print(
    'High precision estimated range: '
    f'0x{high_precision_estimated_range:08x}0000000000'
    f' : 0x{high_precision_estimated_range:08x}ffffffffff'
)
Okay, I understand now, that seems logical 🤪🤪🤪
I think I've approached all of this the wrong way.
I'm offering a 0.1 BTC bounty for a formal proof of any traversal method that provides a statistical edge over a linear scan for puzzle 69. By statistical edge I mean that the new traversal method, run over a statistically significant number of executions, requires significantly fewer checks (let's put the threshold at 5% fewer) to find the key.
Conditions:
- It has to be written using mathematical semantics, not "where does John live" metaphors.
- It has to be empirically validated with a Python or NodeJS script (a minimal sketch of the kind of harness I mean follows after this list).
- The first person to post it in this thread will be the recipient of the bounty.
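To make the validation requirement concrete, here is a minimal sketch of the harness I have in mind. Names like candidate_traversal are placeholders for whatever method is being proposed, and the toy 16-bit range just stands in for a real puzzle range:

import random

def linear_scan(lo, hi, target):
    # count how many checks a plain linear scan needs before hitting the target
    for checks, k in enumerate(range(lo, hi + 1), start=1):
        if k == target:
            return checks

def candidate_traversal(lo, hi):
    # PLACEHOLDER: yield keys in whatever order the proposed method visits them;
    # a random permutation has no edge at all - replace it with the real method
    keys = list(range(lo, hi + 1))
    random.shuffle(keys)
    yield from keys

def count_checks(traversal, lo, hi, target):
    for checks, k in enumerate(traversal(lo, hi), start=1):
        if k == target:
            return checks

LO, HI = 1 << 15, (1 << 16) - 1   # toy 16-bit range standing in for a puzzle range
TRIALS = 1000                     # a statistically significant number of executions

linear_total = method_total = 0
for _ in range(TRIALS):
    target = random.randint(LO, HI)   # a uniformly random secret key each run
    linear_total += linear_scan(LO, HI, target)
    method_total += count_checks(candidate_traversal, LO, HI, target)

print('avg checks, linear scan:', linear_total / TRIALS)
print('avg checks, candidate  :', method_total / TRIALS)
print('relative edge          :', 1 - method_total / linear_total)

A relative edge above 0.05 (5% fewer checks), holding up over repeated runs, is what I'm calling a statistical edge.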
What about a low-bit puzzle, like puzzles 47-53? Maybe a lower bounty prize 🙏 😅
Yes, puzzle 47 can be brute-forced quickly, but the point would just be to prove that the method works, because in a bigger puzzle the "thing" being observed also has to be bigger 🙏
For example, take puzzle 49. To make my method accurate on a small-bit puzzle, it would require hashing 36 bits about 200-300 times to reduce the search area to 6.25% (a 4-bit reduction, since 2^-4 = 6.25%), leaving 44 bits to hash.
In the case of puzzle 69 I can't gather the data to prove it... and I don't know yet whether it will be as accurate as on the small-bit puzzles 😅🙏... because to make my method work, and possibly accurate, it would require hashing 56 bits roughly 450-600 times to reduce the search area to 6.25% (a 4-bit reduction), and you still need to brute-force the remainder.
So while it can reduce the area to 6.25%, the total effort is still about 1/5 of brute-forcing the whole 68-bit range 🙃🙏.
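Taking those figures at face value, the back-of-envelope arithmetic does come out around 1/5. This is just a sketch of my reading of the numbers above, assuming "hash 56 bits N times" means N full passes over a 56-bit space, not a validation of the method:

# Puzzle 69: 68-bit effective range
full_scan_69 = 2 ** 68                  # brute-forcing the whole range
reduction_69 = 600 * 2 ** 56            # ~450-600 passes hashing a 56-bit space (upper figure)
remaining_69 = 0.0625 * 2 ** 68         # 6.25% of the range still left to brute-force
print('puzzle 69:', (reduction_69 + remaining_69) / full_scan_69)  # ~0.21, roughly 1/5

# Puzzle 49: 48-bit effective range with the smaller figures
full_scan_49 = 2 ** 48
reduction_49 = 300 * 2 ** 36            # ~200-300 passes hashing a 36-bit space
remaining_49 = 2 ** 44                  # 6.25% of 2^48, i.e. the remaining 44 bits
print('puzzle 49:', (reduction_49 + remaining_49) / full_scan_49)  # ~0.14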
Maybe in the future I'll discover a more interesting method... for now it's still not efficient 😅✌️... because to make it work, my method needs comparisons, from small bias vs big bias, and several data points with big and small samples combined, in order to first observe the overall behaviour around it 🙏🙏.
If it doesn't get that, the whole method won't be accurate...
As in my debate with kTimesG about the beach and the sand... what if I say: if you gather not one scoop but lots of small scoops plus another big scoop (nearly 1/4 of all the sand on the beach), and some combinations of them, you might "estimate" the big relative picture...
It is not about some magical formula or anything like that, because the behaviour of each answer point relative to its question's keyspace area is always different...
What I mean is that it is not as simple as "whatever works in one puzzle will work in another puzzle"...
And I want to say that finding some traversal method that provides a statistical edge, simply works on any puzzle, and can be validated with Python seems a bit impossible, because the whole behaviour under any specific condition is always different, as far as I can observe...
Or maybe I'm wrong, time will tell 🙏