A completely new class of method for computing discrete logarithms: http://web.archive.org/web/20250725043122/https://cr.yp.to/dlog/cuberoot-20120919.pdf
This paper seems to be about a specific case, but in reality the method is generic. They talk about "small" discrete logarithms only in the same sense that Pollard rho's complexity is too high to handle large discrete logarithms.
Victor Shoup showed that no generic discrete logarithm solving method can perform better than about x^(1/2) group operations for a group of order x. This is indeed the complexity of Pollard kangaroo and Pollard rho. But it has also been shown that a generic algorithm with precomputation can do no better than about x^(1/3), which means the lower bound to break full-sized secp256k1 is far less than the estimated 2^128 security.
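To put numbers on those exponents for secp256k1 (group order about 2^256), here is a quick sketch in Python; these are just the asymptotic exponents from the discussion above, ignoring constant and o(1) factors:

```
n_bits = 256                                # secp256k1 group order is about 2^256

sqrt_bound = n_bits / 2                     # generic bound without precomputation: x^(1/2)
cube_bound = n_bits / 3                     # bound with precomputation: x^(1/3)

print(f"x^(1/2) bound: ~2^{sqrt_bound:.0f} group operations")   # ~2^128
print(f"x^(1/3) bound: ~2^{cube_bound:.1f} group operations")   # ~2^85.3
```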
This paper is indeed diving into that class of methods: faster solving at the expense of memory and storage.
Has anyone turned its mathematical description into an implementation? Yes. That paper is the very basis of everything I was talking about numerous times when saying that the DLP can be solved much faster.
You can also see it in practice whenever you hear anyone talking about precomputed data.
Note that reaching the 1/3-exponent complexity also requires doing the 2/3-exponent pre-work, so for secp256k1, if you want to reach that lower bound, you first have to do around 2**170 group operations (and also store a very large amount of data, depending on the desired DP frequency; in any case, much, much more than all the storage drives in existence combined could hold).
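Rough numbers behind that statement, sketched in Python; the per-entry size and the worldwide-storage figure are assumptions for illustration only, not exact values:

```
n_bits = 256
precompute_log2 = n_bits * 2 / 3            # ~2^170.7 group operations of pre-work
table_entries_log2 = n_bits / 3             # ~2^85.3 table entries at that tradeoff

bytes_per_entry = 40                        # assumption: point tag + walk distance per entry
table_bytes = (2 ** table_entries_log2) * bytes_per_entry
world_storage_bytes = 2e22                  # assumption: rough order of all drives worldwide

print(f"pre-work: ~2^{precompute_log2:.1f} group operations")
print(f"table:    ~{table_bytes:.1e} bytes, "
      f"~{table_bytes / world_storage_bytes:.0e} times all storage in existence")
```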
And another thing: the 1/3 + 2/3 split refers to an optimal tradeoff between precomputation effort and per-key solving effort, because there is nothing (except memory and time limits) stopping anyone from computing the full log table, storing it, and solving any key in a single O(1) lookup. And nothing stopping anyone from computing, let's say, half of the full log domain and solving any key in 2 steps. And so on and so forth.
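A minimal sketch of that extreme end of the tradeoff, on a toy group (NOT secp256k1, where this is obviously infeasible): precompute the whole log table once, then every discrete log afterwards is a single dictionary lookup.

```
import random

p = 65537          # prime modulus; 3 is a known primitive root mod 65537
g = 3
n = p - 1          # order of the multiplicative group

# One-time precomputation: O(n) work, O(n) storage.
table = {}
acc = 1
for i in range(n):
    table[acc] = i          # maps g^i mod p -> i
    acc = acc * g % p

# Any discrete log in this group is now a single lookup.
x = random.randrange(n)     # secret exponent
h = pow(g, x, p)            # public element
assert table[h] == x
print("recovered log:", table[h])
```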
No, because as far as I understand, in the case of
http://web.archive.org/web/20250725043122/https://cr.yp.to/dlog/cuberoot-20120919.pdf the table has to grow roughly with the square of the speedup you want (equivalently, the speedup grows with the square root of the table size). And anyway, the challenge here does involve computing several discrete logarithms, so reusing the precomputation would be worthwhile compared to sticking to plain Pollard kangaroo, wouldn't it?
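For intuition on that table/speedup relation, here is a back-of-envelope Python sketch using the commonly cited heuristic for kangaroo/rho walks with a table of T precomputed distinguished points: per-log cost of roughly 2*sqrt(l/T) for an interval of size l. The interval width and table sizes below are assumptions for illustration.

```
import math

l = 2 ** 134                # assumed interval width (puzzle #135 range)
plain = 2 * math.sqrt(l)    # rough kangaroo cost without any table

for T in (2 ** 20, 2 ** 30, 2 ** 40):       # assumed table sizes (distinguished points)
    per_log = 2 * math.sqrt(l / T)          # heuristic per-log cost with a table of size T
    print(f"T = 2^{int(math.log2(T))}: per-log ~ 2^{math.log2(per_log):.1f} ops, "
          f"speedup ~ 2^{math.log2(plain / per_log):.1f} (= sqrt(T))")
```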
If we reduce the range of puzzle 135 to 10^38 and use the Kangaroo algorithm with the specified public key, how long will it take to obtain the private key?
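For a rough sense of scale: the usual kangaroo estimate is about 2*sqrt(N) group operations for an interval of width N. The throughput figure below is a hypothetical assumption, not a measurement of any real setup.

```
import math

N = 10 ** 38                     # reduced interval width from the question
ops = 2 * math.sqrt(N)           # expected kangaroo cost: ~2*sqrt(N) group operations
rate = 1e9                       # assumption: 10^9 group operations per second
seconds = ops / rate

print(f"~{ops:.1e} group operations expected")
print(f"~{seconds / (3600 * 24 * 365):.0f} years at {rate:.0e} ops/sec")
```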
You can reduce the key to 40 bits too, but for this you don't need to use math: you could create math, and all is ok.
I don't understand what you mean. Could you explain how to create mathematics?
Simple answer
You and most users and developers on this forum follow math books written within the last 100 years, and today most of us apply those formulas in the digital world. Just imagine that it's not the end: you can still create your own math, write a book, and also apply it in our development.
The two most workable areas: first, finding odd/even for the private key;
second, finding the key in an exact bit range.
There is no probability in the above two cases, and it's possible.
You just need to act and think like a scientist.