    Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 324900 times)
    kTimesG
    September 06, 2024, 06:57:43 PM
     #5741

    First:
    Quote
    In the "herd" case, you can't really call self-collisions useless, since you had two branches that, before reaching a common DP, individually produced different other DPs up to that point. So, not really fruitless.

    It's like I have to ask a lot of questions first when talking with you. Are you talking about any DP, or just DP 0? I normally never talk in terms of DP 0, because it's unrealistic for the range I am working in (the one this thread is about, the challenge puzzle wallets, the 130-bit wallet).

    Yeah, I can call them fruitless. You assume that every branch found DPs before reaching a common DP.

    DP 32 - If thread 1 started at a random point xx and finally found its first DP after 2^30 "jumps", and thread 2 started at point xx2 and landed on the same DP after 2^32 jumps, then yes, thread 2's first jumps out of the gate were fruitless. It did not find any DPs prior to finding the same one as thread 1.

    ... is the method you speak of using that, or a variant of it, to avoid fruitless cycles/paths? Or does it not bother or care about fruitless paths at all?

    You forgot the fact that the "fruitless" one started from a known position that (if it is not taken randomly) somehow represents a portion of the interval (with respect to the probabilities, which are equal). If you choose starting points randomly, then the analysis of what "fruitless" means will end up being random as well. What exact thought process are you applying when you compute the probability of success, when the points are taken at random?

    The only paper I've read so far that did this was Bernstein's precomputation research, and even that one states that "we are not claiming this to be optimal". And even in that case, it was specifically used for precomputing lots of DPs when the problem was "solve more than one DLP in the same range". So, do you have any actual paper or proof that using random start points results in a lower runtime?

    From the point of view of probabilities, one walk = one representative -> keep it running until the problem is solved. On collision with a buddy, take a step forward and continue; don't restart from some other beginning, because if you do that you are "stepping" over some other representative's walk, and don't be surprised if what you call "fruitless" (first DP = death, ouch) happens because of exactly this, behind the scenes.

    I hope that makes sense; the DP size is irrelevant for this explanation.
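
    If it helps to see the "one walk = one representative, step over buddy collisions" idea concretely, here is a minimal toy sketch. It is not an ECDLP solver: the "group" is just integers mod a small prime, and N, DP_BITS and the jump table are made-up illustration values. It only shows walks that record their DPs and, when they hit a DP owned by another walker, take one step forward and keep walking instead of restarting somewhere else.
    Code:
    # Toy sketch only (plain integers mod a small prime, NOT an ECDLP solver).
    # N, DP_BITS and the jump table below are made-up illustration values.
    import random

    N = 1_000_003            # toy "group" size
    DP_BITS = 6              # a point is distinguished if its low 6 bits are zero
    JUMPS = [1 << k for k in range(8)]   # deterministic jump table

    def is_dp(x):
        return x & ((1 << DP_BITS) - 1) == 0

    def next_point(x):
        # jump chosen from the current point only, so two walks that ever
        # land on the same point follow the exact same trail afterwards
        return (x + JUMPS[x % len(JUMPS)]) % N

    def walk(start, steps, dp_table, walker_id):
        """One walk = one representative: record every DP reached; on hitting a
        DP already owned by a buddy, step forward once and keep going (no restart)."""
        x = start
        stepped_over = 0
        for _ in range(steps):
            if is_dp(x):
                owner = dp_table.get(x)
                if owner is not None and owner != walker_id:
                    stepped_over += 1       # buddy collision: step over it
                    x = next_point(x)
                    continue
                dp_table[x] = walker_id
            x = next_point(x)
        return stepped_over

    dp_table = {}
    rng = random.Random(1)
    for wid in range(4):
        hits = walk(rng.randrange(N), 50_000, dp_table, wid)
        print(f"walker {wid}: stepped over {hits} buddy DPs, table holds {len(dp_table)} DPs")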

    Quote
    No "max travel limit" of any sorts, no random starting points mentioned anywhere. Not sure who and why came up with those ideas.
    What do you mean, no one has ever mentioned the above?? Are you speaking of a specific variant, saying that none of the variants mention it, or just that van Oorschot's variant doesn't mention it?

    The Gaudry-Schost algorithm mentions both. I thought van Oorschot and Wiener's did too, but I'd have to reread theirs.

    I mean that no one has mentioned using random starting points as the strategy. What is unclear? Or do you have an actual paper by someone who actually recommended doing this for some reason?...

    Gaudry-Schost IS NOT kangaroo, and has nothing to do with it. It relies on the birthday paradox, so obviously it wants uniform random sampling from the set. I thought the difference would be clear; they don't really have anything in common.
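
    For the birthday-paradox part, a quick toy check (toy numbers, nothing puzzle-specific and not Gaudry-Schost itself): sample uniformly at random from a set of size N until the first repeat, and compare with the sqrt(pi*N/2) estimate that square-root collision searches are built on.
    Code:
    # Sanity check of birthday-paradox scaling under uniform random sampling.
    import math, random

    def samples_until_repeat(n, rng):
        seen = set()
        while True:
            x = rng.randrange(n)
            if x in seen:
                return len(seen) + 1
            seen.add(x)

    rng = random.Random(42)
    N = 1 << 20
    trials = [samples_until_repeat(N, rng) for _ in range(200)]
    avg = sum(trials) / len(trials)
    print(f"measured mean: {avg:.0f}   birthday estimate sqrt(pi*N/2): {math.sqrt(math.pi * N / 2):.0f}")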

    Off the grid, training pigeons to broadcast signed messages.