Does this mean anything? I don't know -- but it tells me *immediately* that the problem requires a different kind of analysis than what you would do for Momentum! For example, what if you started by sampling lg(n) edges and then growing and pruning that set through 42 rounds? With some reasonable probability, you'd extend into one of the cycles. Not guaranteed, but it doesn't have to be guaranteed - I can always start over.
To have 50% odds of catching at least one L-cycle edge, you need to sample about 1.5% of all edges.
log(n) doesn't cut it.
To quote myself back: "Not guaranteed, but ... can always start over." In other words, you can settle for a 20% chance if you're willing to run the sampling test a few times, if it saves you enough memory.
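For concreteness, here's the arithmetic behind both figures under the simplest model (one L-cycle in the graph, each edge sampled independently) -- a back-of-the-envelope sketch, not anything either of us has analyzed carefully:

```python
# Back-of-the-envelope only: assume the graph holds one L-cycle and each edge
# is sampled independently with probability p, so the chance of catching at
# least one cycle edge is 1 - (1 - p)**L.
L = 42

def sampling_fraction(target_odds):
    """Smallest per-edge sampling probability giving `target_odds` of
    hitting at least one of the L cycle edges."""
    return 1.0 - (1.0 - target_odds) ** (1.0 / L)

print(sampling_fraction(0.5))   # ~0.016: the "about 1.5%" figure
print(sampling_fraction(0.2))   # ~0.005: a 20% chance needs ~0.5% of edges,
                                # at the price of ~5 restarts on average
```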
Remember, part of the game is that you have to take the entire context of the search process into account: an attacker can change the random seed as well, at some modest cost. The definition of the problem is to find *a* cycle, not *the* cycle. Again -- time/memory. There are a lot of different ways to try to exploit such a tradeoff. This gets back to the issue I emailed you about: searching for a biased initial distribution. Neither of us knows whether or not it would work.
Not at all. The primes have an entirely different goal. The point is that you're advertising Cuckoo Cycle as having a certain set of desirable properties. Someone shopping for a PoW wants a certain set of properties - some people want the social/scientific good of primes.
So you're saying I mismarketed Cuckoo Cycle. I could have said:
"Let's have a PoW based not on number-theoretical, but on graph-theoretical structures. Existence of a certain small-size subgraph in a large random graph makes for an easily verifiable PoW. We could look for cliques, or for ... cycles!"
Then you'd be fine with it?!
Your writeup of Cuckoo Cycle, as of Mar 28 09:28, said:
"We introduce the very first proof-of-work system that is both verify-trivial and tmto-hard."
Period. There were no qualifiers attached to that statement. No conjectures, no conditionals, until section 10, where you note that it's a conjecture.
Yes - if you said "We introduce the first Proof-of-Work scheme based upon graph-theoretical structures. We hypothesize but have not proved that this PoW may lead to certain memory-hardness properties" I'd be totally fine with it, because it would be an accurate description of what you've done.
Yes. It's non-trivial to "store this bit vector in a compact way" but it can be done. See the SDSL link I provided, for example.
In the first round of full U and V trimming, you pass through a state where the bit vector has about half 0s and half 1s. At that point it's completely incompressible. Even at a 10% vs 90% distribution (which takes at least two full rounds to reach), compression would buy you very little in practice.
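To put rough numbers on that (back-of-the-envelope, and only an upper bound): the binary entropy of the bit density is a hard floor on any lossless encoding, so even the 10%/90% state caps out at roughly a 2x saving, before accounting for the random access the trimming loop needs.

```python
# Shannon bound on compressing the live-edge bit vector (illustrative only).
from math import log2

def H(d):
    """Binary entropy: the minimum bits per position for a vector whose
    bits are 1 with probability d, regardless of the encoder used."""
    return -d * log2(d) - (1 - d) * log2(1 - d)

print(H(0.5))   # 1.00 -- the half-0s/half-1s state: literally incompressible
print(H(0.1))   # ~0.47 -- even at 10%/90%, the ceiling is barely a 2x saving
```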
ok - one last spoonful and I'm done:
Create a bitvector representing the first half of the nonces. Call it the "lower half".
Iterate on this bitvector, making *multiple* passes over the full nonce range, until you have eliminated every candidate in the "lower half" bitvector that you can -- judging each one against the surviving lower-half candidates and against all candidate nonces in the "upper half" (which you never eliminate, because you don't have the memory to track them).
Then, in piecewise fashion, identify a size-appropriate subset of the specific incident edges on the lower half that are causing problems -- in other words, the nonces in the "upper half" whose collisions in the "lower half" prevent you from discarding some of the lower-half entries.
For that small subset (of size << N/2 -- say N/8), do the same process: eliminate them relative to your now-sparsified lower half and the complete upper half. That should let you eliminate more of the lower-half edges. Repeat until the lower half is sparse enough to represent in reasonable compressed space, then move on to the upper half.
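A rough sketch of what I mean, in code -- five more minutes of thought, not a real attack. It assumes a hypothetical edge(nonce) -> (u, v) oracle standing in for the siphash mapping, and it counts node degrees naively; doing that counting in bounded memory is exactly the part an expert would have to do better:

```python
# Conceptual sketch of half-at-a-time trimming; not tuned, not bounded-memory.
from collections import Counter

def trim_lower_half(N, edge, passes):
    half = N // 2
    lower_live = bytearray([1]) * half   # one byte per lower-half nonce, standing in for one bit

    for _ in range(passes):
        # Degrees over every edge still treated as live: surviving lower-half
        # edges plus ALL upper-half edges, since we keep no state for those.
        deg = Counter()
        for n in range(half):
            if lower_live[n]:
                u, v = edge(n)
                deg["U", u] += 1
                deg["V", v] += 1
        for n in range(half, N):
            u, v = edge(n)
            deg["U", u] += 1
            deg["V", v] += 1

        # A lower-half edge with a degree-1 endpoint cannot lie on a cycle.
        killed = 0
        for n in range(half):
            if lower_live[n]:
                u, v = edge(n)
                if deg["U", u] == 1 or deg["V", v] == 1:
                    lower_live[n] = 0
                    killed += 1
        if killed == 0:
            break   # nothing more to gain against the full, untrimmed upper half

    # Next, per the steps above: pick an N//8-sized subset of upper-half nonces
    # whose endpoints collide with surviving lower-half edges, trim those the
    # same way against the sparsified lower half, and repeat.
    return lower_live
```

The point of the sketch is only the control flow: the upper half is never stored, yet the lower half still sparsifies against it.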
Is it awesome? No. Have I thought about it for more than 5 minutes? No. Could an expert come up with a drastically better algorithm?
Probably.
Stop trying to defend your algorithm for a few minutes and start trying to attack it. Your goal in this should not be to try to refute individual objections, because that process can go on forever, fruitlessly. Refute categories of objections based upon sound reasoning and theoretical analysis of the difficulty of the underlying problem!
I'm done.