Your "metahash" method of measuring work, though more accurate than the probabilistic methods of gavinandresen and tcatm, is also your method's major weakness. In order to check a portion of a client's work, you have to duplicate it. This will not scale well to even tens of clients. An attacker submitting bad metahashes to earn coins without really doing the work will simply shut down when found out and start a new client on a new IP.
You would have to exclude new clients until a certain proportion of their results had been checked, and then it becomes a probability game in which attackers falsify only some of their results, perhaps an increasing proportion over time.
Obviously you don't compute every single metahash sent by the clients. You hash them periodically, and when an erroneous one is found, or you have a suspicious client, you check more of them.
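That spot-checking idea can be sketched roughly like this (Python; the double-SHA-256 work hash follows Bitcoin, but the function names, the 2% sample rate, and the submission tuple layout are illustrative assumptions, not the pool's actual code):

```python
import hashlib
import random

def work_hash(header: bytes, nonce: int) -> bytes:
    """One proof-of-work attempt: double SHA-256 of header + nonce, as in Bitcoin."""
    data = header + nonce.to_bytes(4, "little")
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def metahash(header: bytes, start: int, end: int) -> bytes:
    """Hash over every work hash in a nonce range.  Recomputing this
    duplicates the client's work, which is why checking is expensive."""
    m = hashlib.sha256()
    for nonce in range(start, end):
        m.update(work_hash(header, nonce))
    return m.digest()

def spot_check(submissions, sample_rate=0.02):
    """Recompute only a random fraction of submitted metahashes.

    `submissions` is a list of (header, start, end, claimed_metahash).
    Returns False as soon as one checked metahash fails to match."""
    for header, start, end, claimed in submissions:
        if random.random() < sample_rate:
            if metahash(header, start, end) != claimed:
                return False  # one wrong hash anywhere changes the whole metahash
    return True
```

With a low sample rate the server duplicates only a small fraction of the work, at the cost of catching cheaters probabilistically rather than immediately.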
More seriously, you would exclude clients (and servers) that are not 100% reliable. I know from Mersenne prime testing that some computers occasionally produce bad results. You can run normal software for years without noticing an error rate of one mistake in every 2^40 operations, but your metahash would be very sensitive to such errors. People running genuine clients on slightly imperfect machines would get annoyed. Note that these imperfect machines are fine for normal hash generation or for probabilistic hash rate calculation.
Erroneous results would be a factor for any method of client verification. The solution is to allow a certain amount of error.
You seem to think that the probabilistic hash rate measurement schemes are insufficiently accurate. You might wish to do some calculations to convince yourself otherwise. There's already a lot of unavoidable, unfair randomness in the amount of computation required to produce a new block.
Indeed, there is a lot of randomness involved, causing the block generation rate to vary widely. Smoothing out a client's reported hash rate would require averaging over an unacceptably long period. That unfairly penalizes clients who wish to hash for short periods of time, requires complex calculation logic, and at best yields rough estimates.
The second issue is that each miner would now need to check the calculated hash against the target after every hash. This is another operation the client must perform and it will slow down generation.
Eh? This happens anyway! How else do you tell if you've got a winning hash?
You compare one byte of the hash, and only if that byte is 0 do you fully check the hash against the target.
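The cheap pre-filter described there can be sketched as follows (Python; the target value and the big-endian comparison are simplifying assumptions for illustration — Bitcoin itself stores hashes little-endian and uses a lower difficulty-1 target):

```python
# Hypothetical target: any hash value at or below this wins (smaller = harder).
TARGET = (1 << 224) - 1

def check_hash(h: bytes) -> bool:
    """Cheap single-byte pre-filter before the full 256-bit comparison.

    With a target this low, the leading byte of any winning hash must be
    zero, so one byte compare rejects 255 of every 256 candidates."""
    if h[0] != 0:
        return False                           # fast path: cannot be a winner
    return int.from_bytes(h, "big") <= TARGET  # full check, rarely reached
```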
What's to stop a client from lying? They simply generate a few hashes just to send and tell the server, "These are the best hashes I came up with, honest."
Nothing. But then the inferred hash rate is very low. To clarify - the hash rate is calculated from the quality and/or number of hashes and NOTHING ELSE. The client doesn't say that it's done a certain amount of work - just the hashes matter.
That's exactly how it is now.
The server creates the block, which obviously doesn't include transactions spending the coins generated by that same block: no such transactions can exist yet, because no one except the server knows about the block.
Ah. I thought you and I were thinking along the same lines: an elegant payment method that requires no trust and cannot be scammed, but which is unfortunately forbidden by a fairly inessential and inelegant Bitcoin rule.
You need to go into more detail about how payment works. The solutions I can think of require the client to trust the server and/or can be scammed.
ByteCoin
The server is capable of sending the block with all its transactions to the client for verification. It does not do so now, but the code would not be difficult to add; there is actually a comment in the code about adding this feature later. That way the client can verify they will get their share. If you would like more detail, the code is the best source of detail you can get.
I bet you'd get a good approximation of hash rate if clients submitted their best (highest difficulty) hash every N minutes. Over a period of a few hours the average of all of those best hashes should be proportional to the client's hash rate (unless a client were somehow repeatedly very lucky or unlucky, but that would be extremely unlikely).
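That estimator can be sketched with a back-of-the-envelope calculation (Python; this assumes hashes behave as uniform 256-bit values, in which case the expected minimum of n draws is roughly 2^256 / (n + 1) — the function name is hypothetical):

```python
def estimate_hashrate(best_hash: bytes, seconds: float) -> float:
    """Infer hashes/second from the best (numerically smallest) hash a
    client found during one reporting interval.

    If n uniform 256-bit hashes are drawn, the expected minimum value is
    about 2**256 / (n + 1), so the best hash alone implies n.  A single
    interval is noisy; averaging many intervals smooths the estimate."""
    value = max(int.from_bytes(best_hash, "big"), 1)
    n_estimate = (1 << 256) / value - 1
    return n_estimate / seconds
```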
What's to stop a client from lying? They simply generate a few hashes just to send and tell the server, "These are the best hashes I came up with, honest." In reality the client is spending the rest of the time trying to generate his own block. There is absolutely no way to verify that the client is not lying this way. With the metahash approach, you can verify every individual hash a client has reported solving. If they are lying, even about 1 of those hashes, you will know because the metahash doesn't match.
Maybe you don't get what he's saying. Consider a target hash of, say, 10 characters, where more 0's at the front is better.
A Bitcoin block needs 9 zeros to earn the 50-coin reward.
You ask for their best result every 10 minutes. If they work for 1 minute on an average machine, they give you a result with 2 zeros. If they work for the whole 10 minutes, they give you a result with 4 zeros. All the results they are looking for would be based on the current hash, so there is NO wasted work. They just store their best result in the client for the current hash and send it when the server requests it.
Switch back to the way Bitcoin works, with a difficulty factor of 1398. Someone trying to cheat sends you back a hash worth a difficulty of 1. A real client sends back a difficulty of 7. Someone on a GPU machine sends back a difficulty of 190.
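Those difficulty numbers come from dividing the difficulty-1 target by the hash's value, which the server can do in one cheap operation per reported hash (Python; the simplified target of 2^224 is an assumption — Bitcoin's actual difficulty-1 target is slightly lower):

```python
def difficulty_of(h: bytes, diff1_target: int = 1 << 224) -> float:
    """Difficulty 'worth' of a single hash: how many times harder than a
    difficulty-1 share it was to find.  Smaller hash value => higher worth."""
    value = max(int.from_bytes(h, "big"), 1)
    return diff1_target / value
```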
A better way to do it would be to request their best proof of work for each block. You could take the time each block took and the best hash they were able to find (which the server can verify very quickly) and come up with a formula for what share they would get. The formula might include smoothing out the high peaks and lows.
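One hypothetical instance of such a formula, splitting each block's reward in proportion to each client's best proof-of-work difficulty (Python; the proportional rule and all names here are assumptions for illustration, not the poster's actual scheme):

```python
def allocate_shares(block_reward: float, best_difficulty: dict) -> dict:
    """Split one block's reward in proportion to each client's best
    proof-of-work difficulty for that block.

    `best_difficulty` maps client id -> difficulty of that client's
    best reported hash (an assumed, simple proportional rule)."""
    total = sum(best_difficulty.values())
    if total == 0:
        return {client: 0.0 for client in best_difficulty}
    return {client: block_reward * d / total
            for client, d in best_difficulty.items()}
```

A real formula would likely average over several blocks to smooth out luck, as suggested above.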
It may be the most complicated method... but so far it sounds to me like the most accurate way to get a relatively honest answer while remaining relatively hack-proof, all while generating an appropriate potential block.
How do they hack finding the best hash? If they are finding a stronger hash, they very well could be finding an actual block. Nothing is wasted in this scenario.
I understand what this suggestion is, but I don't see it as a reliable method. It assumes you will reach a specific difficulty at a specific hash rate in a set amount of time, and we all know that doesn't happen. How do you factor in unlucky clients? What about the lucky ones? The best you can do is average over an unacceptably long period to smooth out the hash rate. I think this method is too imperfect and could too easily penalize or over-reward clients. I certainly wouldn't want to be the client penalized because I couldn't generate a good hash in a given amount of time.