Anonymous (nobody@replay.com)
Wed, 8 Jul 1998 23:23:07 +0200
After sacrificing two messages to the remailer net, the gods of Replay are
still not appeased; neither message has reached CodherPlunks. Here goes a
third try...
This method minimizes the impact of subtly-biased distillation routines
(because the distilled output is used only to estimate entropy, never to
generate the key) and artificially-introduced particles (because it's harder
to manipulate the hash), and might even be sufficiently conservative to
appease some of the hecklers of the list.
1 / Analyze the hardware device used to generate raw data to figure out
what biases, obvious and subtle, its unfiltered output would contain.
2 / Using those results, design a simple distillation routine whose output
has no biases complex enough to interfere with entropy estimation (no bit
interdependencies, no offset dependency, etc.). A toy sketch of steps 1
through 3 follows the list.
3 / Do the counting/math to figure out how much entropy is in the
distilled data.
4 / Use the time spent collecting data and the amount of entropy in the
distilled output to figure out how much data collection would be required
to produce _at_least_ one bit of entropy (a worked example follows the
list). This should give a good lower bound on the rate at which hashing
would collect entropy (I say lower bound because many distillation methods
-- "the lowest few bits" et cetera -- throw out lots of entropy that
hashing doesn't).
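Here is the promised toy sketch of steps 1 through 3, in Python. Everything
in it is illustrative rather than the poster's actual method: the raw source
is assumed to hand back integer samples (say, readings from a free-running
timer), and the keep-the-least-significant-bit distiller stands in for
whatever routine step 2's analysis actually justifies.

    import math
    from collections import Counter

    def bit_bias(samples):
        # Step 1, crudest possible check: the fraction of 1s in each
        # sample's least-significant bit.  0.5 means no simple bias; a
        # real analysis would also hunt for serial correlation and other
        # subtle structure.
        lsbs = [s & 1 for s in samples]
        return sum(lsbs) / len(lsbs)

    def distill(samples):
        # Step 2: keep only the least-significant bit of each sample, so
        # the output has no offset dependency or inter-bit structure to
        # confuse the entropy count.  This throws away entropy, which is
        # fine -- we want a conservative estimate, not key material.
        return [s & 1 for s in samples]

    def entropy_per_bit(bits):
        # Step 3: Shannon entropy, in bits per distilled bit, estimated
        # from the observed symbol frequencies.
        n = len(bits)
        return -sum((c / n) * math.log2(c / n)
                    for c in Counter(bits).values())

Multiplying entropy_per_bit(distill(samples)) by the number of distilled
bits gives the total measured entropy that step 4's ratio needs.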
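And the promised worked example for step 4, with invented numbers: say 10
seconds of sampling yields 4000 distilled bits that measure out at 0.3 bits
of entropy each. That's 1200 bits of entropy in 10 seconds, or about 8.3
milliseconds of collection per bit of entropy. Since the toy distiller kept
only one bit per sample and discarded the rest, hashing the raw stream can
only gather entropy faster, so 8.3 ms/bit is a safely pessimistic figure to
carry into step 5.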
Now, whenever you want randomness...
5 / Use step 4's time:entropy ratio, the key length, and any other
relevant _non-secret_ information to figure out how long you need to
collect entropy, then double the figure to compensate for a varying rate
of entropy collection. Then collect the raw, unfiltered data and hash it.
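A sketch of step 5, again with illustrative names: read_raw_sample is a
hypothetical callable returning raw bytes from the hardware source, the
128-bit key length is just an example, and SHA-256 stands in for whatever
strong hash you trust.

    import hashlib
    import time

    def gather_key(read_raw_sample, seconds_per_entropy_bit, key_bits=128):
        # Collection time comes from step 4's calibrated ratio and the
        # key length, doubled per step 5 to ride out a varying entropy
        # rate.  All of these inputs are non-secret.
        deadline = time.monotonic() + 2.0 * key_bits * seconds_per_entropy_bit
        pool = bytearray()
        while time.monotonic() < deadline:
            pool.extend(read_raw_sample())  # raw, unfiltered samples
        # Hash the whole raw pool down to key material.
        return hashlib.sha256(bytes(pool)).digest()[:key_bits // 8]

Note that the distillation routine never touches the pool: it exists only
to calibrate seconds_per_entropy_bit; the key itself comes from hashing the
raw data.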