Vin McLellan (vin@shore.net)
Tue, 23 Mar 1999 03:56:53 -0500
This is an intriguing first-person account of the first day of AES2,
the Rome Conference on the candidates to replace DES as the Advanced
Encryption Standard (AES), that was posted to sci.crypt by Dianelos
Georgoudis, one of the designers of FROG, itself one of the AES prospects. Full of
interesting information and thought-provoking asides. The NIST website
offers most of the (often fascinating!) AES2 papers at:
<http://csrc.ncsl.nist.gov/encryption/aes/round1/conf2/aes2conf.htm>.
_Vin
PS. Unfortunately, there doesn't seem to be any further info about Ron
Rivest's RC6(a) cipher on NIST's AES2 website. According to Georgoudis,
RC6(a) is apparently a variation on Rivest's RC6 block cipher (itself an AES
candidate) that Rivest presented at the AES2 Conference. RC6(a) uses a
different key scheduler that requires only 25 bytes of RAM, and is also
reported to be twice as fast as RC6 in assembler, and five times as fast in
C. There's nothing about RC6(a) on Rivest's home page at MIT or on the RSA
website.
-------------------------------------------------
Subject: Live from the Second AES Conference
Date: Tue, 23 Mar 1999 01:19:57 GMT
From: dianelos@tecapro.com
Organization: Deja News - The Leader in Internet Discussion
Newsgroups: sci.crypt
Here is my report on the first day of the AES2 in Rome. Lots of
participants, 180+ people from (according to NIST) 23 countries.
About half the speakers today were not native English speakers, so
this is indeed an international process. 28 papers were submitted
(all are at NIST's site: aes.nist.gov) and 21 will be presented
here.
First some general observations about today: A very full schedule
with 13 presentations and a rump session with another 15 or so
short talks, which finally concluded at 9pm. Many surprises
and, frankly, some confusion right now. Almost all candidate
ciphers were tarnished one way or another - and there was a surprisingly
strong showing by Rijndael (in the negative sense that very little
negative was said against it). Certainly many of the speakers were
frankly biased, but this is normal in a competition. Please keep
in mind that I am biased too. Here I express what I understood
from the proceedings as well as my personal opinions, both of
which may not be correct. Also, sometimes I only mention
approximate numbers. If you are really interested, read the papers
on NIST's site.
In the morning we had 8 presentations with surveys of the ciphers
- actually lots of measurements mainly about speed. Speed numbers
can be bewildering because there are so many parameters beyond the
cipher itself: hardware platform, OS, compiler, quality of
implementation, memory/speed trade-offs, key scheduling versus
encryption. Speed may be easy to measure, but I think its
significance is overrated (the advance of hardware technology will
double AES's effective speed every 18 months or so, and I don't
believe that worldwide speed requirements will grow that fast).
Even worse: different designers chose different margins of
security, and an evaluation process obsessed with speed
would penalize those who, probably wisely, chose to be more
conservative. Thus, Eli Biham proposed normalizing the
candidate algorithms according to the number of rounds
known not to be secure. This view also has its detractors.
In the afternoon smart card implementations were discussed. First
efficiency estimates (both speed and space) were shown. After that
came three very interesting presentations about attacks against
smart card implementations. These are real world, practical
attacks. IBM's Pankaj Rohatgi explained how he got all 128 bits of
a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
card!
Now some more information about the presentations:
First, NIST's Jim Foti explained that they did a lot of speed
measurements on different platforms. On the reference platform the
fastest algorithms were Crypton, Rijndael, RC6, MARS, Twofish, and
E2. The algorithms with the fastest key scheduling are: Crypton,
Magenta, Safer+, E2, RC6, Rijndael. Dead last, by three orders of
magnitude, is Frog, which is my own candidate. (I wanted to have a
complex scheduler and on my Pentium it used "only" one hundredth
of a second, so I thought that was good enough!) The rankings
change quite a bit depending on the particular speed measurement, but in
general the "fast" algorithms are Crypton, Mars, RC6, Rijndael
and Twofish.
The next paper was by Brian Gladman, the Briton who single-handedly
coded all 15 algorithms in C based only on their descriptions; he was
not present, however. His basic result is that for small, medium and
large blocks RC6, Rijndael, Mars, Twofish and Crypton are the
fastest.
Then came Bruce Schneier's presentation about speed. Again Bruce's
exposition was interesting and clear. He made a good point that
only assembly code speeds should be compared because otherwise you
measure a particular compiler's efficiency rather than the
cipher's. This is true, but in a networked world pure Java is
also going to be used as a platform-independent solution. He
mentioned that the CPU also affects relative speeds, especially
with ciphers like RC6 and MARS that use data-dependent rotations
and multiplications. (RC6 in particular took some beating for its
relative inefficiency both on 8-bit smart cards and later,
surprisingly, on next-generation CPUs.) Bruce mentioned that 8-bit
processors with little RAM will stay with us forever, because
they will be used more and more in embedded systems. Less
convincingly, he claimed that 32-bit processors will also stay
with us for a long time. I think the prevailing view is that
64-bit CPUs and beyond will dominate high-end personal
applications in the future. His final ranking: Twofish, Rijndael,
Crypton, E2, Mars for Pentiums; Rijndael, Crypton, E2, RC6,
Twofish for hashing.
Then came SUN's Alan Folmsbee who compared ciphers from the Java
perspective. He used two platforms: UltraSPARC and "MicroJava", a
new processor that directly executes Java bytecode. He measured
"excess avalanche", RAM size, ROM size and speed. Excess avalanche
tries to measure how conservative the choice of number of rounds
is. According to his particular weights (and he, at least, can be
considered impartial) the five best algorithms are: RC6, Mars,
Safer+, Serpent and Crypton. Guess who number six is? Frog. The
trouble is he did not even pretend to measure security as such and
only used the avalanche criterion - which is not supposed to be
synonymous with security. By the way, here Rijndael did not fare
very well, being one of the slower algorithms.
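For what it's worth, the avalanche idea itself is easy to demonstrate.
Here is a minimal sketch of a basic avalanche test (my own illustration,
with a toy placeholder standing in for a real candidate cipher): flip one
input bit, re-encrypt, and count how many output bits change; a
well-mixed 128-bit block cipher should change about 64 of them on
average.

    /* Sketch of a basic avalanche test (my illustration, with a
       placeholder "cipher"): flip one plaintext bit, count how many
       ciphertext bits change, and average over many trials. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK 16

    /* placeholder for a real candidate cipher's encrypt function */
    static void toy_encrypt(unsigned char out[BLOCK],
                            const unsigned char in[BLOCK])
    {
        unsigned char acc = 0;
        for (int i = 0; i < BLOCK; i++) {
            acc = (unsigned char)(acc * 31 + in[i] + 7);
            out[i] = acc;
        }
    }

    static int bit_diff(const unsigned char *a, const unsigned char *b)
    {
        int d = 0;
        for (int i = 0; i < BLOCK; i++)
            for (unsigned char x = a[i] ^ b[i]; x; x >>= 1)
                d += x & 1;
        return d;
    }

    int main(void)
    {
        double total = 0;
        int trials = 10000;
        for (int t = 0; t < trials; t++) {
            unsigned char p[BLOCK], p2[BLOCK], c1[BLOCK], c2[BLOCK];
            for (int i = 0; i < BLOCK; i++)
                p[i] = (unsigned char)rand();
            memcpy(p2, p, BLOCK);
            p2[rand() % BLOCK] ^= (unsigned char)(1 << (rand() % 8));
            toy_encrypt(c1, p);
            toy_encrypt(c2, p2);
            total += bit_diff(c1, c2);
        }
        printf("average output bits changed: %.1f of %d\n",
               total / trials, BLOCK * 8);
        return 0;
    }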
Serge Vaudenay was another designer presenting a survey. His
recommendation is Mars, RC6, Serpent, ... and DFC. He criticized
NIST for choosing ANSI-C for their reference platform. I tend
to agree: assembler and then Java for interoperability probably
makes more sense. His criteria were speed, transparency in the
design of the S-boxes (with some criticism for Serpent in that
respect) and simplicity. I liked his emphasis on simplicity in
algorithm design. He stated that a complicated (but insecure)
algorithm only increases the delay before somebody finds a
practical attack. His last criterion was "state of the art
research": HPC and Frog were deemed "empirical", with no reference
to any research results, and were unceremoniously thrown overboard
(I wonder what happened to Frog's splendid simplicity?). Deal and
DFC were judged to be based on a theoretical design, everybody
else on a mere heuristic design.
Then came Craig Clapp (who I believe participates in sci.crypt)
with a rather interesting paper on instruction level parallelism
of the candidates. He tried to measure what would happen on the highly
parallel processors projected for the future. For this he
actually used a compiler that produces code for hypothetical
processors with between one and ten pipelines. As expected, with
hypothetical processors that are similar to current designs RC6,
Mars and Twofish came out best. With higher levels of parallelism
though, Rijndael and Crypton were faster. By the way, in these
measurements he used Biham's notion of ciphers with "normalized"
security, i.e. sometimes fewer rounds than the standard
implementation.
After that, Eli Biham made his case for normalizing the algorithms
to an equal level of security before making speed measurements. So
he cut down each cipher's number of rounds to an "insecure" number, based
either on its authors' estimates or on his own estimate - and
then added two rounds. By "insecure" here is meant that an attack
is known with complexity roughly lower than exhaustive key search.
Now, of course, this is a lower limit; nobody really knows at what
number of rounds any of these ciphers become insecure in that
sense. Even so it is a good method because none of the candidates
should be allowed to be faster than that. If somebody finds a
better attack, then this would move up the minimum number of
rounds, resulting in a slower cipher. Now, he was criticized for
adding two rounds instead of increasing the number of rounds by a
constant factor. Anyway, in his final analysis (surprise!) Serpent
becomes the fastest normalized cipher. For all that, Eli Biham's
experience in cryptography is really vast and his basic point is
undoubtedly sound. I think it useful to copy his estimates here:
              --- original ---       normalized
Cipher        rounds    cycles    min. secure rounds    cycles
Serpent         32       1800            17               956
Mars            32       1600            20              1000
Rijndael        10       1276             8              1021
Twofish         16       1254            14              1097
Crypton         12       1282            11              1175
E2              12       1808            10              1507
RC6             20       1436            21              1508
CAST-256        48       2088            40              1740
DES            (with the NIST API it would fall more or less here)
Safer+           8       4424             7              3871
DFC              8       5874             9              6608
Deal             6       8950            10             14917
LOKI97          16       6376            38             15143
Magenta          6      23186            11             42508
Frog             8       2182             ?                 ?
HPC              8       2602             ?                 ?
(He does not try to estimate Frog's and HPC's minimal secure
rounds - a pity - but Wagner's paper on Frog estimates double the
number of rounds, which would put me in 10th place.)
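For the curious, the normalized column appears to follow from a simple
linear scaling, and a few lines of C reproduce it. This is my own reading
of Biham's method, assuming the cycle count scales proportionally with
the number of rounds:

    /* Rough sketch of the normalization as I read it: take the
       measured per-round cost and scale it by the "minimal secure
       rounds" count from the table above.  Assumes the cost is
       linear in the number of rounds, which ignores key setup. */
    #include <stdio.h>

    static long normalized_cycles(long cycles, int rounds,
                                  int min_secure_rounds)
    {
        /* cycles/rounds = cost of one round; multiply by the
           normalized round count, rounding to the nearest cycle */
        return (cycles * (long)min_secure_rounds + rounds / 2) / rounds;
    }

    int main(void)
    {
        /* e.g. Serpent: 1800 cycles at 32 rounds, 17 "secure" rounds */
        printf("Serpent  ~%ld cycles\n",
               normalized_cycles(1800, 32, 17));   /* ~956  */
        printf("Rijndael ~%ld cycles\n",
               normalized_cycles(1276, 10, 8));    /* ~1021 */
        return 0;
    }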
Next, Jim Foti explained some preliminary results of NIST's round
1 randomness testing. For that battery of tests he used NIST's own
statistical tools, Crypt-X (which I had never heard of before) and
Diehard. In general everybody made the grade, even though there
were some quirks with RC6 and HPC that are being interpreted as
statistical flukes while more testing is done.
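To give a flavor of what such randomness testing involves, here is a
minimal monobit (frequency) test in the spirit of those suites - my own
sketch, not NIST's actual tool:

    /* Minimal illustration of a monobit (frequency) randomness test.
       The statistic |#ones - #zeros| / sqrt(n) should be small
       (roughly below ~3) for output that looks random. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    static double monobit_statistic(const unsigned char *buf,
                                    size_t n_bytes)
    {
        long ones = 0;
        for (size_t i = 0; i < n_bytes; i++)
            for (int b = 0; b < 8; b++)
                ones += (buf[i] >> b) & 1;
        long n_bits = (long)n_bytes * 8;
        return fabs((double)(2 * ones - n_bits)) / sqrt((double)n_bits);
    }

    int main(void)
    {
        unsigned char buf[4096];
        for (size_t i = 0; i < sizeof buf; i++)
            buf[i] = (unsigned char)rand();  /* stand-in for cipher output */
        printf("monobit statistic: %f\n",
               monobit_statistic(buf, sizeof buf));
        return 0;
    }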
Next, thankfully, came lunch. I picked up one amazing piece of
information at the table (while eating pasta al dente and a literally
bleeding piece of meat). As it happens, I sat at the table with a
guy from the Federal Reserve. So I asked him if it was true that
every day more than a hundred billion dollars is transmitted by
wire. I explained that I had read this somewhere but that it
seemed an incredible amount. Well, he answered that every day more
than three trillion dollars are transferred by electronic means -
in other words, every five days an amount comparable to the USA's
annual economic output is turned into electrons.
After lunch, Prof. Koeune from Belgium, also a sometime
participant in sci.crypt, presented a paper on the implementation
of several AES candidates on smart cards. He chose two instances:
the low cost, 8-bit Intel 8051 and the advanced, 32-bit ARM with
1kB of RAM. The approximate results are as follows:
                      8051                ARM
Cipher       RAM (bytes)   cycles       cycles
E2               344         9K           2K
RC6              205        14K           1K
Rijndael          49         4K           2K
Twofish           49          ?          10K
For more up-to-date information visit the page of the cEASer
project: http://www.dice.ucl.ac.be/crypto/CAESAR/caesar.html
Geoffrey Keating presented a similar paper for the Motorola 6805,
also an 8-bit CPU. Best algorithms: Rijndael, Twofish, Crypton and
RC6. RC6 does need a lot of RAM (170-200 bytes), and later in the
evening Rivest presented RC6a, with a different key scheduler that
requires only 25 bytes of RAM and is also twice as fast in
assembler and five times as fast in C. (Rivest's middle name, of
course, is Merlin.)
Then came three presentations about attacks against smart cards,
no doubt orchestrated by NIST so that each paper made an even more
powerful impression than the one before.
The first paper was presented by Adi Shamir. I don't know whether
Shamir is a teacher, but if he is then he must be a very good one
indeed. First he explained about Paul Kocher's power attacks
against smart cards (explained in Kocher's home page
www.cryptography.com). The basic idea is to observe how much power
a smart card consumes while executing a cipher (this can be done
down to the level of microcode!). Every instruction has a power
footprint, and the "Hamming weight" of the data being processed
(i.e. the number of bits with value 1) also affects the power
consumption, since you need more power to charge up a register or
memory bit to one than to flush it down. So, if you write a value
with many one bits you will draw more power. This attack, by
the way, is non-invasive. Say you use a smart card and its
reader has been doctored by an attacker and transmits details
about the power consumption. Now, Shamir showed that it is not
necessary to have known plaintexts encrypted. By using data from many
different encryptions (different data but with the same key) and
comparing their power graphs you will find that there are places
where the power consumption strongly correlates. These are the
times when the cipher is doing work that is not dependent on the
variable data streams - i.e. when the cipher is doing its key
scheduling. The technique explained is slightly more complex, but
the point made was that DES could easily (actually especially
easily) be broken in this way. On the other hand, Skipjack, a
modern cipher produced by the US government, would be especially
hard to attack in this way.
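To make the Hamming-weight point concrete, here is a toy power model of
my own (nothing shown in the talk): the simulated power drawn when a
byte is written is just its number of one bits plus some noise.

    /* Toy illustration of the Hamming-weight power model: the
       simulated power drawn when a byte is written is proportional
       to the number of bits set to one, plus measurement noise.
       Real attacks correlate many such traces. */
    #include <stdio.h>
    #include <stdlib.h>

    static int hamming_weight(unsigned char v)
    {
        int w = 0;
        while (v) { w += v & 1; v >>= 1; }
        return w;
    }

    /* simulated power sample for writing one byte */
    static double power_sample(unsigned char v)
    {
        double noise = (rand() % 100) / 1000.0;   /* crude noise term */
        return 1.0 * hamming_weight(v) + noise;
    }

    int main(void)
    {
        printf("write 0x00: %.3f units\n", power_sample(0x00));
        printf("write 0xFF: %.3f units\n", power_sample(0xFF));
        return 0;
    }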
Next, Joan Daemen, one of the authors of Rijndael, presented
another comparative study about attacks against smart cards. He
differentiated between timing attacks (useful mainly against older
implementations of public key smart cards), power analysis
attacks, and Differential Power Analysis (DPA) attacks. The latter
is based on the fact that for some instructions the average power
consumption correlates with the value of one input bit. As
possible defense mechanisms he mentioned artificially generated
noise, but this can be cancelled out by using a larger sample of
cases. Another possibility is "balancing", an expensive
proposition where, for example, you use 4-bit arithmetic on an
8-bit smart card, taking care that the other 4 bits are always the
complement of the "true" 4 bits, i.e. maintaining a constant
Hamming weight. According to his analysis the algorithms that are
easiest to protect against such attacks are Crypton, DEAL,
Magenta, Rijndael and Serpent. The worst would be CAST-256, DFC,
E2, HPC, Mars and RC6. His conclusion was that these kinds of
implementation attacks are really much more relevant than the
academic attacks often discussed, because they can be implemented
in practice and do some real harm.
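As a rough sketch of the balancing idea (my own illustration, not
anything Daemen showed): store each 4-bit value in a byte whose low
nibble is the bitwise complement of the high nibble, so every stored
byte has Hamming weight exactly four and leaks nothing through it.

    /* Sketch of a "balanced" data representation: a 4-bit value x is
       stored as (x << 4) | (~x & 0xF), so every stored byte contains
       exactly four one bits. */
    #include <assert.h>
    #include <stdio.h>

    static unsigned char balance(unsigned char x)       /* x in 0..15 */
    {
        return (unsigned char)((x << 4) | (~x & 0x0F));
    }

    static unsigned char unbalance(unsigned char b)
    {
        return (unsigned char)(b >> 4);
    }

    int main(void)
    {
        for (unsigned char x = 0; x < 16; x++) {
            unsigned char b = balance(x);
            int w = 0;
            for (unsigned char t = b; t; t >>= 1) w += t & 1;
            assert(w == 4);                /* constant Hamming weight */
            assert(unbalance(b) == x);
        }
        printf("all 16 nibbles encode with Hamming weight 4\n");
        return 0;
    }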
This point was made dramatically clear by the next speaker, IBM's
Pankaj Rohatgi, who actually implemented the Twofish 6805
reference code on a ST16 smart card and was able to recover the
full 128-bit secret key using only about 50 independent block
encryptions. He showed how noise can be cancelled out in these
attacks and how he guessed the bits of the whitening. Whitening
XORs data bits with constant key bits, i.e. D <- D xor K. Using DPA
one can find out whether D retains its value (in which case K=0) or
is inverted (in which case K=1). In this way all of the whitening
key's bits can be found. In the case of Twofish, recovering
the whitening key allows the definition of a small number (10^6)
of candidate master keys, from which the correct key can be
derived. He surveyed all 15 candidates, declaring them all unfit.
The easiest to attack were judged to be Crypton, Deal, Loki-97,
Magenta, Rijndael, Safer+, Serpent and Twofish, where DPA needs to
be done only on very few rounds. Slightly harder would be ciphers
like CAST-256, DFC, E2, Mars and RC6. Hardest to attack would be
Frog and HPC, but only because they employ large key dependent
tables - which make them more difficult to implement in smart
cards in the first place. The moral here may be that the availability
of more RAM can make a smart card more secure. He mentioned some
possible defenses that do not work: adding noise or reordering the
sequence of operations. He made clear that even balancing would
not work, because this by definition is a technique where two
physically separate parts of hardware are used; these parts can
never be identical, so a correlation of bit values can always be
determined. He rather cryptically mentioned that secret-sharing
techniques, where one bit is split into several values, might have an
application here. I asked him whether a capacitor built into the
smart card and large enough to power several instructions
wouldn't effectively hide the detailed timing of the power
consumption. He said that a capacitor can easily be disabled, but
that this defense might work in a non-invasive threat model (the
most dangerous kind, where a smart card's key can be stolen
while the user is using the card in the normal manner).
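Here is a toy version of the whitening argument (my own sketch, not
Rohatgi's attack code): if the power drawn when D xor K is written
tracks the Hamming weight of the result, then comparing the average
power of traces grouped by a known data bit reveals whether K flips
that bit.

    /* Toy DPA sketch.  The device computes D ^ K for a secret byte K
       and we observe a "power" value proportional to the Hamming
       weight of the result.  For each bit i, comparing the average
       power of traces with D_i = 1 against those with D_i = 0 tells
       us whether K flips that bit: a positive difference means
       K_i = 0, a negative difference means K_i = 1. */
    #include <stdio.h>
    #include <stdlib.h>

    static int hw(unsigned char v)
    {
        int w = 0;
        while (v) { w += v & 1; v >>= 1; }
        return w;
    }

    int main(void)
    {
        const unsigned char K = 0xB5;      /* secret whitening byte */
        unsigned char recovered = 0;

        for (int bit = 0; bit < 8; bit++) {
            double sum[2] = {0, 0};
            int    cnt[2] = {0, 0};
            for (int t = 0; t < 2000; t++) {   /* 2000 random traces */
                unsigned char D = (unsigned char)rand();
                double power = hw(D ^ K)
                             + (rand() % 100) / 100.0;  /* leak + noise */
                int g = (D >> bit) & 1;
                sum[g] += power;
                cnt[g]++;
            }
            double diff = sum[1] / cnt[1] - sum[0] / cnt[0];
            if (diff < 0)                  /* bit was inverted by K */
                recovered |= (unsigned char)(1 << bit);
        }
        printf("recovered whitening byte: 0x%02X (true 0x%02X)\n",
               recovered, K);
        return 0;
    }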
After that came the rump session. A few highlights:
Doug Whiting described some efficiency estimates on Merced
(supposedly a secret future Intel CPU). Here are the results:
Cipher        Pentium II    Merced
RC6               250          620
Rijndael          283          170
Serpent           900          800
Twofish           258          230
RC6 (as mentioned before) actually seems to get worse. The same is
estimated to happen with Mars too.
Takeshi Shimoyama claimed that some S-boxes of Serpent were not
secure against high-order differential cryptanalysis. His
presentation was laced with a lot of jokes and laughter, and I
cannot judge how serious a problem (if any) this is for
Serpent.
David Wagner showed that there is a large set of equivalent keys
in HPC that can be found just by flipping one bit of the master key
and then trying again, about 500 times over. This would mean that HPC
is not a good algorithm for implementing hashes. Wagner, by the
way, will present the paper about weak keys in Frog tomorrow.
Rivest presented RC6a with the much more efficient key scheduler
variant (see above). He claimed that 8-bit chips "will go away",
but nobody really believed that he believed it himself.
Bruce Schneier made a case against the idea of choosing not one
but several "approved" AES algorithms. He used the argument that
cryptanalytic efforts should not be divided between several
ciphers because then each individual cipher would be less trusted.
I didn't quite understand this argument: all ciphers that make it
to the second round will be strongly analyzed anyway no matter
whether only one or more than one is finally declared the winner.
He mentioned that if there were two AES ciphers and you used one
bit of the key to choose one of them, then half the time you would
use the less powerful cipher. But one could also say that half the
time one would use the more powerful cipher. (Say, after the
second round cipher A is deemed the strongest, followed by cipher
B. Now suppose NIST chooses both as a standard and somebody in the
future finds an attack that works better against A than against B,
which would make B stronger than A.) Also, if there should be a
catastrophic failure of cipher A (a non-zero probability),
cipher B could easily be substituted just by fixing this one bit
in the key. He argued that multiple standard ciphers would be much
more expensive to implement. This is probably true but I think
this measurable cost should be balanced against the risk (a
non-zero probability) that a single AES may fail catastrophically
some time in the future when it is already broadly used. If that
should happen the cost would be extremely large. The possibility
of a catastrophic failure is really what should keep the NIST
people awake at night.
One wonders what is so wrong with declaring several good algorithms
for specialized environments. Actually, after today's presentation
about attacks against smart cards, somebody mentioned that a smart
card is a sufficiently different environment to justify a
specialized cipher. There is a large class of applications where
ciphers are implemented in software, where speed is not important,
and where security is important. In these cases one might choose
to combine several algorithms in series at no particular additional
cost.
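As a sketch of what combining ciphers in series might look like (with
toy placeholder functions, not real candidate code), the glue is only a
few lines:

    /* Sketch of cascading two ciphers in series (placeholder cipher
       functions, not real candidate code).  Each stage gets its own
       independent key; breaking the cascade requires breaking both. */
    #include <stdio.h>

    /* stand-ins for two different AES candidates */
    static void cipher_a_encrypt(unsigned char block[16],
                                 const unsigned char *key)
    {
        for (int i = 0; i < 16; i++)
            block[i] ^= key[i];                            /* toy round */
    }

    static void cipher_b_encrypt(unsigned char block[16],
                                 const unsigned char *key)
    {
        for (int i = 0; i < 16; i++)
            block[i] = (unsigned char)(block[i] + key[i]); /* toy round */
    }

    static void cascade_encrypt(unsigned char block[16],
                                const unsigned char *key_a,
                                const unsigned char *key_b)
    {
        cipher_a_encrypt(block, key_a);    /* first algorithm  */
        cipher_b_encrypt(block, key_b);    /* second algorithm */
    }

    int main(void)
    {
        unsigned char block[16] = "attack at dawn!";
        unsigned char key_a[16] = {1}, key_b[16] = {2};
        cascade_encrypt(block, key_a, key_b);
        printf("first output byte: 0x%02X\n", block[0]);
        return 0;
    }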
So who will make it to the second round? No idea.
-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
-----
Vin McLellan + The Privacy Guild + <vin@shore.net>
53 Nichols St., Chelsea, MA 02150 USA <617> 884-5548
-- <@><@> --