Re: Cleanup for cryptographic algorithms vs. compiler optimizations

From: Bernd Walter <ticso_at_cicely7.cicely.de>
Date: Sun, 13 Jun 2010 23:35:12 +0200
On Mon, Jun 14, 2010 at 02:20:26AM +1000, Andrew Milton wrote:
> +-------[ Bernd Walter ]----------------------
> | On Sun, Jun 13, 2010 at 05:44:29PM +0200, Dag-Erling Smørgrav wrote:
> | > Bernd Walter <ticso_at_cicely7.cicely.de> writes:
> | > > Amazing - this is one of the things which can get nasty if you try some
> | > > kind of microtuning.
> | > 
> | > Only if you break the rules.  Bad code is always bad, even if it
> | > sometimes works by accident.
> | 
> | To expect that function calls are replaced with other functions isn't a
> | very obvious rule.
> 
> Don't turn on compiler optimisation then. You're explicitly telling
> the compiler to make your code better/faster/smaller. Optimisation
> flags always come with the caveat that your code may not work exactly 
> the same...

This isn't helpful at all.
Yes, we can disable everything and end up with nothing at all, including
nothing unexpected.
In Germany we say "das Kind mit dem Bade ausschütten" (throw the baby
out with the bathwater).
In other words, you throw away all the good things just to catch a
single bad one.
Optimization isn't bad as such, but it influences more and more
things that could be considered safe before.
It is important to know which parts can be considered safe in the
future, and how that can be ensured.
And don't tell me anyone in the early/mid 90s ever expected
compilers to optimize at the level they do today.
LTO opened a very interesting new area, because before that compilers
never touched anything outside a single translation unit.
printf => puts isn't that amazing if you consider printf to be a puts
consumer whose format handling just shrinks to nothing, but I understood
Dag's point that this is not the last function substitution to expect.
Volatile is also a very strong mechanism, and often it is enough to
cast a variable to volatile in one specific context, as done in the
macro from this thread, instead of declaring it volatile for every
use, as was also suggested.
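A minimal sketch of that volatile-cast technique (wipe is a hypothetical name, not necessarily the macro from the thread): accesses through a pointer-to-volatile count as observable behavior, so the compiler must emit every store here, while the buffer stays an ordinary non-volatile object everywhere else.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Only this one access path is volatile, so the compiler may not elide
 * the stores, yet the caller's buffer need not be declared volatile
 * for any of its other uses. */
static void wipe(void *buf, size_t len)
{
    volatile unsigned char *p = buf;

    while (len--)
        *p++ = 0;
}
```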
Compiler optimization is even required in many cases to get decent
performance, or in small environments to make the code small enough to
fit into memory at all, but that doesn't mean it is what you want in
every case.
The wish to wipe sensitive data in crypto code is very reasonable,
but that doesn't mean disabling optimization is the way to go.
So far there is no safe way to use plain memset to wipe a whole
block of memory.
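The problem pattern can be sketched like this (use_key is a made-up example, not code from the thread): once key goes out of scope nothing conforming can read it, so the final memset is a dead store that the optimizer is entitled to delete, leaving the secret in memory.

```c
#include <assert.h>
#include <string.h>

/* The unsafe pattern: the final memset's result is never read, so
 * under dead-store elimination the compiler may drop the call
 * entirely and the key material can linger on the stack. */
static unsigned char use_key(void)
{
    unsigned char key[32];
    unsigned char out;

    memset(key, 0xA5, sizeof key);  /* stand-in for real key material */
    out = key[0];                   /* pretend use of the key */
    memset(key, 0, sizeof key);     /* dead store: may be optimized away */
    return out;
}
```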

Go back to the originating mail: the crypto code wasn't aware of this
problem, and this is a far more obvious optimization than function
substitution.
And I do believe that the programmers were clever people.
Alarming, isn't it?
Paranoid users might consider compiling their OS with -O0, but
I don't think that is the right way.
It is amazing how strong the influence of optimization is and how weak
the programmers' assumptions are.

This thread is about how to handle optimization in corner cases, not
about disabling it.

-- 
B.Walter <bernd_at_bwct.de> http://www.bwct.de
Modbus/TCP Ethernet I/O Baugruppen, ARM basierte FreeBSD Rechner uvm.
Received on Sun Jun 13 2010 - 19:35:29 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:40:04 UTC