Subject: Re: macros vs HOFs (was: O'Caml)
From: Erik Naggum <erik@naggum.no>
Date: 14 Sep 2002 01:24:20 +0000
Newsgroups: comp.lang.lisp
Message-ID: <3240955460519515@naggum.no>

* Bruce Hoult <bruce@hoult.org>
| I worked in stockbroking and finance companies for a decade, writing
| software dealing with stock and FX and government stock calculations.  I
| never had any trouble meeting specs, using FP.

  I have worked with Oslo Stock Exchange and related businesses since 1990,
  when I specified the protocol between the exchange and brokers' computer
  systems, and I have designed and specified several protocols since then.  We
  had several problems with software vendors who used floating-point to store
  numeric values.  It was, even back then, a well-established fact that people
  ran into problems when they chose to use floating-point for numeric values
  that were not, in fact, floating-point, but fixed-point.
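
  To see the difference at a Common Lisp prompt (a minimal illustration of my
  own, not anyone's production code), compare binary floating-point with exact
  rationals for a value that is really fixed-point:

    (+ 0.10d0 0.20d0)             => 0.30000000000000004d0
    (= (+ 0.10d0 0.20d0) 0.30d0)  => NIL   ; 0.3 has no exact binary representation
    (+ 1/10 2/10)                 => 3/10  ; exact rational arithmetic
    (* 100 (+ 1/10 2/10))         => 30    ; exactly 30 of the smallest unit

  An integer count of the smallest currency unit, or an exact rational, never
  drifts the way the binary approximation does.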

| You're clearly outside your area of expertise here.

  Your competence in assessing mine is also remarkably lacking.  That you keep
  on fighting is even more puzzling.  I think you should try to talk to someone
  who still cares about you about your incessant desire to make a fool of
  yourself when you are simply /factually/ wrong about something.

  How do you know that you have been meeting specs with floating point?  I
  think you are the kind of programmer who makes things work.  I am the kind
  of programmer who makes sure things do not fail.  What "works" for you is
  not even relevant to me.  There are sufficient problems with floating-point
  that it cannot be used in software that has to be exactly right all the time.
  It does not matter that you can detect when you are no longer exact, because
  you have to do something when that happens to become exact again.  You
  could give up when you run out of precision in your floating-point format,
  but that is generally not an acceptable option.  So you have to have a Plan B
  when this happens.  There may be good reasons to work with a Plan A and a
  Plan B, but during my long career as a programmer, I have seen one thing
  again and again that makes me desire not to rely on Plan B: It is almost
  never debugged properly because it is not expected to be used.  This is in
  sharp contrast to military-style contingency planning, where you rely on
  your contingency plan to be failsafe when your primary plan may fail.  I am
  not a fully trained paranoid (i.e., lawyer), but I believe that understanding
  the need for and nature of contingency planning is a requirement for anyone
  who teaches planning, and that is, in effect, what programmers teach their
  computers.
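
  To make the point about running out of precision concrete (an illustrative
  session, nothing more): once a double-float has used up its 53 bits, further
  detail is silently discarded, whereas exact integer arithmetic has no such
  failure point to plan around.

    (= (+ 1.0d0 1.0d-16) 1.0d0)  => T                    ; the small addend is lost
    (+ (expt 2 60) 1)            => 1152921504606846977  ; exact, no Plan B needed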

  By the way, if you want 53-bit integers with double-precision floating-point,
  why not go for the full 64-bit integers that you get even on 32-bit machines
  with C99's new `long long int´ type?  Or you could use the 80-bit floating-
  point that is used in the Intel Architecture.  Or perhaps the MMX registers.
  However, I would expect an implementation of a bignum library to make the
  most of the hardware.  If the implementation is not strong on bignums, that
  is, somewhat ironically, because bignums are also Plan B for most systems.
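
  The 53-bit ceiling, and the absence of any ceiling for bignums, is easy to
  see at the prompt (again, only an illustration):

    (= (+ (expt 2d0 53) 1d0) (expt 2d0 53))  => T    ; 2^53 + 1 is not representable as a double
    (= (+ (expt 2 53) 1) (expt 2 53))        => NIL  ; bignum arithmetic remains exact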

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.