Subject: Re: Short question: the letter n
From: rpw3@rigden.engr.sgi.com (Rob Warnock)
Date: 2000/04/27
Newsgroups: comp.lang.lisp
Message-ID: <8e98ot$44pp0$1@fido.engr.sgi.com>
Erik Naggum  <erik@naggum.no> wrote:
+---------------
| * "Scott L. Burson" <SBurson@my-dejanews.com>
| | This doesn't really make sense, because REVERSE, and (I think, off
| | the cuff) most of the other consing versions of these operations
| | also run in linear time -- it's just a much longer linear time
| | because of the cost of first creating and later GC-ing the new conses.
| 
|   you're quite mistaken on this, too.  the non-consing versions do not
|   (necessarily) run faster for a number of really hairy reasons.  they
|   don't cons.  that's all.  GC'ing dead objects doesn't take time in a
|   whole bunch of GC implementations -- e.g., it takes time to keep
|   objects _alive_ from generation to generation in a generational GC.
+---------------

Not only that, but doing an NREVERSE of "old" data with a generational GC
can actually be *slower* than REVERSE, since in the NREVERSE case every
"old" cons that is altered must be [well, may need to be] recorded in the
"remembered set" for that generation (so that the next minor collection
will work correctly).
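
Here's a rough way to watch for that effect yourself (just a sketch, not a
benchmark: it assumes SBCL's SB-EXT:GC for forcing full collections so the
conses get promoted out of the nursery, and whether any difference actually
shows up depends entirely on the implementation's write-barrier strategy):

;;; Sketch only: SB-EXT:GC is SBCL-specific; any "force a full
;;; collection" hook would do.  Build a list, promote its conses to an
;;; older generation, then time REVERSE vs. NREVERSE over that "old" data.

(defun make-old-list (n)
  (let ((list (make-list n :initial-element 0)))
    ;; A few full collections promote the conses out of the nursery.
    (loop repeat 3 do (sb-ext:gc :full t))
    list))

(defun compare-reverse (n)
  (let ((old-1 (make-old-list n))
        (old-2 (make-old-list n)))
    (format t "~&REVERSE of promoted data:~%")
    (time (reverse old-1))
    (format t "~&NREVERSE of promoted data (each mutated cons may hit the write barrier):~%")
    (time (nreverse old-2))
    (values)))

;; (compare-reverse 1000000)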

Fortunately, most write barriers do check whether the "new" data being
stored comes from the same generation as the "old" data, and if so,
suppress the recording overhead. But the point stands: with generational
GCs, "destructive/in-place" operations *can* be more expensive than the
corresponding "fresh-allocating" versions of those same operations.
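
To make that filtering concrete, here's a toy model of the check (purely
illustrative, not any real implementation's barrier code): the store only
gets remembered when the mutated object is older than the value being
stored into it, since that's the only case where a minor collection could
otherwise miss a live pointer from old space into young space.

;;; Toy model of a generational write barrier.
(defstruct gc-object
  (generation 0)          ; 0 = nursery, larger = older
  (remembered-p nil))     ; T once it's in the remembered set

(defun barrier-store (old-object new-value)
  "Remember OLD-OBJECT only if it is older than NEW-VALUE; otherwise
skip the bookkeeping entirely."
  (when (and (gc-object-p new-value)
             (> (gc-object-generation old-object)
                (gc-object-generation new-value)))
    (setf (gc-object-remembered-p old-object) t))
  new-value)

;; An NREVERSE over promoted conses stores old values into old cells, so
;; a barrier like this records nothing; an implementation without the
;; generation check would log every single mutation.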


-Rob

-----
Rob Warnock, 41L-955		rpw3@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		PP-ASEL-IA
Mountain View, CA  94043