Subject: Re: Big time difference between (unix) find and DIRECTORY : why?
From: (Rob Warnock)
Date: Thu, 25 Aug 2005 04:38:27 -0500
Newsgroups: comp.lang.lisp
Message-ID: <>
Marcin 'Qrczak' Kowalczyk  <> wrote:
| Ulrich Hobelmann <> writes:
| > I called them micro-uses because it's basically just aliasing, and
| > not putting files under different categories / dir hierarchies.
| Gnus uses hardlinks to store crossposted articles.

A small company I worked for long, long ago -- so long ago there were
no symlinks! [yes, we were using Unix v.7] -- used massive numbers of
hardlinks to implement a kind of version control and recompilation
optimizer for our product builds [which included the Unix kernel].
We modified *all* the editors people used ["ed", "vi", Rand "e"]
to move files out of the way before writing to them, and we also
modified *all* the Unix Makefiles to remove targets before writing
to them [but only if it really was going to write to one]. Then we
modified "cp" [later renamed "cpt"] to add a "-l" option that said
"make links if possible" [hardlinks, as noted above]. This meant that
all it took to clone a new version of the kernel source tree, make
some tweaks to a few source files, then build a new kernel was this:

	$ mkdir -p my_workspace/src
	$ cpt -rl /kernel/src/{version}/. my_workspace/src
	$ cd my_workspace/src
	$ ...edit stuff...
	$ make

Since the "cpt" did hardlinks, all of the object files would be
preserved for source files that hadn't changed, and *huge* amounts
of compile time and disk space would be saved. And because of the
Makefile changes to pre-remove targets, your "make" wouldn't change
anyone else's already-compiled object files.
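The same trick still works today with GNU "cp", whose "-l" option links
instead of copying, much like the "cpt -l" above. A minimal sketch (the
paths and file contents here are made up for illustration; "-ef" is the
common test(1) extension for "same inode"):

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
cd "$work"

# A stand-in "master" source tree with a source file and a built object.
mkdir -p master/src
echo 'int main(void){return 0;}' > master/src/main.c
echo 'OBJECT' > master/src/main.o

# Clone the whole tree with hardlinks instead of copies -- instant,
# and no extra disk space for unchanged files.
cp -rl master clone

# Both names point at the same inode:
[ master/src/main.o -ef clone/src/main.o ] && echo 'shared'

# Remove-before-write, as the modified Makefiles did: unlinking first
# gives the clone a fresh inode, so the rebuild in the clone leaves the
# master's already-compiled object untouched.
rm clone/src/main.o
echo 'NEW OBJECT' > clone/src/main.o

cat master/src/main.o    # still 'OBJECT'
```

Had the Makefile written to the object *in place* instead of unlinking
it first, the write would have gone through the shared inode and
clobbered the master copy as well -- which is exactly why every editor
and Makefile had to be taught to move or remove files before writing.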

Integration of multiple people's changes into the {version+1} snapshot
wasn't too bad, except where multiple people had changed the same files.
But N-way merges are a pain no matter how you do them...


Rob Warnock			<>
627 26th Avenue			<URL:>
San Mateo, CA 94403		(650)572-2607