
Re: copy-on-write (was: globals considered harmful)



Alexis Layton (alex@XAIT.Xerox.COM) wrote:
> This present discussion dovetails with something I have been thinking about
> exploring for some time -- the idea is to have a stack or tree of contexts
> that provide essentially copy-on-write semantics on objects; this would allow
> one to do exploratory computations and then unwind any object changes.
> OO meets AI?
> 
> (The actual idea is to provide the appearance of copy-on-write in a hopefully
> efficient way.  Using surrogate objects and parent pointers can go a long
> way.  The hard part is efficiently dealing with object dispatch in a given
> context, I would hazard.)
> 
> This idea is kind of nebulous and there are issues in how one would determine
> how to remember (or define) the "result" of the computation so it too is
> not unwound.

Forgive me if I have misunderstood your idea, or the underlying SELF 
concepts (though a major SELF enthusiast, I have yet to write a line
of code), but "copy-on-write" rang a big bell for me.  Several years
ago I wrote a letter to self-interest on this selfsame topic, but lost
my email access the day after posting it so never got to see whether
there were any replies.

I have always been disturbed by the distinction between traits and
prototypes, which (as I have understood it) creates a major bifurcation
between the heritability of methods and data.  When a new object is
created, all of its data slots are copied from the prototype, and thus
will from that point on no longer reflect changes made to the prototype
itself, while slots in the traits object underlying the proto are not
copied, so post-creation changes to the traits will be reflected in the
behaviour of the object unless those slots are specifically added and
redefined in the object itself.  Am I correct here so far?
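
In case words are failing me, here is how I picture it, as a toy model
in Python (Python rather than Self since, as admitted, I haven't written
a line of the latter; SelfObject, lookup and the rest are names I made
up for the sketch, not anything from the real implementation):

    class SelfObject:
        def __init__(self, parent=None, **slots):
            self.parent = parent          # stands in for a parent* slot
            self.slots = dict(slots)      # local data slots

        def lookup(self, name):
            # Walk up the parent chain, as Self message lookup does.
            obj = self
            while obj is not None:
                if name in obj.slots:
                    return obj.slots[name]
                obj = obj.parent
            raise AttributeError(name)

        def clone(self):
            # Data slots are *copied*; the traits parent is *shared*.
            return SelfObject(parent=self.parent, **self.slots)

    traits_point = SelfObject(describe=lambda: "a point")  # shared behaviour
    proto_point = SelfObject(parent=traits_point, x=0, y=0)

    p = proto_point.clone()
    proto_point.slots["x"] = 99          # a later change to the prototype...
    print(p.lookup("x"))                 # ...is NOT seen by the clone: 0
    traits_point.slots["describe"] = lambda: "a 2-D point"
    print(p.lookup("describe")())        # traits change IS seen: a 2-D point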

I think there are definite advantages to unifying this behaviour by
providing a mechanism for copy-on-write on a per-slot basis.  For
example, in a word-processor I might want all paragraphs to appear in
a default typeface unless specifically overridden by the user; if the
default is changed all paragraphs which have not been so overridden
should reflect that change; and a paragraph-object with an overridden
"typeface" ("style", whatever) slot should be revertable to the default
behaviour by deleting the slot.  I think that is what Alexis was getting at.

An application program can of course easily accomplish this behaviour
with the features SELF currently has.  Using reflectivity, a copy method
can be defined which omits all or a subset of the ":" (assignable data)
slots, and the prototype's "slotname:" method could be extended to
install a normal ":" slot in the receiving object (unless the receiver
were the prototype itself, in which case the normal value-assignment
behaviour would still apply).
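
Roughly, in the same toy-Python vein (CowObject is another made-up name,
and Python's attribute hooks stand in for real Self reflection, which
would presumably go through mirrors and assignment slots instead):

    class CowObject:
        def __init__(self, slots=None, proto=None):
            object.__setattr__(self, "_slots", dict(slots or {}))
            object.__setattr__(self, "_proto", proto)

        def copy(self):
            # The reflective copy omits the data slots entirely and just
            # remembers where their current values live.
            return CowObject(proto=self)

        def __getattr__(self, name):
            # Reads fall through the prototype chain until a local slot
            # has been installed.
            obj = self
            while obj is not None:
                if name in obj._slots:
                    return obj._slots[name]
                obj = obj._proto
            raise AttributeError(name)

        def __setattr__(self, name, value):
            # The extended "slotname:" behaviour: the first write installs
            # a normal slot in the receiver.
            self._slots[name] = value

    proto = CowObject({"typeface": "Times"})
    para = proto.copy()
    proto.typeface = "Palatino"          # change the default...
    print(para.typeface)                 # Palatino: the copy still defers
    para.typeface = "Courier"            # first write installs a local slot
    proto.typeface = "Times"
    print(para.typeface)                 # Courier: now static
    del para._slots["typeface"]          # deleting the slot reverts it...
    print(para.typeface)                 # Times: ...to the dynamic default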

The problem with this is of course primarily one of efficiency: Every time
a slot was added to an object, a new form would be created.  Because the
reference-counts of these forms would be 1, there would only be a single
form per modified object, but that is bad enough.  But unless I have grossly
misinterpreted the documentation or there have been subsequent changes, there
would not be any factoring of these newly-created forms; if 10 otherwise
identical objects had the same data-slot automatically created, there would
be 10 identical forms.
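
To make the waste concrete, continuing in toy Python (Form here stands
in for whatever the implementation's real layout descriptor is; this is
only my reading of how forms behave):

    class Form:
        def __init__(self, slot_names):
            self.slot_names = tuple(slot_names)   # the layout only

    class Obj:
        def __init__(self, form, values):
            self.form = form                      # layout description
            self.values = list(values)            # per-object storage

        def add_slot(self, name, value):
            # Naive scheme: every slot addition mints a brand-new form.
            self.form = Form(self.form.slot_names + (name,))
            self.values.append(value)

    base = Form(("x", "y"))
    clones = [Obj(base, [0, 0]) for _ in range(10)]
    for c in clones:
        c.add_slot("typeface", "Times")
    print(len({id(c.form) for c in clones}))      # 10 identical forms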

So I would advocate compiler-support for form-factoring to assist with
copy-on-write behaviour.  I haven't done any real analysis on what this
would take, but clearly there are some complications, as one would want
objects which had slots automatically added in an entirely different order
to still be factorable by the system, implying some tree searching.
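
The obvious sketch, again in toy Python, is to intern forms in a table
keyed by the sorted slot names, so that objects which grew the same
slots in different orders still share one form (a real implementation
would want a trie or the like rather than re-sorting, hence the tree
searching; Form is as in the previous sketch):

    _form_table = {}                      # interning table for forms

    def canonical_form(slot_names):
        key = tuple(sorted(slot_names))   # identity ignores insertion order
        if key not in _form_table:
            _form_table[key] = Form(key)
        return _form_table[key]

    f1 = canonical_form(("x", "y", "typeface", "style"))
    f2 = canonical_form(("x", "y", "style", "typeface"))
    print(f1 is f2)                       # True: one factored form
    # (Each object's value storage would have to be permuted to match the
    # canonical slot order -- one of the complications I was alluding to.)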

There might be other solutions to the dynamic-prototype problem which
trade off form memory utilization (and factoring complexity) against
object memory utilization.  For example, there could be a reserved value
which, when installed in a data-retrieval slot, causes special action to
be taken: the request would be passed up the parent* tree until a proto
containing a real value were found (I assume slot-caching would deal with
the efficiency problems for oft-accessed values); once a normal value were
stored in the slot via its ":" method, this behaviour would of course go
away for that slot.  An object-definer could choose between dynamic-
prototype and static-prototype behaviour on a per-slot basis by creating
an object from a prototype and redefining some of the slots to have this
special "deferral" value; other slots would of course statically assume
the same values as the proto.  When this "deferred prototype" object were
used to clone new objects, the "deferral values" would be copied along
with all the other value slots.  And obviously one could create a number
of these deferred prototypes, with different assortments of dynamic and
static value slots, to obtain different behaviours.
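
One more toy-Python sketch, this time of the deferral scheme (DEFER is a
sentinel I have invented for the purpose; a real implementation would
presumably reserve some special oop):

    DEFER = object()                     # the reserved "pass it up" value

    class DeferObj:
        def __init__(self, proto=None, **slots):
            self.proto = proto
            self.slots = dict(slots)

        def get(self, name):
            obj = self
            while obj is not None:
                v = obj.slots.get(name, DEFER)
                if v is not DEFER:       # found a real value
                    return v
                obj = obj.proto          # deferral: pass the request up
            raise AttributeError(name)

        def set(self, name, value):
            # Storing a normal value ends the deferral for this slot.
            self.slots[name] = value

        def clone(self):
            # Deferral values are copied along with the other value slots.
            return DeferObj(proto=self.proto, **self.slots)

    proto = DeferObj(typeface="Times", x=0)
    dproto = DeferObj(proto=proto, typeface=DEFER, x=0)  # typeface dynamic,
    p = dproto.clone()                                   # x static
    proto.set("typeface", "Palatino")
    print(p.get("typeface"))             # Palatino: deferred slot tracks proto
    p.set("typeface", "Courier")         # a normal store ends the deferral
    proto.set("typeface", "Times")
    print(p.get("typeface"))             # Courier
    print(p.get("x"))                    # 0: static slot, unaffected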

And of course there might be value if *both* mechanisms were implemented,
in a world where memory resources are finite, since each addresses the
problem with different efficiency tradeoffs.  I can imagine situations
where most slots were not explicitly redefined, and true copy-on-write
would save heaps of memory if forms were intelligently factored, vs.
cases where few of the deferral values would remain.

If I've reinvented the wheel or incorporated some major misunderstandings
of what's going on, feel free to hack away mercilessly - I'm in "learning
by blundering" mode.

- J.J. Larrea (jjl@panix.com)