Re: when should sends be virtual?
Rainer...
When I participate in e-mail discussions like this, I am aware of a
spectrum of choices. The bombastic assertion is at one end and the
analytic, balanced discussion at the other. I thought last time I would
try the bombastic assertion just to see what happened. I think I blew
it.
Anyway, all the observations you make (well, those I understand) about
the situation are true, and each problem will bite you to various
degrees. I think the only way to tell how bad each problem is would be
to try programming in Self for a good spell and then make up your mind.
Here are some comments based on my experience. Of course I probably
have my biases...
> then you defined flexibility as possibilities for the client (child) to
> change the behavior of the server (ancestor). (would YOU want to allow your
> child to alter your behavior? ;-)
I understand what you are saying, but in practice, a child very rarely
changes the behavior of a parent in Self. Even though most of the code
we run in Self is borrowed from an object for use in its descendants,
the code-donor parent does not get its behavior changed when its code is
used by the child, unless there is state in the parent that can be
changed.
This is possible in practice, and in fact when you have class-variable-like
slot usage in Self, it does happen. Class variables tend to be
few, and I guess their semantics must be fairly well understood, because
we have not had a frequent problem of children accidentally corrupting
class variables.
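To make this concrete, here is a rough sketch of the class-variable-like
case. All the names are made up, and it is typed at the Self prompt (so
the implicit receiver of _AddSlots: is the lobby):

    _AddSlots: (| counterTraits = (|
        sharedCount <- 0.                       "one slot, held in the parent"
        bump = (sharedCount: sharedCount + 1) |) |).
    _AddSlots: (| aCounter = (| parent* = counterTraits |).
                  bCounter = (| parent* = counterTraits |) |).
    aCounter bump.          "the assignment slot is found up in the parent"
    bCounter sharedCount.   "so every descendant now answers 1"

Because sharedCount lives in the code-donor parent, a child that assigns
to it is really assigning to the parent's slot; that is about the only
common way a child ends up changing a parent around here.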
We *have* had occasional problems with "corrupted prototypes," which is
somewhat different from, but related to, what you are saying.
> ...another definition might be
> freedom to write code without breaking existing code.
We do have to figure out how to debug our code, and that is difficult when
you are inheriting a lot of other code.
Again, it is very rare for some object to actually break a parent or a
sibling.
>
> when all sends have virtual semantics, the author of a descendant has to
> take care to not accidentally override ANY of the possibly large number of
> selectors that its ancestors implement and use (she may of course
> deliberately do so).
>
Actually, I can understand why you say this, but in practice name space
collisions do not turn out to be a major problem for some reason.
Perhaps it is because short names are well known and therefore avoided, and
obscure functionality tends to get very verbose names, so the collision
probability is very low.
Perhaps other Self programmers would report a different experience...
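For what it is worth, the kind of accident being discussed would look
like the sketch below (made-up names, typed at the Self prompt):

    _AddSlots: (| shapeTraits = (|
        width = 10. height = 10.
        area = (width * height) |) |).   "the parent itself sends 'height'"
    _AddSlots: (| aBox = (| parent* = shapeTraits.
        height = 2 |) |).                "the child happens to reuse the name"
    aBox area.     "answers 20, not 100: 'area' picked up the child's 'height'"

In practice the danger is mostly confined to short, general names like
these, which is probably why the verbose ones don't bite us.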
> sometimes flexibility adds to utility, sometimes it doesn't (see assembler
> or c). the protocol supported by a parent (and potentially overridden by
> the child) is not obvious. you have to use some tool to keep track of
> that, this again adds to development tool fatware. and it lowers
> productivity, because the programmer has to do something (check for name
> clashes) when he wants to do nothing (not override anything).
>
As stated above, no extra tools seem to be needed, but this is only
based on my own opinion of my own experience. Let's put it this way: we
are always adding new gizmos to make programming easier and
more fun in Self, but none of them have had to do with this
unintentional name-reuse problem.
> the programmer knows if she uses existing (ancestor) protocol or demands
> `most specialized' (descendant) protocol. to really rely on this
> knowledge, the two variants must be explicitly coded. as i see it,
> ancestor responsibility is more common than descendant responsibility.
>
> (anyone has any statistics on this question?
> i strongly suspect that this depends on the
> language being used and previous experience
> with other languages.)
>
> therefore it'd be more convenient to mark virtual sends than
> non-virtual ones.
>
I don't really understand the above point...
>
> > ... Why would you want to say
> > to a child "you cannot specialize the use of `foo' in this method, even
> > if you want to"? ...
>
> because of reliability reasons, of course.
> ("in a delegation chain, any object may screw up ancestors' semantics."
> AND
> "objects in a delegation chain may have been implemented by different
> programmers"
> AND
> "in every team of programmers, there's at least one bad programmer"
> IMPLIES
> ... well, this is of course NOT a valid chain of inference, but you see
> the point :)
>
I can only tell you that, in my own experience, screwing up ancestors is not a
major problem in Self, as I have stated before. Having different
programmers IS a problem, because you have to understand what they mean
in order to use their code. Inheriting from an object that was designed
by someone else is undoubtedly even more difficult. I'm not sure it is
really worse in Self, since you normally only override a few things in
any language, including Self.
Bad programmers might make this worse, but I am lucky enough to be working
with the best! ;-) (politically smart comment there, eh?)
>
> > If we could trust the method designer to know best and for all time that
> > something should never be specialized, then using non-virtuals by
> > default might make sense. My belief is that such trust is unwise.
>
> since it is clear that "knowing best and for all" is next to impossible
> that kind of trust would indeed be unwise. but trust IS necessary for
> effective programming and unless the programmer explicitly states his
> intentions, there is nothing to trust in (s.a.).
>
I'm missing the point there. Maybe you want extra declarative semantics
of some sort?
>
> > ... Most
> > created things can satisfy more than their creator's original goals.
>
> true.
>
>
> another aspect of virtual methods being the default is speed. considering
> this, i don't understand why it is as it is when one of the original design
> goals was speed.
Speed was definitely NOT one of the original design goals! In fact,
the impossibility of using an interpreter to get any reasonable
performance out of Self semantics was the motivation for novel
implementation techniques.
> (question: how efficient are resends? less than
> self-sends when self is the holder?)
>
You have to ask one of the clever implementation-type guys.
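(For anyone following along who hasn't seen one: a resend re-dispatches
the message starting above the holder of the current method. A tiny
made-up sketch, typed at the Self prompt:

    _AddSlots: (| parentTraits = (| count = 1 |) |).
    _AddSlots: (| childObj = (| parent* = parentTraits.
        count = (resend.count + 1) |) |).   "undirected resend to the ancestors"
    childObj count.                         "answers 2"

How the compiled code for that compares to a plain self-send is exactly
the sort of question the implementation guys can answer and I can't.)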
>
> so far, i'm convinced virtual sends should not be the default.
>
Bummer! You might like something like Beta, which lets the ancestor
designer specify when to run code in a child by use of "inner."
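Roughly, the inner idea carried over into Self would look something like
the sketch below; the names are made up and it is only an approximation
of what Beta really does. The point is that the ancestor decides exactly
where, if anywhere, descendant code gets to run:

    _AddSlots: (| taskTraits = (|
        run = (prepare. innerStep. finish).   "the ancestor fixes the order"
        innerStep = (self).                   "default hook: do nothing"
        prepare = ('setting up' printLine).
        finish = ('cleaning up' printLine) |) |).
    _AddSlots: (| aTask = (| parent* = taskTraits.
        innerStep = ('child work' printLine) |) |).
    aTask run.          "child code runs only where the ancestor allowed it"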
Perhaps my bias is towards rapid prototyping. When I compare with my
years of Smalltalk experience, where you cannot override instance
variable assignment, I feel more productive in Self. It is a
simple matter of greater flexibility. It is noticeably easier to change
my mind about a design.
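As a small made-up sketch of what that flexibility buys (typed at the
Self prompt): because even assignment is a message send, a descendant
can intercept what its parent thinks of as a plain variable.

    _AddSlots: (| pointTraits = (|
        x <- 0.
        moveRight = (x: x + 1) |) |).      "the parent just 'assigns to x'"
    _AddSlots: (| loggingPoint = (| parent* = pointTraits.
        rawX <- 0.
        x = (rawX).
        x: n = ('x is being set' printLine. rawX: n) |) |).
    loggingPoint moveRight.   "the child's x: method intercepts the assignment"

In Smalltalk the parent's method would poke the variable directly and
the child would have no say in the matter.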
The corrupted prototype problem does occur in Self and not in Smalltalk.
We are working on ways to mitigate it. Even so, it is not a major
problem, but it is significant enough to merit attention.
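The classic form of it looks like this (a made-up sketch, typed at the
Self prompt): somebody mutates the prototype itself when they meant to
mutate a copy.

    _AddSlots: (| point = (| parent* = traits clonable.
        x <- 0. y <- 0 |) |).
    point x: 99.       "oops: meant 'point copy x: 99'"
    point copy x.      "answers 99; every fresh point is now born dented"

That is the kind of slip the mitigation work is aimed at making harder
to make and easier to notice.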
Greater use of Self over longer periods might turn up other problems.
We are currently working without privacy semantics, for example, as in
the simplest, "pure" Self. This may turn out to create awkwardly "wide"
interfaces that are difficult to maintain. However, I can't say it has
been a major problem yet, perhaps because we have hints in the
programming environment about intended use. But as time goes by, this
problem or others could appear.
--Randy