
Strange Things... (long, technical, possibly pointless :-)




   [build a someMessages object by combining lots of little parents]

   In this way I would only need to compile the various message handlers
   once: they could be shared. (Another option would be to use _AddSlots
   to just copy them into the someMessages object). However, this would
   require another level of inheritance for the allMessages object.

No, the code would actually NOT be shared!  The source code is shared,
of course, but the compiler would still generate code for each
particular receiver, i.e. each someMessages object.
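
For concreteness, here is a rough sketch of the two arrangements in
Self (the names someMessages, fooTraits, barTraits, and handleFoo are
all made up for illustration):

    "Arrangement 1: share the handlers via parent slots."
    _AddSlots: ( | someMessages = ( |
        fooHandlers* = fooTraits.   "parent slots: handlers inherited"
        barHandlers* = barTraits
    | ) | )

    "Arrangement 2: copy the handler slots into the object itself."
    someMessages _AddSlots: ( | handleFoo = ( 'foo' printLine ) | )

In both cases the *source* for handleFoo exists only once, but the
compiler customizes the machine code to each concrete receiver, so
each someMessages object still gets its own compiled version.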

   What I would especially like to know is the "best" way of arranging
   this in terms of speed and space (hints on how to dynamically compile
   7000 objects from consVectors in less than three-quarters of an hour
   would also be appreciated). Last time I tried something this low
   level I spent a week timing various options (*): this time I thought
   I'd ask *first*.

Good question :-)  In general, all of the following will be slow:

  - huge objects (7000 slots...)
  - frequent programming operations
  - megamorphic sends (sending "isString" to every object in the
    world; see the sketch below)

Everything else is fast :-)  Usually :-D
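
To illustrate the megamorphic case: a send site that sees many
different receiver types gets no help from inline caches, so every
send pays for a full lookup.  A sketch (mixedBag is an invented
heterogeneous collection):

    mixedBag do: [ | :each |
        "this isString send is megamorphic: every iteration may
         see a receiver of a different type"
        each isString ifTrue: [ each printLine ] ].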


   (*) PS: I don't suppose there is a way to tell the compiler not to
   optimise a particular object (rather than globally). 

No, there isn't.  But we're working on making the system a little bit
smarter to prevent "overcustomization".

   For example, I
   have encapsulated dictionaries which do a lot of _Defines and _Mirror
   removeAllSlots. (The timings were to figure out when it was quicker to
   add slots to an empty prototype and then replace the whole object with
   _Define, vs. removing all slots and replacing them a slot at a time.)
   Cloning an encapsulated object requires building an unencapsulated
   clone: dictionaries clone themselves on expansion. With the system
   taking up to 700ms per clone, the encapsulated dictionary benchmarks
   were looking _very_ bad.

Programming operations that change the size of an object (i.e.
add/remove assignable slots rather than constant slots) are always
slow because when the object grows, the system has to copy it and
update all references to point to the new version.  This "one-way"
become is expensive - you have to scan the entire heap for references
to the old version of the object.  (The "become" wouldn't be necessary
when the object shrinks, but we currently don't optimize that.)
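
To make the trade-off in your timings concrete, here is a sketch of
the two replacement strategies (dict and the slot names are invented;
the primitives are the ones you mention):

    "Strategy A: build the new slots up front, swap them in at once."
    dict _Define: ( | table <- nil. tableSize <- 0 | )
        "one size change, hence one heap scan for the become"

    "Strategy B: strip the object, then re-add slots one at a time."
    dict _Mirror removeAllSlots.
    dict _AddSlots: ( | table <- nil | ).
    dict _AddSlots: ( | tableSize <- 0 | )
        "every grow copies the object and rescans the heap"

With the current system, strategy A should generally win once more
than one assignable slot is involved, since each _AddSlots that grows
the object pays for a become of its own (and, as noted above, even
the shrink is not optimized).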

Hope that helps,

-Urs