Self UI
I found the paper "Experiencing Self Objects" by B. Chang and D. Ungar on the
SELF user interface a refreshing departure from the current GUI clone wars.
I had some ideas for a Self interface, but they are much more conventional.
Every Self object would be an icon on the screen. You could send an object messages
by placing buttons on it and "pressing" the buttons ( in ARK style ). You
would edit the objects by creating views on them. These views would have view-
traits and the object as parents, so they would really be extensions of the
object they represent. Sending a message to a view would send it to the object
( unless view-traits or the view itself implemented it ). There are no LOAD/SAVE
semantics in this model as the view is tied to a single object during its
lifetime - so it is a little more concrete than a conventional tools model.
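Roughly, a view on some object myCircle could be an object like the sketch
below, where viewTraits, traits view, myCircle, flash and redraw are all just
made-up names for illustration:

   (| "shared look-and-feel for this kind of view"
      viewTraits* = traits view.
      "the one object this view is tied to for its whole lifetime"
      subject*    = myCircle.
      "a message the view implements itself; anything it does not
       understand falls through the parent slots to the traits or
       straight to myCircle"
      flash = ( redraw. self )
   |)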
I keep wondering how you can have traditional editing capabilities in Self
Artificial Reality. Maybe I am missing the point entirely and you don't need
editors, is that it?
I think that editors could be added to the current UI without spoiling the
object's single identity if we could "look under" the objects. When you
clicked on an object it would flip over ( in 3D space :) and reveal an entirely
different look-and-feel. If you flipped it over once more you might see yet
another appearance and behavior ( another editor ). Just a few more turns through
the object's many faces and you would be back to the good old boxes and arrows face.
If done properly, the result would be a single, solid object in the user's mind.
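Under the hood the flipping could be as simple as the object cycling through a
small, fixed set of faces. A rough sketch, with faceCount, currentFace and flip
again being names I just made up:

   (| parent*      = traits clonable.
      "boxes-and-arrows plus two editor faces"
      faceCount    = 3.
      currentFace <- 0.
      "each click shows the next face, wrapping around so that a
       few more flips bring back the plain boxes-and-arrows look"
      flip = ( currentFace: currentFace + 1.
               currentFace >= faceCount ifTrue: [ currentFace: 0 ].
               currentFace )
   |)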
When I design integrated circuits I like to open multiple coordinated windows
on the same layout showing different parts at the same time. This would be
impossible in the flip-over model above. Must I give up this feature?
- Jecel - University of Sao Paulo - Brazil