Forwarded from sci.virtual-world's monthly discussion (it isn't old, so you
may jump in on the thread if you want to):
------------------------ cut here ------------------------------
Subject: March topic -- consistency
Date: Fri, 26 Mar 1999 16:55:58 +0000
From: Chris Greenhalgh <cmg@CS.NOTT.AC.UK>
Organization: University of Washington
Newsgroups: sci.virtual-worlds
On the consistency theme I'll throw in my contribution. I'm afraid
it's more of a soapbox or lecture than a discussion :-)
> This month we'll be discussing the issues involved in maintaining a
> consistent representation of a virtual world. Clearly, any change
> that gets made by any user must be visible to all the others.
CVE SPECIFICS
CVEs are different from traditional database applications for at
least some of the following reasons:
* There are multiple humans-in-the-loop, which in turn means...
* Timeliness (for interaction) is very important.
* Indeed, timeliness can be more important than correctness.
* If the user can be made aware of concurrent activity then they are
able to modify their activity accordingly (traditionally databases aim
to HIDE the effects of concurrent use/users).
* In addition, virtual worlds typically have a notion of continuous
activities or behaviours (e.g. a ball in flight) whereas more
traditional consistency approaches assume discrete transitions between
states.
As an aside, you can only do continuous behaviours perfectly
consistently in a CVE if you synchronise the communication and frame
updates of every participant, and even then you really need perfect
motion blurring in the rendering! Time-parameterised states are not
enough, because their initiation is itself time-linked, and can in
general depend on infinitesimal and non-deterministic changes in
other entities (did the bullet hit me or not? who wants to know...).
* As has already been noted, some forms of "inconsistency" can
actually be desired in particular applications ("subjective views",
awareness management, etc.).
Some work in CSCW and Groupware includes the human considerations (and
has given rise to interesting work on appropriate consistency schemes,
especially optimistic schemes), but that doesn't have to deal with
temporally-continuous behaviours. I think that gives us a unique
problem.
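The time-parameterised-state point above can be made concrete with a small
dead-reckoning-style sketch (hypothetical class and names, not any
particular system's API): each viewer holds the last known state plus the
time it was sampled, extrapolates locally between updates, and diverges the
moment one replica hears about a non-deterministic change before another.

```python
class TimeParameterisedState:
    """A replica's view of a moving entity: state plus the time it was
    sampled. Between network updates each viewer extrapolates locally;
    replicas only agree exactly at the instants an update is applied."""

    def __init__(self, pos, vel, t):
        self.pos, self.vel, self.t = pos, vel, t

    def extrapolate(self, now):
        # Linear dead reckoning: predict position from the last known state.
        return self.pos + self.vel * (now - self.t)

    def apply_update(self, pos, vel, t):
        # A new authoritative sample replaces the prediction; the visible
        # "snap" is exactly the discrete inconsistency described above.
        self.pos, self.vel, self.t = pos, vel, t

# Two replicas that received the same update at t=0 agree...
a = TimeParameterisedState(pos=0.0, vel=2.0, t=0.0)
b = TimeParameterisedState(pos=0.0, vel=2.0, t=0.0)
print(a.extrapolate(1.5), b.extrapolate(1.5))  # 3.0 3.0
# ...but if only one hears about a bounce, their views diverge until
# the update propagates:
a.apply_update(pos=3.0, vel=-2.0, t=1.5)
print(a.extrapolate(2.0), b.extrapolate(2.0))  # 2.0 4.0
```

The divergence window is exactly the propagation delay, which is why no
amount of clever state encoding removes the need for synchronised updates.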
GENERAL OBSERVATIONS
Most shared VEs rely on some form of data replication to support
multiple views. E.g. each viewing application has a (partial) copy of
the scene graph or entity database (or whatever the system uses).
If we consider the several copies of a virtual entity in the different
viewing applications, then there is quite a wide range of approaches
to dealing with changes to them, including:
* a Distributed Shared Memory model - anyone can update, anyone can
read, and the distribution system imposes a well defined ordering with
regard to potential write/read and write/write conflicts. This can be
supplemented by "hints" about locking/unlocking data to make a more
efficient implementation (see the Munin & Midway DSM systems). The
locking hints can make it a really nice system (though it feels more
and more like single transferable ownership).
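A minimal sketch of the DSM model under a centralised serialiser
(hypothetical names; real DSM systems such as Munin and Midway use far more
sophisticated coherence protocols): every write funnels through one ordering
point, so all replicas resolve write/write conflicts in the same order.

```python
class DSMCell:
    """One shared cell. All writes go through a single ordering point,
    which stamps them with a sequence number; replicas keep the highest
    sequence number seen, so every copy converges to the same value.
    Locking "hints" would let a writer batch several writes under one
    stamp before releasing -- the Munin/Midway-style optimisation."""

    def __init__(self):
        self.seq = 0
        self.replicas = []  # each: {"value": ..., "seq": ...}

    def register(self):
        r = {"value": None, "seq": -1}
        self.replicas.append(r)
        return r

    def write(self, value):
        # The distribution system imposes a well-defined ordering:
        self.seq += 1
        for r in self.replicas:
            if self.seq > r["seq"]:
                r["value"], r["seq"] = value, self.seq

cell = DSMCell()
p, q = cell.register(), cell.register()
cell.write("red")
cell.write("blue")
# Every replica sees the same last-writer-wins outcome:
print(p["value"], q["value"])  # blue blue
```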
* a single non-transferable ownership model - only one process can
update the object. The rest just watch the changes. If you want to
change it you have to ask that one owner to do it. This is typically
equivalent to a source-ordered event-based model (see next point).
DIS and HLA spend a lot of their time looking like this (even if, in
principle, you could transfer ownership). It makes everything a nice,
simple publish/subscribe model. I did this in MASSIVE-2; it's good for
user embodiments, but not so good for directly manipulating objects in
the world.
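The non-transferable-ownership model reduces to source-ordered
publish/subscribe, which can be sketched like this (hypothetical names):

```python
class OwnedEntity:
    """Exactly one process owns the entity; everyone else subscribes.
    Because all updates come from one source, receivers only need FIFO
    delivery from that source to stay consistent (source ordering)."""

    def __init__(self, owner_id):
        self.owner_id = owner_id
        self.state = {}
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def update(self, requester_id, key, value):
        if requester_id != self.owner_id:
            # Non-owners must ask the owner to make the change for them.
            raise PermissionError("only the owner may update; ask it instead")
        self.state[key] = value
        for cb in self.subscribers:
            cb(key, value)

avatar = OwnedEntity(owner_id="alice")
avatar.subscribe(lambda k, v: print("saw", k, "=", v))
avatar.update("alice", "pos", (1, 2, 3))    # published to all watchers
# avatar.update("bob", "pos", (0, 0, 0))    # would raise PermissionError
```

The refusal path is why this works well for embodiments (only you move your
body) but poorly for world objects that anyone might want to manipulate.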
* a single transferable ownership model - only one process AT A TIME
can update the object, but this right can be passed between processes.
This is typically equivalent to a causally-ordered event-based model
(see next point). This is perhaps the commonest in my experience of
CVEs, and is a nice compromise, but it still doesn't do continuous
time, and it can't do perfect and timely physical modelling. But
that's life, and the speed of light limit for you ;-) I'm doing this
in HIVEK/MASSIVE-3; fine for embodiments, and if people take turns
doing things, but not so good for a tug of war...
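A transferable-ownership sketch (hypothetical names): the right to write is
a token that moves between processes, which is why turn-taking works and a
tug of war does not.

```python
class TransferableEntity:
    """One writer at a time, but the write right can be handed over.
    The transfer itself costs (at least) a network round trip, so
    simultaneous manipulation degenerates into transfer ping-pong."""

    def __init__(self, owner):
        self.owner = owner
        self.state = None

    def update(self, who, value):
        if who != self.owner:
            raise PermissionError(f"{who} does not own this entity")
        self.state = value

    def transfer(self, from_id, to_id):
        if from_id != self.owner:
            raise PermissionError("only the current owner may transfer")
        self.owner = to_id

rope = TransferableEntity(owner="alice")
rope.update("alice", "pulled left")
rope.transfer("alice", "bob")        # turn-taking works fine...
rope.update("bob", "pulled right")
# ...but both pulling at once would mean a transfer every frame.
```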
* a totally-ordered event-based model - rather than changing the data
directly (in principle, at least), the process emits an event which
requests that it be changed. These requests are given a total order
(either via a centralised process, or through a distributed
vector-clock protocol) and applied to the database at each process
only when their position in the total order has arrived. Observe that
this reduces interactional consistency (a process has to wait for its
updates to be ordered before it can change the database). I've seen
this done for systems that started out (philosophically, at least)
non-distributed; they can have built-in assumptions of consistency
that are hard to meet in other ways. In my opinion, this limits them
to high-speed (normally LAN) deployment.
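The totally-ordered event model can be sketched with a centralised
sequencer (the distributed vector-clock variant is considerably more
involved; names here are hypothetical). Note how even a process's own
update only takes effect once it comes back in total order, which is the
interactional-consistency cost noted above.

```python
class Sequencer:
    """Central ordering point: stamps each change request with a global
    sequence number instead of applying it directly."""

    def __init__(self):
        self.next_seq = 0

    def order(self, event):
        seq = self.next_seq
        self.next_seq += 1
        return seq, event

class Replica:
    """Buffers stamped events and applies them strictly in sequence
    order, regardless of arrival order."""

    def __init__(self):
        self.pending = {}
        self.applied = []
        self.next_expected = 0

    def deliver(self, seq, event):
        self.pending[seq] = event
        # Apply only when the event's position in the total order arrives:
        while self.next_expected in self.pending:
            self.applied.append(self.pending.pop(self.next_expected))
            self.next_expected += 1

s = Sequencer()
r = Replica()
first = s.order("open door")
second = s.order("walk through")
r.deliver(*second)   # arrives early: buffered, not applied
r.deliver(*first)    # now both apply, in total order
print(r.applied)     # ['open door', 'walk through']
```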
* distributed transactions - transferable ownership with bells on. If
you need it you need it, but otherwise the cost can be high, IMHO.
All of the above approaches are "conservative", i.e. they prevent
(some type of) discrete inconsistency arising.
There are a range of optimistic or semi-optimistic approaches, which
allow temporary inconsistencies to arise, and "fix" them afterwards,
to give "eventual" or "steady-state" consistency. These include:
* try/confirm/rollback - just do it, and later abandon it if it
"shouldn't have happened". There's a paper in VR'99 about using
additional user interface cues to make this more comprehensible. I think
this is really nice (and also includes what can be viewed as a dash of
operation transformation - below).
* undo/redo - merge by going back, doing the thing that you just
found out about, and then getting up to date again.
* operation transformation (from Groupware work) - merge by
transforming what you just found out about as if it had happened after
your own changes, and then applying it.
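The last item is the classic Groupware case, and a minimal sketch for
concurrent text inserts shows the idea (hypothetical function names): each
site transforms the remote insert against its own concurrent insert before
applying it, and both sites converge.

```python
def transform_insert(remote_pos, local_pos, tie_wins):
    """Shift a remote insert position past a concurrent local insert.
    tie_wins breaks the equal-position case deterministically
    (e.g. by comparing site identifiers)."""
    if remote_pos > local_pos or (remote_pos == local_pos and not tie_wins):
        return remote_pos + 1
    return remote_pos

def apply_insert(text, pos, ch):
    return text[:pos] + ch + text[pos:]

# Site A inserts 'x' at 1; site B concurrently inserts 'y' at 2.
base = "abc"
a_local = apply_insert(base, 1, "x")   # A sees "axbc"
b_local = apply_insert(base, 2, "y")   # B sees "abyc"
# Each site transforms the remote op against its own before applying:
a_final = apply_insert(a_local, transform_insert(2, 1, tie_wins=False), "y")
b_final = apply_insert(b_local, transform_insert(1, 2, tie_wins=True), "x")
print(a_final, b_final)  # axbyc axbyc
```

Neither site waited for the other, yet both end with the same text -- the
optimistic property that makes this family attractive for interaction.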
Continuous time consistency is even harder. The choices are:
* lock-step simulation (you can't rollback people's brains). Not for
wide area use...
* virtual relativity/perception filtering (U of Reading,
VRAIS'98). Very interesting and a little surreal. Although, strictly,
this requires a synchronous and perfectly reliable network which
pretty much no-one has...
* temporal hints/suggestions to supplement what is really discrete
consistency (e.g. soonest delivery time for predicted events in PaRADE
- Reading, again). I'm doing some of this in collaboration with
Reading, though I don't have any practical applications yet.
* other temporally-oriented optimisations, like predictive ownership
transfer, or predicted events. Also something I'm playing with (again
with Reading), and which I have found useful. E.g. when the user's
mouse pauses over an object they may be about to pick it up, so the
system requests ownership (in HIVEK/MASSIVE-3).
* ignore it. Much easier.
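The mouse-pause example in the predictive-ownership bullet can be sketched
as follows (hypothetical names and threshold; this is not MASSIVE-3's
actual mechanism, just the shape of the idea):

```python
class PredictiveOwnership:
    """When the cursor dwells over an object, speculatively request
    ownership so a later pick-up doesn't pay the transfer round trip.
    The request is only a hint: it is wasted work if the user never
    actually grabs the object."""

    DWELL_THRESHOLD = 0.3  # seconds; assumed tuning value

    def __init__(self, request_ownership):
        self.request_ownership = request_ownership
        self.hover_start = None
        self.requested = False

    def on_hover(self, obj_id, now):
        if self.hover_start is None:
            self.hover_start = now
        elif (not self.requested
              and now - self.hover_start >= self.DWELL_THRESHOLD):
            # Fired before the pick-up, hiding the transfer latency:
            self.request_ownership(obj_id)
            self.requested = True

    def on_leave(self):
        self.hover_start = None
        self.requested = False

requests = []
po = PredictiveOwnership(requests.append)
po.on_hover("ball", now=0.0)
po.on_hover("ball", now=0.1)   # not yet past the dwell threshold
po.on_hover("ball", now=0.5)   # dwelled long enough: request fired
print(requests)                # ['ball']
```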
> What's
> more, the changes must endure even after the users have left.
Left as an exercise for the reader.
Apologies for the length, but consistency means many things to many
people...
Dr Chris Greenhalgh
Communications Research Group,
School of Computer Science and Information Technology,
University of Nottingham, UK.
http://www.crg.cs.nott.ac.uk/~cmg/
------------------------ cut here ------------------------------
--
Ola Fosheim Groestad,Norway
http://www.stud.ifi.uio.no/~olag/