On Sun, 1 Mar 1998 14:58:28 PST8PDT
Stephen Zepp <zoran@enid.com> wrote:
> Adam Wiggins wrote:
>> [Chris Gray:]
>>> [Vadim Tkachenko:]
> [snipped comments on complications]
>> No, the server doesn't process anything it hasn't received. We're
>> assuming here that the server is considered, for all purposes, 100%
>> correct all the time. (At least, if we lay aside my crazy ideas
>> about packet insertion into the time line.)
>>
>> The clients do all the same processing as the server, the
>> difference is that they only have a small subset of the total
>> objects from the server game world in memory to process through -
>> those directly nearby the player. They may determine that a
>> collision has occurred between two ships and cause sparking,
>> "collision imminent" warnings to the pilots, etc - but will not
>> destroy any ships or display any actual collisions until the server
>> sends the event. How you want to implement this sort of actuality
>> masquerading depends on your theme. For instance, naval battles
>> circa 1800 would work well, because the ships move slowly and are
>> difficult to turn, thus lagging connections and even dropped
>> packets would have little effect on how the game looks to the
>> player. The only difference might be that two colliding ships seem
>> to merge together for a moment until the actual collision message
>> is received, at which point the cracking timbers sound begins and
>> whatever animations are going to take place actually occur.
There's a careful delineation between event classes here. Some events
are to be calculated (and acted upon) simultaneously by both the
client and the server (ie the only saving is in net bandwidth), and
some events are *ONLY* to be rendered on the server, and then dictated
to the client.
Who makes that distinction?
Think of this from a user-programmer perspective (or even an
admin-programmer perspective). As a programmer you now have to
consider every non-const event (ie all events which modify objects)
and type or classify them as either being client-possible or
server-only.
Not fun. Very prone to errors. HUGE security nightmare waiting to
happen.
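To make the burden concrete: every mutating event needs an explicit
authority tag, and anything untagged has to default to server-only or
you've built the security hole yourself. A rough sketch (all names are
mine, not from any real codebase):

```python
from enum import Enum, auto

class Authority(Enum):
    CLIENT_OK = auto()    # client may simulate and act on it locally
    SERVER_ONLY = auto()  # only the server decides; client just renders

# Every non-const event type must be classified by hand -- this table
# is exactly the error-prone bookkeeping being complained about above.
EVENT_AUTHORITY = {
    "move":             Authority.CLIENT_OK,    # pure motion: safe to predict
    "collision_warn":   Authority.CLIENT_OK,    # cosmetic warning only
    "collision_damage": Authority.SERVER_ONLY,  # modifies hit points
    "destroy_ship":     Authority.SERVER_ONLY,  # irreversible
}

def client_may_apply(event_type):
    # Unclassified events default to SERVER_ONLY: never trust the
    # client with an event nobody remembered to classify.
    return EVENT_AUTHORITY.get(event_type, Authority.SERVER_ONLY) is Authority.CLIENT_OK
```

The safe default is the whole point: forgetting to classify an event
degrades to extra latency, not to a client that can destroy ships.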
> I don't know a lot about how the graphical muds work, but here's
> some initial thoughts I've been sort of synergisting from a couple
> of threads:
First thoughts here (nothing new):
There are three basic reasons to use a client (plus their permutations):
1) Conserve server CPU
2) Conserve net bandwidth
3) Local customisation (presentation)
#3 is ignorable as it can all be done on the server (cf CURSES, remote
X applications (yeah, make the client an X interface on the server and
then remote that -- wanna talk lag?) etc).
Server CPU is not and should not be a challenged resource. CPU is
cheap. Conversely, bandwidth is not cheap, and in several cases can
be conserved by offloading CPU work from the server to the client (eg
rendering of current view in a graphical MUD).
> Basically, considering long ago threads talking about virtual room
> generation, combined with the thread talking about creating object
> instantiations at interaction time, plus most everyone's "large
> world" concept, and finally assuming a perfect world fully encrypted
> ( and unhackable, yeah, right )
Note: There are a large number of encryption/compression (they're
almost synonymous in many ways) protocols which share the
characteristic of being expensive to encrypt, but being extremely
cheap to decrypt. Arithmetic compression is a case in point.
Outside of those, a simple XOR (a one-time pad, to use the term that
escaped me) where the XOR'ed-against value comes from a shared and
separate source (eg the contents of a file, or bytes on a CD (music
CDs are incredibly popular for this in the heavy encryption market,
as, as long as the choice of CD is kept secret, the cipher is
effectively guaranteed secure: the key is indistinguishable from
random)) would work well with negligible overhead.
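A minimal sketch of that XOR scheme, with a key file standing in for
the shared CD. Note it is only a true one-time pad if no key byte is
ever reused; the modulo wrap here is purely so the sketch can't index
out of range:

```python
def xor_with_keystream(data: bytes, key: bytes, offset: int = 0) -> bytes:
    # XOR each payload byte against the shared key source, starting at
    # a previously agreed offset. XOR is its own inverse, so the exact
    # same call decrypts.
    return bytes(b ^ key[(offset + i) % len(key)] for i, b in enumerate(data))

# Both sides hold identical key bytes (read from the agreed CD/file)
# and keep the choice secret.
shared_key = b"bytes both ends read from the secret shared source"
msg = b"kill bubba"
wire = xor_with_keystream(msg, shared_key, offset=17)
assert xor_with_keystream(wire, shared_key, offset=17) == msg
```

The encrypt and decrypt cost is one XOR per byte, which is about as
close to negligible overhead as a cipher gets.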
> I was thinking this:
> Client: Handles _all_ interaction with the world, based on it's most
> current sync with the primary db at the server, and updates the
> server with information changes.
A problem which I've been examining of late is coordination between
the server and client. Unlike the SNA world we can't guarantee that
either end is actually there at any instant. The best we can ever say
is that the other end was alive as of the last packet.
That __may__ be good enough. I'm not sure.
Given a client which is doing predictive work on the scene it
represents to the user, and a server (push or pull) which updates the
client with the requisite data for verisimilitude, there is a problem:
> sc
You are near to death.
> l
Bubba is here.
> kill bubba
You attack Bubba. // At this instant Bubba's client disappears.
Bubba pushes you with his shield. // Predicted first act of Bubba.
You are dead. // Result of predicted action.
Bubba suddenly vanishes! He has logged out.
You are alive.
This is of course a variation on the colliding space ships, and
suffers from the "who owns the decision" problem described above. The
key point however is that prediction only goes so far.
I suspect that it is safe for the client to ONLY predict motion, not
the results of motion, or any side effects (eg Bubba walks to the
waterfall and the water splashes on him). The client predicts
already-started motions only. All actual decision making is left up
to the server.
If a leaf is falling from a tree, the client knows its position and
velocity, and can thus (largely accurately) update the user's screen
with this moving leaf without any other updates from the server for
the duration of its fall. The server would define a start and end
point for the leaf's motions along with the velocity (start at
position X, move at velocity Y, acceleration Z, and stop or request
update when at position Q).
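That leaf contract reduces to one parametric message from the server
and pure client-side extrapolation thereafter. A 1-D sketch (names and
units are mine; a real client would run this per frame and fire off
its update request on hitting the stop position):

```python
def predict_height(h0, v, a, t, ground=0.0):
    """Client-side dead reckoning for the falling leaf.

    h0: start height, v: initial velocity, a: acceleration (negative
    for falling), t: seconds since the server's one message. The leaf
    is clamped at the ground -- the position Q where the client stops
    predicting and asks the server what actually happened.
    """
    h = h0 + v * t + 0.5 * a * t * t
    return max(h, ground)

# From a 10 unit height with gravity-like acceleration, the client can
# draw the whole fall with zero further server traffic.
for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(t, predict_height(10.0, 0.0, -9.8, t))
```

If Bubba grabs the leaf mid-fall, the server's correction simply
replaces this local extrapolation, which is exactly the visible "snap"
described below.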
If Bubba happens to teleport in and grab the leaf shortly after the
client takes over showing the leaf falling without server updates,
then the server would update the client. The result (given big
latencies) is that the user would see the leaf falling, possibly all
the way to the ground, and then Bubba suddenly appearing and grabbing
the leaf in mid air.
I suspect that the lack of sync and logical consistency in the leaf
position will be willingly overlooked by users.
> Server: maintains the primary db, posting changes sent from the
> clients, and routing update messages to those clients that are
> currently "live" with any data that has been changed by another
> client.
Yup. This posits that the server is responsible for recording client
state and thus maintaining sync. More expensive, but a lighter-weight
model has the client responsible for requesting updates/refreshes for
the data it knows it wants, and the server responsible only for
interjecting new objects that are within the client's view (the
client is then responsible for requesting updates).
I don't like either model. Something in the middle where the server
broadcasts update tags ala
you-might-want-to-know-about-object-at-location-X to all relevant
clients, and the clients selectively ack with YES-TELL-ME-ABOUT-IT
seems better.
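That middle model might look like the following (message names taken
from the tag/ack wording above; the distance-based interest test is my
own stand-in for whatever relevance rule a real server would use):

```python
def server_notify(objects):
    # Cheap broadcast: object id and location only, no state payload.
    # objects: list of (obj_id, (x, y)) pairs.
    return [("NOTIFY", obj_id, loc) for obj_id, loc in objects]

def client_ack(notifications, my_loc, radius):
    # The client filters tags against its own area of interest and
    # asks for full data only on the hits.
    wanted = []
    for _tag, obj_id, (x, y) in notifications:
        if (x - my_loc[0]) ** 2 + (y - my_loc[1]) ** 2 <= radius ** 2:
            wanted.append(("YES-TELL-ME-ABOUT-IT", obj_id))
    return wanted

tags = server_notify([("leaf", (1, 1)), ("bubba", (50, 50))])
print(client_ack(tags, my_loc=(0, 0), radius=5))
```

The server never ships heavy state unsolicited, and the client never
has to poll blindly: each side pays only for what the other has
actually signalled an interest in.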
> Basically, I'm going under the assumption ( in flying, we call it
> the Big Sky Theory ) that a large portion of your world isn't in use
> at any one time, and those portions that are in use aren't being
> used by very many people. At any one time, realistically, someone at
> the client level isn't using much of the total world db, and could
> d/l updates as things change. You could even release your primary
> world information ( that's static ) on CD, and hold update files on
> the player's hard drive as things change.
Bingo.
> When I interact with the world, I change things that eventually need
> to be posted to the db. The client handles all of this, then sends
> an update packet to the server. The server tracks who needs
> immediate notice of any db sections, and sends those update packets
> immediately to those clients that are holding any recently updated
> db "sections" live on the client side. You would definitely need to
> keep the client/server synced tightly, but not too tightly, I think,
> just enough so that several updates aren't missed, causing the
> client's world to deviate too far from the primary db.
The advantage of keeping to the client-request only model is that it
allows clients on slower connections to selectively disregard certain
updates in an effort to conserve bandwidth (consider the case of a
room containing 5,000 MUDders all engaged in frantic MUDding activity
-- you don't WANT all that data -- just what's relevant to your
interest or focal point).
cf LambdaMOO's lounge.
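One way a slow client could do that disregarding: rank incoming
updates by distance from the focal point and keep only what fits a
per-tick byte budget (the budget and the relevance rule are both
invented for illustration):

```python
def throttle(updates, focus, budget):
    # updates: list of (obj_id, (x, y), size_in_bytes).
    # Nearest-to-focus first; everything past the byte budget is
    # simply dropped -- the 4,997 MUDders you weren't watching anyway.
    by_relevance = sorted(
        updates,
        key=lambda u: (u[1][0] - focus[0]) ** 2 + (u[1][1] - focus[1]) ** 2,
    )
    kept, used = [], 0
    for obj_id, _loc, size in by_relevance:
        if used + size > budget:
            break
        kept.append(obj_id)
        used += size
    return kept
```

A fast connection just sets a huge budget; the protocol itself doesn't
change between the modem player and the T1 player.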
> Since the
> client handles most of the work, instead of having to send
> _everything_ to the server for processing, we get the effect of
> distributed processing, with some additional overhead from db
> updates, but not nearly as much net traffic as a "normal", dumb
> terminal client would have.
Nope. It's more expensive than a standard telnet connection to a text
MUD (protocol overhead, ACK/NAK, db updates, sync traffic etc), but
it's cheaper than attempting to run a remote X app.
> Depending on the granularity of your world, I wouldn't need to
> update the primary db with "small" changes very regularly. If
> someone inscribes a hidden message on a room wall, that wouldn't be
> a high priority update...you would just assume ( as the
> player/client side ) that you had missed the ( new ) message on
> previous inspections. The primary db wouldn't need to know that
> player X had pushed a particular table an inch and a half to the
> left right away, unless/until the movement of the table implied some
> more intricate world behavior ( like blocking a door ).
<nod>
Programmatically determining what is and is not a significant state
change could be interesting, however.
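One crude stab at it: per-property thresholds, with gameplay-relevant
flags always significant. Everything here (thresholds, property names,
units assumed to be metres) is invented for illustration:

```python
def is_significant(prop, old, new):
    # Decide whether a state change is worth pushing to the db now,
    # or can wait to be batched with the next routine sync.
    if prop == "position":
        dist = sum((a - b) ** 2 for a, b in zip(old, new)) ** 0.5
        return dist > 0.5  # the table nudged an inch and a half: batch it
    if prop in ("blocks_door", "hp", "owner"):
        return old != new  # gameplay-relevant flags always matter
    return False           # default: not urgent (the hidden inscription case)
```

The hard part the paragraph above is pointing at is exactly that
middle clause: "blocks_door" only becomes true because of an
accumulation of individually insignificant table-nudges, so the
significance test can't be purely per-event.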
--
J C Lawrence Internet: claw@null.net
(Contractor) Internet: coder@ibm.net
---------(*) Internet: claw@under.engr.sgi.com
...Honourary Member of Clan McFud -- Teamer's Avenging Monolith...