>>> "F. Randall Farmer" wrote
>
> Hey! Something I know something about! :-) I'm going to be terse in my reply,
> based on the senior level of postings to this list. If I'm being TOO terse
> for this audience, just let me know and I'll explain in more detail.
I, on the other hand, tend to think out loud - so apologies in advance for
a long email... ;)
> >Phil's comment on that was that it's impossible to construct a decent fight
> >system, for instance, under that setup, because you can never guarantee that
> >an object has actually decremented it's hit points when it's been asked to.
>
> That depends on what you mean by "guarantee." How about a warrantee?
Yeah, warrantee is probably the better word.
>
> What you need is a contract. :-)
*nod* That's what it's boiling down to. I'll note now, we're not actually
implementing remote objects at the moment - all data is currently stored on
the server - but the topic is of some interest. In particular:
The game world is open source - all of it - and players have the ability to
write code that executes at the same level as, for example, the look() or
move() methods. My original plan, as I said, was to trust all objects, and
grant the ability to tinker with an object only to 'trusted' people - the
traditional 'wizard' concept. But that's limiting. This concept of
'contracts' seems to make sense - the object has to enter into a contract
to be allowed to play with other objects; as part of that contract, it'll
promise not to tinker with its own stats in bad ways, and not to lie
about its stats to others (unless it's supposed to ;). In some cases,
the object may not even be allowed to know its own internal state.
The only way I can see to enforce that, if the players can write their own
methods, is to take permissions for the stats in question away from them. It
still makes sense to store the stats in the object, I think, as it _is_
object-specific data, but the object's other methods may not have
permission to see/modify parts of itself.
At the end of the day, the mediator provides methods which maintain the things
the mediator is concerned about. Whether we disable access to the stats,
but allow anything else, or we disable the ability to write methods
full-stop and allow only the mediator-provided ones is a question for the
server admin, I suspect.
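Something like this, off the top of my head - the mediator owns the
authoritative stats, so player-written methods can read but never directly
write them. (All the names here - Mediator, GameObject, "hp" - are made up
for illustration, not anything we've actually implemented.)

```python
class Mediator:
    """Holds the authoritative stats for objects under contract."""
    def __init__(self):
        self._stats = {}          # object id -> {stat name: value}

    def enroll(self, obj, stats):
        self._stats[id(obj)] = dict(stats)
        obj._mediator = self      # the object keeps only a reference

    def get(self, obj, stat):
        return self._stats[id(obj)][stat]

    def damage(self, obj, amount):
        # The mediator, not the object, decides how damage is applied.
        s = self._stats[id(obj)]
        s["hp"] = max(0, s["hp"] - amount)
        return s["hp"]


class GameObject:
    """Player-writable methods can query the mediator, but there's no
    direct path to the stats dict from here."""
    def look(self):
        return "hp: %d" % self._mediator.get(self, "hp")


med = Mediator()
orc = GameObject()
med.enroll(orc, {"hp": 10})
med.damage(orc, 3)
print(orc.look())   # hp: 7
```

The point being: a player can rewrite look() all they want, but the only
way to change hp is through the mediator's damage(), which enforces the
contract.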
> The damageable object provides a contract that it will apply damage
> appropriately according to some "convention" agreed to in the contract.
>
> There are two interesting forms of warrantee, either context-verified
> "Aren't you supposed to be dead? Get lost!" or third party
> "inspection" warrantee (signed component.)
>
> I like the latter for distributed objects. You
> just include signed code for "AD&D addition 4 combat library, v3.05 signed
> by EGG Systems, Ltd" into your object and then you can use it on
> battle-server contexts accepting those certificates.
Ok, the problem with this is that everything's open source, and built on a
platform that prides itself on its dynamism. In particular, it would be
possible for someone to subvert the basic python libraries with relative
ease - the code itself is still the same signed code, still trusted, but
the platform that it's running on is not. This seems to apply at any level
you care to point at - the level below is subvertible, and not
signed/signable. I suspect everyone has that problem, whether they
acknowledge it/consider it an issue or not.
So signed components don't really work, unless they're on the server. We
could list allowed methods/mechanics in our 'contract' on the server, and
allow people to choose between different mechanics - I might want to build
a char that gains x as it loses y, someone else might want to gain z as
they lose y, so the 'decrement y' method can be different, but still one
of the accepted ones. Hrm - potential for building races from scratch,
interesting idea.
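To make that concrete - a sketch of a server-side whitelist, where the
contract only accepts a mechanic if it's one of the registered, sanctioned
ones. (Names like SANCTIONED and gain_x_lose_y are invented for the
example.)

```python
SANCTIONED = {}

def sanctioned(fn):
    """Register a mechanic as one the server will accept."""
    SANCTIONED[fn.__name__] = fn
    return fn

@sanctioned
def gain_x_lose_y(stats, amount):
    stats["x"] += amount
    stats["y"] -= amount

@sanctioned
def gain_z_lose_y(stats, amount):
    stats["z"] += amount
    stats["y"] -= amount

def enter_contract(mechanic_name):
    # The player picks a mechanic by name; anything unregistered is refused.
    if mechanic_name not in SANCTIONED:
        raise ValueError("mechanic not sanctioned: %r" % mechanic_name)
    return SANCTIONED[mechanic_name]

stats = {"x": 0, "y": 10, "z": 0}
enter_contract("gain_x_lose_y")(stats, 3)
print(stats)   # {'x': 3, 'y': 7, 'z': 0}
```

Different characters pick different sanctioned mechanics, but nobody gets
to run a 'decrement y' the server hasn't blessed.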
Anyway, context-verified seems to make sense, but then the only way to make
sure that the object is valid within the context is to store what 'valid'
is - at which point, we've essentially migrated the data to the server in a
distributed world, or removed the object's ability to tinker with its own
data in the server-centric, coding-enabled world.
So, signed would mean the object retains permissions over its own stats,
but can only run certain methods (and no others, that's important). That
also doesn't distribute well, as you can't sign down to the CPU - may not
be an issue in most cases, but there's always smart alecs out there.
Context-verified would mean we essentially store copies of what the data
should be (or at least, checkpoints of some sort for "important" stuff) on
the server, and duplicate any crucial calculations - I have a gut feeling
that degenerates quickly into doing everything on the server.
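Context-verified would look roughly like this - the server keeps its own
record of what each stat should be and turns away objects whose claimed
state has drifted. (Purely a hypothetical sketch; Context, checkpoint,
admit are made-up names.)

```python
class Context:
    """A battle-server context that verifies objects against checkpoints."""
    def __init__(self):
        self._checkpoints = {}    # object name -> expected stats

    def checkpoint(self, name, stats):
        # Snapshot the server's idea of what this object's stats should be.
        self._checkpoints[name] = dict(stats)

    def admit(self, name, claimed_stats):
        # "Aren't you supposed to be dead?  Get lost!"
        expected = self._checkpoints.get(name)
        if expected is None or claimed_stats != expected:
            return False
        return True


arena = Context()
arena.checkpoint("moebius", {"hp": 7})
print(arena.admit("moebius", {"hp": 7}))    # True
print(arena.admit("moebius", {"hp": 700}))  # False - tinkered stats rejected
```

And you can see why it degenerates: to keep the checkpoints honest, the
server has to re-run every calculation that could change a stat anyway.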
> OBVIOUS STATEMENT AHEAD:
> In a system of interacting distributed objects, each object is trusted
> to hold only the state that the object can be trusted with. (Huh?)
As obvious as it is, that's the crux of the matter - for a distributed
object system. For a system where players can potentially write their own
methods, things are more complex - the concept remains the same, but the
implementation is, um, interesting ;)
I think our mediators are fast approaching being the place that games
mechanics get implemented. Mediators are the physical manifestation of a
contract - they enforce the contract. Hrm - they could provide
'sanctioned' methods for certain things, that get bound to the objects
when they enter into the contract... *ponder*
My concern is they end up having all the 'logic' for the objects - which is
wrong and bad. Trying to trust the objects to know about themselves,
whilst not allowing them to futz things up badly, seems to be a really
nasty balancing act...
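One possible shape for "sanctioned methods bound at contract time" - the
mediator injects its own methods onto the object when it enrolls, so the
object carries the behaviour without owning the logic. (Again, heal() and
friends are invented for the sketch.)

```python
import types

class Mediator:
    def heal(self, obj, amount):
        # The cap lives in the mediator's logic, not the object's.
        obj.hp = min(obj.max_hp, obj.hp + amount)

    def bind(self, obj):
        # Attach the mediator-provided method as if it were the object's own.
        obj.heal = types.MethodType(
            lambda self, n, m=self: m.heal(self, n), obj)


class Thing:
    def __init__(self):
        self.hp, self.max_hp = 5, 10


t = Thing()
Mediator().bind(t)
t.heal(20)
print(t.hp)   # 10 - capped by the mediator's logic
```

The object "knows" how to heal itself, but the knowing was handed to it by
the contract - which is maybe the balancing point between the mediator
hoarding all the logic and the object being free to futz things up.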
> Last year Communities.com built and alpha tested a 100% distributed
> 3D graphical object system which had signed resources (media, behavior)
> where each participant used "Agency" software which was both client
> and server at the same time. The avatar for the user was locally
> hosted and you (via URL) could visit another persons locally hosted
> "turf". Objects carried had their resources validated upon arrival
> according to certification requirements set by the turf owner/host.
> Though it was not combat oriented, that test demonstrated that such
> a system is workable and has the desired properties.
Ah.  But can you guarantee on an object that its behaviour is the expected
behaviour?
Taking EverQuest's recent sniffer as an example - it seems they fed a list
of all players in the vicinity to the player, and let the player's client
drop the invisible ones. Can you trust that the player's client (in
essence, the player object) is playing by the rules? I don't think so,
unless you go the route of signing, and a short lifespan for the clients to
foil any hackers out there.
In our case, we're laying the guts of the object open for the hackers, on
purpose - so removing those aspects that they shouldn't be able to tinker
with is the only safe way, I think, both in terms of distributed objects,
and in terms of a central server holding all data and objects, where
player-provided methods are run.
> There were some problems with the prototype, though. The nastiest
> one was connections: You've got 12 people all "hosting" their own
> objects. That means 12 connections for each person, one each to the
> other folks. We worked out an alternative approach, but you can see
> what taking a naive approach can lead to. :-)
Heh.
>
> Ask yourself this question: What happens if I leave a VASE that I
> host at your TURF, which you host. Then I take my machine off-line.
> [This is a puzzle for the readership. There are several answers.]
Options:
1) vase vanishes - ouch
2) vase migrates when it's first taken to remote turf, and stays there when
I disappear - persistent objects in a non-persistent universe.  I like this
idea, but it really only delays the problem - and allows for a, um,
interesting DoS attack ;)
3) vase becomes 'inviolate' (a la linkdead players in traditional muds) -
breaks the illusion, tho, and suffers 2's problems (garbage collection?)
Regardless, the essential problem is one of ownership (who continues to
have what rights over the object) and function (does the object then
function on the turf's server, or does it cease functioning altogether?)
Does that cover it, or is there something else there?
> >One alternative would be to enforce the keys thing above by migrating
> >object data to the server where the object relinquishes control over it - so
> >in the degenerate case, all objects carry their own data, but when they
> >choose to participate in a context, they relinquish control of their data
> >not only by handing the keys over, but by handing the data over as well.
>
> YUCK! Don't go there! :-)
Why not? What if we introduced a namespace concept - my strength becomes
moebius/base_server/strength on my server, and moebius/communities/strength
on yours? There's no requirement that servers trust each other, so there's
no requirement this avatar share any attributes at all between servers -
except identity: we open the door to potentially recognising the same
player at a different mud. *shrug* Usefulness? near-zip. Cuteness?
pretty high ;) (Thinks: You could store a signature instead of all the
player data, and store player data client-side - again, usefulness
near-zero, but an interesting thought to tinker with)
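For what it's worth, the namespace idea in code - the same avatar carries
per-server stat namespaces, so no server needs to trust another server's
values, and only identity is shared. (Avatar and stats_for are invented
names.)

```python
class Avatar:
    def __init__(self, identity):
        self.identity = identity    # the one thing shared between servers
        self._ns = {}               # server name -> that server's stats

    def stats_for(self, server):
        # Each server sees (and trusts) only its own namespace.
        return self._ns.setdefault(server, {})


moebius = Avatar("moebius")
moebius.stats_for("base_server")["strength"] = 18
moebius.stats_for("communities")["strength"] = 3   # same avatar, not trusted
print(moebius.stats_for("base_server")["strength"])   # 18
```

Usefulness still near-zip, but the "recognise the same player at a
different mud" bit falls out of it for free.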
>
> Where you are using "keys", most people I know use the word "capabilities." :-)
Yeah, I know - concepts I grasp, but the words just don't sink in :(
In truth, I'm still plugging through the erights web site - there's some
interesting stuff there, I'm not sure it's all directly applicable, but
it's a hell of a run-down on the issues re: object relationships.
KevinL