On Mon, 14 Feb 2000 21:36:32 -0800 (PST)
Brandon J Rickman <dr.k@pc4.zennet.com> wrote:
> I've always been bothered by the "never trust the client" law. It
> is misleading: it says you can't trust an _individual_ client, but
> people interpret this as "you can't trust clients in general".
> Shouldn't one be able to devise a robust distributed network
> across several clients, such that errors (or cheats) on one client
> will be detected when compared against other client results?
<nod>
I'd classify the problem in two ways:
Protocol weaknesses
Server trust of data from client
The two are closely related, yet distinct.
In ShowEQ's case the EQ server provided significantly more data to
the client than it either needed or was capable of displaying.
ShowEQ took advantage of this by displaying much more to the user
than that player was normally capable of seeing.
UO demonstrated numerous cases of both breaches, in not trapping or
reacting against violations of the extant protocol definition (the
rule of "Rigorously adhere to standards in what you produce and be
generous in waht you accept" doesn't apply in this case), and in
blindly accepting and trusting variously wrong values from the
client.
To get it right requires a peculiarly paranoid way of thinking.
The problem is that the problem is subtle. The reason you go
client/server is twofold (outside of intelligent display clients):
1) To conserve client/server bandwidth (often the data for a
compute is smaller than the results of the compute).
2) To take advantage of client side compute capabilities and thus
save server resources
The first goal is usually fairly easy to accomplish and usually
doesn't provide canonical security holes until you get into the
realms of attempting to predict traffic in the case of packet loss:
Bubba's client ceases sending data, presumably suffering from
packet loss.
Bubba was last moving in Y direction at velocity Z.
By prediction at time T, Bubba is at position Q.
Boffo at position V, shoots position Q at time T (thus hitting the
predicted Bubba).
Bubba now sends traffic revealing that in fact it is now at
position U instead, and that it has shot position V (where Boffo
is known to actually be).
Note first: The above example doesn't state that Bubba's client is
compromised or doing anything untoward. The exact same scenario
could occur with a compromised client or thru normal game play.
This is what makes solving the problem so difficult: Good Guys and
Bad Guys do exactly the same things, only the intentions are
different.
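To make the prediction step concrete, here's a minimal sketch (in
C++, with all the positions and velocities invented) of the dead
reckoning the server does while Bubba's traffic is missing:

  // Dead-reckoning sketch: extrapolate a silent client's position
  // from its last reported velocity.
  #include <cstdio>

  struct State {
      double x, y;    // last reported position
      double vx, vy;  // last reported velocity
      double t;       // timestamp of the last report
  };

  // Predict where the client "should" be at time t, absent new data.
  State predict(const State& last, double t) {
      State s = last;
      s.x += last.vx * (t - last.t);
      s.y += last.vy * (t - last.t);
      s.t = t;
      return s;
  }

  int main() {
      State bubba{10.0, 20.0, 1.0, 0.0, 0.0};  // heading +x at 1 unit/sec
      State q = predict(bubba, 5.0);           // position Q at time T = 5
      std::printf("predicted position Q: (%g, %g)\n", q.x, q.y);  // (15, 20)
  }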
There are two holes in the above scenario:
a) The server accepts data from the client.
To cover up network problems the server manufactures events based
on data it doesn't have __and__ allows the client to correct the
server when communications are resumed. This allows a compromised
client to constructively manipulate the data it does send the server
to take maximal advantage of the arbitrage between what the server
predicts and what the client can correct in the server's knowledge.
The problem here is that either obvious solution (allowing the
correction or disallowing the correction) can create unpleasant and
reality-breaking effects for "normal" players. If the above occurred
because of normal network lag and the server accepted the
correction, then Boffo sees his shell hit Bubba, and then Bubba
suddenly jump away unharmed. If the server doesn't accept the
correction, Bubba sees himself somewhere else, then suddenly jumps
to another location only to be instantly shot by Boffo. Either way
one
end is going to be unhappy. The fact that Bubba could have
manufactured the scenario artificially via a compromised client (and
thus ensuring that his shot in the last step hits Boffo) only makes
it worse.
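The usual compromise (note: compromise, not fix) is to accept the
correction only if it falls within a plausibility envelope. A rough
sketch, with the speed cap invented:

  // Sketch: accept a client's position correction only if it lies
  // within the distance the client could plausibly have covered
  // since its last confirmed report.  MAX_SPEED is an invented cap.
  #include <cmath>
  #include <cstdio>

  const double MAX_SPEED = 5.0;  // world units per second (assumed)

  bool plausible(double px, double py,  // server-predicted position
                 double cx, double cy,  // client-claimed position
                 double dt)             // seconds since last report
  {
      double dist = std::hypot(cx - px, cy - py);
      return dist <= MAX_SPEED * dt;    // inside the movement envelope?
  }

  int main() {
      // Bubba claims (30,20) after 2s of silence; we predicted (15,20).
      if (plausible(15, 20, 30, 20, 2.0))
          std::puts("accept correction (and eat Boffo's complaint)");
      else
          std::puts("reject correction (and eat Bubba's complaint)");
  }

Note that this merely shrinks the arbitrage to the size of the
envelope: a compromised client just cheats up to the tolerance,
which is exactly the sanity-checking trap described under #2 below.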
It gets worse, however. If you decide to allow the correction,
then you instantly open up the second major hole:
b) The client/server protocol is broken. In the above the server
provides data to Bubba's client about the shot and his predicted
position _prior_ to Bubba explicitly stating his position. This
allows Bubba's client to manipulate the gap between what the server
tells it, and what the server will allow as corrections. Voila!
Instant exploit.
It's not quite that simple, however. If instead the server mandated
that Bubba tell the server his position before revealing the fact
and presence of the shot, other exploits are possible.
eg: Bubba's client observes the demand, realises that a critical
event has occurred, and therefore manufactures an artificial and
"wrong" data set that is "most likely" to aid Bubba's survival ("Oh
yeah, he just jumped back into the foxhole").
Or, what if Bubba's client simply never replies to the demand
(possibly caused by really bad packet loss)? Is that a hacked
client, or a bad network? Does the server simply guess, time-out,
or pick an answer and enforce it?
Whichever position you take, there is a way to exploit the
results.
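For reference, the "tell me your position first" mandate is usually
implemented as a commit/reveal exchange: the client commits to a
hash of its position before the server reveals the shot, then
reveals the position and nonce afterwards. A sketch below, using a
toy (non-cryptographic) hash; note that it does nothing about the
client that simply never replies:

  // Commit/reveal sketch: the client commits to its position before
  // the server reveals the shot, then reveals position + nonce.
  // Toy FNV-1a hash for illustration; use a real cryptographic hash.
  #include <cstdint>
  #include <cstdio>
  #include <string>

  uint64_t fnv1a(const std::string& s) {
      uint64_t h = 1469598103934665603ull;
      for (unsigned char c : s) { h ^= c; h *= 1099511628211ull; }
      return h;
  }

  uint64_t commit(double x, double y, uint64_t nonce) {
      return fnv1a(std::to_string(x) + "," + std::to_string(y) +
                   "," + std::to_string(nonce));
  }

  int main() {
      // Client side: commit before the server reveals anything.
      uint64_t nonce = 0xdeadbeef;             // random in practice
      uint64_t c = commit(15.0, 20.0, nonce);  // sent to the server

      // ... server reveals the shot, client reveals (x, y, nonce) ...

      // Server side: check the reveal against the prior commitment.
      bool ok = (commit(15.0, 20.0, nonce) == c);
      std::printf("reveal %s\n", ok ? "matches commitment" : "is a lie");
  }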
In #2 (saving server-side compute), it gets even worse. If you
can't trust the data from the client you are forced to one of two
positions:
a) Sanity checking all (or most) of the data from the client to
try and detect exploits. This of course wastes the server-side CPU
that you are trying to save, and then only ensures that your
compromised clients don't cheat (in the ways you check) by more
than your sanity check tolerances.
b) Only accepting trivial and non-significant data from the
client. The problem is that trivial data computation usually
doesn't save you that much CPU, and worse, what defines data as
trivial is very likely to vary widely during a game world's
development. An item that is "trivial" with today's game world
definition may well be critical with the changes made tomorrow.
This leaves you in a race against your players to keep your
"trivial" definitions up to date faster than their detections.
To get it right requires a peculiarly paranoid way of thinking. A
really sick puppy. There's a reason I don't do much security work
any more.
Underneath all this is a more insidious assumption:
The player is physically capable of reverse engineering your
protocols, data structures, memory images, and algorithms, and is
capable of manipulating them to his own advantage.
In an Open Source world this is by definition true (given a capable
user). A common argument is that this is false in a Closed Source
environment. Especially in a commercial setting I really doubt this
is true. Consider:
While I'm sure we'd all like to think that the guys at UO, AC, and
EQ are terribly chummy and wish each other the very best, corporate
espionage, marketing pressures, and rivalry for the same player bases
do exist. As good and well intentioned as the programmers and
designers at any one of those companies are, there is little to
say that the following couldn't happen:
Company X, acting thru a proxy and thus hiding its identity, pays
a contractor to reverse engineer company Y's client using debuggers,
circuit emulators, etc. Company X then releases a number of
exploits based on the data gained. The cost to X is perhaps a few
hundred thousand in contract and equipment fees for someone
competent. Legally the reverse engineering is above board
("competitive analysis").
The really pleasant part about this is that it doesn't *require* a
rich company to pay for access to expensive equipment and debuggers.
I have all that stuff right here right now from the various clients
I've worked for. I'm sure Lambert and the other contractors and
professional programmers on the list do as well. Some copy
protection cracks required significant investments from the crackers
-- but that didn't prevent the cracks from being made.
"Oh, but we'll us NT or some other OS and use encryption and nifty
stuff and do everything at proper security levels!" Which will work
just fine until I run it under one of the readily available
instrumented kernels from MS with an in-circuit debugger. At that
point you're just as secure as DeCSS was... and I presume you're all
familiar with how easily that was cracked.
"But but but we'll use public key cryptography and digital
signatures and CRCs and sanity checking and spot checks" -- and
someone will crack that just as easily as DeCSS with a datascope and
then plug their bum data into your pristine and correct encryption
and signature functions without you ever being the wiser.
In the end, you just can't trust anything stored on, reported from,
not reported from, in any way passing thru, or directly related to
a client.
So there go your gorgeous plans of saving bandwidth and CPU down the
crapper.
Sorry.
So, in the grand tradition of programmers and security experts in
the face of market pressures everywhere, you compromise. You try
and figure out a way that you can get your usability gains and not
open up too many holes, not get hurt too badly, and not lose too
many nights' sleep doing rush rush hill ten fixes to a hole you knew
was there when you started out but that you now have to fix without
breaking backwards compatability.
Not that I have any familiarity with this stuff of course.
So, you're going to compromise. Start out with that awareness.
Realise it from the start and then make sure that you know exactly
how you are compromising and exactly what that might expose -- and
then realise that you will ALWAYS be exposing more than you think
because of interrelations and dependencies that you didn't think of.
So, you're going to compromise security for usability, and you're
going to pray that you get it right.
The first thing to do is to define very very well what you mean by
"trust". What exactly is your "trust model"? What determines for
you what is trustworthy and what is not.
Next determine exactly what you are going to do when you detect
violations of your trust model. You get obviously bum data from the
client. You get port scanned. You detect a smurf attack. You
detect contrived network lag. Whatever. What are you going to do?
What exposures and extensions does your reaction have on your trust
model? There will be some. What are they?
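In practice the answer usually ends up as an explicit table mapping
violations to reactions, something like the sketch below (the
categories and reactions are invented). Each entry is itself an
exposure: "disconnect on contrived lag", for instance, hands an
eviction tool to anyone who can fake lag.

  // Sketch: an explicit map from detected violations to reactions.
  // Every reaction feeds back into the trust model.
  #include <cstdio>

  enum Violation { BUM_DATA, PORT_SCAN, SMURF_ATTACK, CONTRIVED_LAG };
  enum Reaction  { LOG_ONLY, THROTTLE, DISCONNECT, BAN };

  Reaction policy(Violation v) {
      switch (v) {
          case BUM_DATA:      return DISCONNECT;
          case PORT_SCAN:     return LOG_ONLY;
          case SMURF_ATTACK:  return THROTTLE;
          case CONTRIVED_LAG: return LOG_ONLY;  // indistinguishable
      }                                         // from a bad network
      return LOG_ONLY;
  }

  int main() {
      std::printf("bum data -> reaction %d\n", policy(BUM_DATA));
  }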
Okay. Now that you have the extensions and changes, go back and
redefine your trust model with those in place and repeat. Keep
repeating until the system doesn't change any more. If it changes
too much, your basic trust model is broken, most likely in the base
assumptions it is built on.
Okay, so you have a trust model. How do you adapt your server model
to that trust model? Does it require changes? (if so repeat from
the beginning)
Now apply your game world to your trust model.
Now apply your client/server model to your trust model.
etc.
It really isn't fun. And you pray a lot.
If you're into these sorts of things I strongly suggest that you
read Bruce Schneier's books and essays:
http://www.counterpane.com/
and subscribe to the Crypto-Gram Newsletter at:
http://www.counterpane.com/crypto-gram.html
A good starting read to get you thinking in the right groove is
Bruce's ever well educated rant on PKI (public-key infrastructure):
http://www.counterpane.com/pki-risks.html
> In other words, ask the client run by player A to compute the
> results of one round of combat. But also compute that same round
> of combat on two (or more) other clients. So you've got a
> redundant system.
Until a significant enough percentage of your players are playing
with compromised clients. Or, to translate, all this really does is
make effectively compromising the client more difficult as you now
have to compromise some N clients rather than just the one.
Not impossible, just more difficult.
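Mechanically the redundancy scheme boils down to a majority vote
over the N reported results, as in the sketch below. The vote is
only as honest as the majority:

  // Sketch: majority vote over redundant client computations.  If a
  // majority of the chosen clients are compromised (or colluding),
  // the wrong answer wins and the honest client looks like the cheat.
  #include <cstdio>
  #include <map>
  #include <vector>

  // Return the result reported by the most clients.
  long vote(const std::vector<long>& results) {
      std::map<long, int> tally;
      for (long r : results) ++tally[r];
      long best = results.front();
      for (const auto& kv : tally)
          if (kv.second > tally[best]) best = kv.first;
      return best;
  }

  int main() {
      // Two hacked clients agree on 999; the honest one said 42.
      std::vector<long> results = {42, 999, 999};
      std::printf("accepted result: %ld\n", vote(results));  // 999
  }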
> If player A's version is wildly different from the other two,
> assume that she has a hacked client and dump her from the game.
> Similarly, if player A's calculation agrees with player B's
> calculation but not player C, then C is untrustworthy.
Cool. So I can now use hacked clients to creatively evict players I
don't like from the game once I get enough of them in use. It's easy
enough to add interclient detection methods so hacked clients can
detect other hacked clients without the server's knowledge.
> Of course now the server will have to send out more data for each
> transaction, and make all the decisions about which clients to
> trust, in addition to maintaining the game world.
Bingo.
> So if someone does hack a client, they have to know who the
> redundant clients are _for that particular calculation_, and hack
> those clients as well. The odds are against the same version of a
> hacked client being used by all the redundant clients.
Not if the client is self-upgrading -- something that is pretty
trivial to do in these days of shared libraries.
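A minimal sketch of the loading half on a Unix-like system (the
download step, the path, and the entry-point name are all invented):

  // Sketch: the loading half of a self-upgrading client on a
  // Unix-like system (compile with -ldl).  The download step is
  // omitted; "combat.so" and "upgrade_entry" are invented names.
  #include <dlfcn.h>
  #include <cstdio>

  int main() {
      // Imagine "combat.so" was just fetched from wherever upgrades
      // (or hacks) are coordinated.  Load it and call into it.
      void* mod = dlopen("./combat.so", RTLD_NOW);
      if (!mod) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

      using entry_fn = void (*)();
      entry_fn entry =
          reinterpret_cast<entry_fn>(dlsym(mod, "upgrade_entry"));
      if (entry) entry();  // new behaviour: no restart, no new binary

      dlclose(mod);
  }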
> If you had such a system working, you could then gradually grant
> more trust to particularly trustworthy clients. Not only is the
> client holding the trusted data for the character, it is trusted
> with the player's weapon, armor, inventory, &c. Lest a cleverly
> hacked client try to take advantage of this, you still perform
> spot checks on those objects. When the client is ready to log
out, it sends all the trusted data back to the server, but if there
> is anything fishy in the data then the server will dump that data
> and revert to the redundant data.
"But I played all weekend and did really wall and got tons of great
stuff but I lost it all when I tried to log out! And I was using
the client straight off the company CD too!"
Of course he doesn't know that my hacked client picked up his IP
from <whatever> and then exploited his client to ensure that his
gains were lost.
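For completeness, the spot check itself is straightforward: both
sides hash the object state with a fresh nonce and compare. A sketch
with a toy hash and invented object data:

  // Sketch: a server spot check on client-held object data.  Both
  // sides hash the object state with a fresh nonce; the nonce stops
  // the client from replaying a precomputed answer.  Toy hash,
  // invented object data.
  #include <cstdint>
  #include <cstdio>
  #include <string>

  uint64_t fnv1a(const std::string& s) {
      uint64_t h = 1469598103934665603ull;
      for (unsigned char c : s) { h ^= c; h *= 1099511628211ull; }
      return h;
  }

  uint64_t spot_hash(const std::string& object_state, uint64_t nonce) {
      return fnv1a(object_state + "|" + std::to_string(nonce));
  }

  int main() {
      std::string server_copy = "sword:+5,charges:3";
      std::string client_copy = "sword:+50,charges:3";  // "improved"
      uint64_t nonce = 0x1234;                          // random IRL
      bool ok = spot_hash(server_copy, nonce) ==
                spot_hash(client_copy, nonce);
      std::printf("spot check: %s\n", ok ? "pass" : "FAIL");
  }

Of course, if the server keeps enough state to verify the answer, it
hasn't saved much by trusting the client with the data in the first
place.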
You get to a point where you can't trust the server either, and that
point is not so far away.
--
J C Lawrence Home: claw@kanga.nu
----------(*) Other: coder@kanga.nu
--=| A man is as sane as he is dangerous to his environment |=--