URL:
http://www.cris.com/%7eswoodcoc/gamedev1.thread.html
Hello All: 9/25/97 For those who don't subscribe, there's a fairly
good mailing list aimed at game developers called, not surprisingly,
the Game Design list.
Around the middle of 1997 Steve Schonberger hopped onto the list,
mentioned that he'd been to our CGDC AI Roundtables, and asked the
group in general if they thought it was a good idea for a game's AI to
"learn" and adapt itself to a player. This kicked off a very
interesting discussion which is presented here.
The posts from this thread are presented essentially as is, with some
*minor* editing on my part for formatting. I left the headers and most
.sigs intact. Alert readers will note that some of the info here also
made its way to the Current and Upcoming Games with Interesting AI
page, since there was quite a bit of discussion of the learning AI
built into Age of Empires on the part of the designers and
playtesters.
Here are the e-mail addresses for those contributors as best I could
pluck them out. My profound apologies if I missed anybody; please let
me know and I'll correct this list forthwith:
Mark Atkinson
Robert Blum
Rick Cronan
Ryan T. Drake
Eric Dybsand
John Judd
Orlando Llanes
Paul Nash
David Pottinger
Steve Schonberger
Nick Shaffner
John Vanderbeck
Steven Woodcock
Hui Ka Yu
Enjoy!
Steven
From stevesch@csealumni.UNL.edu Fri Jul 11 16:43:09 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id QAA26414; Fri, 11 Jul 1997 16:43:08 -0400
Received: from smtp.gte.net by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id QAA21718; Fri, 11 Jul 1997 16:43:07 -0400
Received: from gateway-pp200 (1Cust38.Max39.Seattle.WA.MS.UU.NET [153.34.126.166])
by smtp.gte.net (SMI-8.6/SMI-SVR4) with ESMTP id PAA11409
for ; Fri, 11 Jul 1997 15:43:09 -0500 (CDT)
Message-Id: <199707112043.PAA11409@smtp.gte.net>
From: "Steve Schonberger"
To:
Subject: Re: Game AI
Date: Fri, 11 Jul 1997 13:41:40 -0700
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1161
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Content-Length: 2777
Status: RO
At the Computer Game Developers' Convention, I attended a game AI
round-table (one of Steven Woodcock's sessions). An interesting topic
that came up was whether it is a good idea for game AI to learn while
playing a single human player. It seemed to me that there were more people
advocating non-learning AI models, but a few people did think learning
was a good thing.
Note: By "non-learning", I don't mean that the computer opponent
wasn't showing the appearance of getting smarter as it played the game. I
just mean that if it did get smarter, it did so by adaptively selecting
stronger AI settings, rather than actually evaluating the results it got and
adjusting its AI settings based on its evaluation.
Since this list has been pretty quiet lately, and overall pretty quiet
when things we've agreed are off topic are counted out, I figured it
deserved a new, on-topic subject, now that the AI scripting topic seems to be
dying down. There were plenty of other good AI topics in that round-table,
which I'll hold in reserve for now.
Steve Schonberger
From woodcock@real3d.com Fri Jul 11 16:17:37 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id QAA25524; Fri, 11 Jul 1997 16:17:19 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id QAA21596; Fri, 11 Jul 1997 16:17:17 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id QAA26308
for woodcock@real3d.com; Fri, 11 Jul 1997 16:16:23 -0400 (EDT)
Resent-Date: Fri, 11 Jul 1997 16:16:23 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
From: woodcock@real3d.com
Message-Id: <9707112016.AA02698@stargazer.real3d.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Fri, 11 Jul 1997 16:16:10 -0400 (EDT)
In-Reply-To: <199707111920.OAA20395@smtp.gte.net> from "Steve Schonberger" at Jul 11, 97 12:18:55 pm
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"-2SpWD.A.cXG.dRpxz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/307
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2697
Status: RO
> At the Computer Game Developers' Convention, I attended a game AI
> round-table (one of Steven Woodcock's sessions).
Hi Steve! It was good to meet you there.
> An interesting topic that
> came up was whether it is a good idea for game AI to learn while playing a
> single human player. It seemed to me that there were more people
> advocating non-learning AI models, but a few people did think learning was
> a good thing.
That was on what, the first day?
I thought it was interesting how many developers did *not* think it was
a good idea, vs. those who thought it might be but just didn't have the time
to do it or who didn't think it was appropriate for their current
project. Many of the developers present (as you say) thought learning
and adjusting could be a good thing (and in fact it was a major topic
on a second day of discussions when we talked about A-life and the new
game Creatures). A few, however, thought it was not a good idea at any
time...that it would either make the game too difficult for the player or that
(in a multiplayer family) it would train against Player A but become
worse against Player B.
There were definitely some interesting points made on both sides.
> Since this list has been pretty quiet lately, and overall pretty quiet when
> things we've agreed are off topic are counted out, I figured it deserved a
> new, on-topic subject, now that the AI scripting topic seems to be dying
> down.
Yes....I had hoped it would run longer. ;(
> There were plenty of other good AI topics in that round-table, which
> I'll hold in reserve for now.
I'm glad you enjoyed them. We're trying to continue with them over
on the new Gamasutra site (www.gamasutra.com).
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: woodcock@real3d.com |
| Web:
http://www.cris.com/~swoodcoc/ai.html (Games AI page) |
|
http://www.cris.com/~swoodcoc/steve.html (Steve Stuff) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| Lockheed Martin Real3D--get *serious* |
+=============================================================================+
From drake@cse.psu.edu Fri Jul 11 16:24:47 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id QAA25754; Fri, 11 Jul 1997 16:24:46 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id QAA21621; Fri, 11 Jul 1997 16:24:44 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id QAA27410
for woodcock@real3d.com; Fri, 11 Jul 1997 16:23:50 -0400 (EDT)
Resent-Date: Fri, 11 Jul 1997 16:23:50 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Date: Fri, 11 Jul 1997 16:23:35 -0400 (EDT)
From: Ryan T Drake
To: gamedesign@mail.digiweb.com
Subject: Re: Game AI
In-Reply-To: <199707111920.OAA20395@smtp.gte.net>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Resent-Message-ID: <"OZvfIC.A.VqG.aYpxz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/308
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2938
Status: RO
On Fri, 11 Jul 1997, Steve Schonberger wrote:
> At the Computer Game Developers' Convention, I attended a game AI
> round-table (one of Steven Woodcock's sessions). An interesting topic that
> came up was whether it is a good idea for game AI to learn while playing a
> single human player. It seemed to me that there were more people
> advocating non-learning AI models, but a few people did think learning was
> a good thing.
I would say that adaptive AI would make a really good option in most
cases, but it also would depend on the game. Something else that would
need consideration: Does the game remember what it learned when you quit
the game and come back in?
My feeling is, you can tell what is good AI if the computer's actions
look like the way a human would play. For instance: If I am playing
Quake, and i see a monster trying to follow me around the corner and
hitting a wall, i would not consider that good AI. On the other hand, if
i had a monster after me that hits every shot and follows me perfectly, i
would also not consider that good AI. Reason being, I know a human cannot
possibly be dumb enough to try to follow me through walls, and I also know
a human cannot possibly be good enough to hit each shot and follow
perfectly. In essence, to have good artificial intellegence, you also
need artificial stupidity.
One idea of implementing this kind of AI is to start by giving the
computer only as much information as a human would get. For example,
sticking with the Quake idea. Only give the computer information about
its surroundings, and not give it access to where everything is on the
map. If a normal player would get sound cues give the computer ai sound
cues.
Another important thing to give any AI is reaction time. No human being
has perfect 0ms reaction time. In a ``deathmatch'' for instance, a human
being could take anywhere between 100-500ms to react to events on the
screen. Add this factor in when calculating how the computer reacts to
things. Reasonable AI can vary given a user-selected ``skill level'' but
it should always be beatable.
Now to your original point... Yes, I think ideally AI should be adaptive,
so long as its adaptiveness tops off at a certain point. Anyone who
considers themself very good at a certain game can relate to what I mean.
Eventually the learning curve just stops and you are at a point that
you can't really get much better...Usually this is when you can beat most
other players and there is no one better to challenge you. This should
also happen with AI. There should be a point where the AI stops getting
better. As soon as it looks like the AI is doing a certain amount better
than the human player, it should shut off and not learn anything new.
Then as the human player starts getting better you can turn the learning
on again.
= Ryan Drake = drake@cse.psu.edu =
http://www.cse.psu.edu/~drake
From woodcock@real3d.com Fri Jul 11 16:42:53 1997
Return-Path:
Received: from stargazer.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id QAA26399; Fri, 11 Jul 1997 16:42:52 -0400
Received: by stargazer.real3d.com (4.1/1.34.a)
id AA02715; Fri, 11 Jul 97 16:42:51 EDT
From: woodcock@real3d.com
Message-Id: <9707112042.AA02715@stargazer.real3d.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Fri, 11 Jul 1997 16:42:51 -0400 (EDT)
In-Reply-To: from "Ryan T Drake" at Jul 11, 97 04:23:35 pm
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 2958
Status: RO
> I would say that adaptive AI would make a really good option in most
> cases, but it also would depend on the game. Something else that would
> need consideration: Does the game remember what it learned when you quit
> the game and come back in?
If I built it it would. I'd also give the user a chance to either
play with the "adapted" or "evolved" AI vs. the "out of the box" setting.
>
> (discussion of the need for "artificial stupidity" deleted)
>
Those are all good points Ryan. One reason the Quake-bots are as good
as they are is that they can in fact react FAR faster than any human
can. A good design should always allow a player to select the AI
difficulty (much like Enemy Nations does and which (sadly) Dungeon
Keeper does NOT), which would of course mean different things for different
games. For a game like Quake or Doom reaction times are definitely
part of it.
> Now to your original point... Yes, I think ideally AI should be adaptive,
> so long as its adaptiveness tops off at a certain point. Anyone who
> considers themself very good at a certain game can relate to what I mean.
> Eventually the learning curve just stops and you are at a point that
> you can't really get much better...Usually this is when you can beat most
> other players and there is no one better to challenge you. This should
> also happen with AI. There should be a point where the AI stops getting
> better. As soon as it looks like the AI is doing a certain amount better
> than the human player, it should shut off and not learn anything new.
> Then as the human player starts getting better you can turn the learning
> on again.
I'm curious how you might spot that the human player has "leveled out"?
Taking Quake as an example, what criteria might you use? Kills vs.
shots? Average time playing a level? Some combination of the two might
suffice....that's an interesting problem.
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: woodcock@real3d.com |
| Web:
http://www.cris.com/~swoodcoc/ai.html (Games AI page) |
|
http://www.cris.com/~swoodcoc/software.html (AI Software page) |
|
http://www.cris.com/~swoodcoc/steve.html (Steve Stuff) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| Lockheed Martin Real3D--get *serious* |
+=============================================================================+
From stevesch@csealumni.UNL.edu Fri Jul 11 16:43:09 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id QAA26414; Fri, 11 Jul 1997 16:43:08 -0400
Received: from smtp.gte.net by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id QAA21718; Fri, 11 Jul 1997 16:43:07 -0400
Received: from gateway-pp200 (1Cust38.Max39.Seattle.WA.MS.UU.NET [153.34.126.166])
by smtp.gte.net (SMI-8.6/SMI-SVR4) with ESMTP id PAA11409
for ; Fri, 11 Jul 1997 15:43:09 -0500 (CDT)
Message-Id: <199707112043.PAA11409@smtp.gte.net>
From: "Steve Schonberger"
To:
Subject: Re: Game AI
Date: Fri, 11 Jul 1997 13:41:40 -0700
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1161
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Content-Length: 2777
Status: RO
> > At the Computer Game Developers' Convention, I attended a game AI
> > round-table (one of Steven Woodcock's sessions).
>
> Hi Steve! It was good to meet you there.
Yeah, that's part of the fun of conferences.
> > An interesting topic that
> > came up was whether it is a good idea for game AI to learn while
playing a
> > single human player. It seemed to me that there were more people
> > advocating non-learning AI models, but a few people did think learning
was
> > a good thing.
>
> That was on what, the first day?
I'm not sure. It was the day when you and the other AI session leaders
talked about how one of the sessions had all of the women (one!)
> I thought it was interesting how many developers did *not* think it was
> a good idea, vs. those who thought it might be but just didn't have the
time
> to do it or who didn't think it was appropriate for their current
> project. Many of the developers present (as you say) thought learning
> and adjusting could be a good thing (and in fact it was a major topic
> on a second day of discussions when we talked about A-life and the new
> game Creatures). A few, however, thought it was not a good idea at any
> time...that it would either make the game too difficult for the player or
that
> (in a multiplayer family) it would train against Player A but become
> worse against Player B.
>
> There were definitely some interesting points made on both sides.
If I think of more to say, I'll say them on the list. Might as well see
what the rest of the list has to say about it.
> > Since this list has been pretty quiet lately, and overall pretty quiet
when
> > things we've agreed are off topic are counted out, I figured it
deserved a
> > new, on-topic subject, now that the AI scripting topic seems to be
dying
> > down.
>
> Yes....I had hoped it would run longer. ;(
But it didn't seem to be going much of anywhere. It seemed more like a
discussion of "what programming language do you script in?" than "what do
you do with the scripts?". The latter is more interesting.
> > There were plenty of other good AI topics in that round-table, which
> > I'll hold in reserve for now.
>
> I'm glad you enjoyed them. We're trying to continue with them over
> on the new Gamasutra site (www.gamasutra.com).
I suppose most of the credit for the session being interesting goes to the
other people attending, but without moderators there wouldn't be any
session at all! Also, I suppose you would have kicked things in the right
direction if there hadn't been lots of people speaking up on their own. A
lot of the session went over my head, being on the fringes of
research-grade AI, but what I understood was sure interesting.
Steve Schonberger
From DPottinger@Ensemble-Studios.com Fri Jul 11 17:01:41 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id RAA27151; Fri, 11 Jul 1997 17:01:41 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id RAA21837; Fri, 11 Jul 1997 17:01:37 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id RAA02992
for woodcock@real3d.com; Fri, 11 Jul 1997 17:00:43 -0400 (EDT)
Resent-Date: Fri, 11 Jul 1997 17:00:43 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <802B50C269DECF11B6A200A0242979EF33CFA7@consulting.ensemble.net>
From: David Pottinger
To: "'gamedesign@mail.digiweb.com'"
Subject: RE: Game AI
Date: Fri, 11 Jul 1997 16:02:14 -0500
X-Priority: 3
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.0.1457.3)
Content-Type: text/plain
Resent-Message-ID: <"xajq3.A.Po.Y6pxz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/310
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 3478
Status: RO
I guess I'd chime in with the thought that, if applied correctly, a
learning AI is a great thing.
I love cool AI, and you can't get much cooler than an AI that learns how
to beat you by watching you and other people play the game. That's an
AI I'd love to develop and love to play against. Playing humans (if
they're any good:) constantly forces you to either refine/optimize or
reevaluate/replace your strategies. If you always win easily, human
nature will tend to make you want to play something else. A good
learning AI helps fill that void when you aren't playing against other
humans.
You do have to consider the game (as Ryan Drake already mentioned). No
AI should ever make the game unfun for people to play. Does that mean
that you shouldn't do learning AI then? No way! There is a lot of room
for learning AIs to be applied in ways that still make games fun.
For Age of Empires (Shameless plug: The recent winner of best realtime
strat at E3:), we did do limited learning to augment a pretty
straightforward approach to the AI. When you play any of our campaign
scenarios the first time, the game is even. However, as a human, you
carry over information about the scenario when you replay it the next
time. So, we let the CPs do the same thing. They remember where you
attacked them or they attacked you, etc. We also let the CPs remember
your general playing tendencies so that they can improve playing against
you in the randomly generated games. This has helped the quality of the
AI out a lot. Well enough, in fact, that we'll be able to ship the game
w/ an AI that doesn't cheat. Though, we may do a "Doom-style Nightmare
mode" where the AI overtly cheats (by way of getting a resource boost at
the start) just to pound on people who like that kind of thing.
I guess I'd have to say that I've yet to see an AI that can't be beat by
some strategy that the developers either didn't foresee or didn't have
the time to code against. Learning is a great way to help alleviate
that problem and thus create a better playing experience.
dave
Dave C. Pottinger
Engine Lead and AI Guy
Ensemble Studios, Inc.
> -----Original Message-----
> From: Steve Schonberger [SMTP:stevesch@csealumni.UNL.edu]
> Sent: Friday, July 11, 1997 2:19 PM
> To: gamedesign@mail.digiweb.com
> Subject: Game AI
>
> At the Computer Game Developers' Convention, I attended a game AI
> round-table (one of Steven Woodcock's sessions). An interesting topic
> that
> came up was whether it is a good idea for game AI to learn while
> playing a
> single human player. It seemed to me that there were more people
> advocating non-learning AI models, but a few people did think learning
> was
> a good thing.
>
> Note: By "non-learning", I don't mean that the computer opponent
> wasn't
> showing the appearance of getting smarter as it played the game. I
> just
> mean that if it did get smarter, it did so by adaptively selecting
> stronger
> AI settings, rather than actually evaluating the results it got and
> adjusting its AI settings based on its evaluation.
>
> Since this list has been pretty quiet lately, and overall pretty quiet
> when
> things we've agreed are off topic are counted out, I figured it
> deserved a
> new, on-topic subject, now that the AI scripting topic seems to be
> dying
> down. There were plenty of other good AI topics in that round-table,
> which
> I'll hold in reserve for now.
>
> Steve Schonberger
From t-pauln@microsoft.com Fri Jul 11 17:13:12 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id RAA27587; Fri, 11 Jul 1997 17:13:11 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id RAA21889; Fri, 11 Jul 1997 17:13:10 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id RAA05616
for woodcock@real3d.com; Fri, 11 Jul 1997 17:12:17 -0400 (EDT)
Resent-Date: Fri, 11 Jul 1997 17:12:17 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <4152F7B641AFCF11A49800805F680B3F3F0943@RED-36-MSG.dns.microsoft.com>
From: Paul Nash
To: "'gamedesign@mail.digiweb.com'"
Subject: RE: Game AI
Date: Fri, 11 Jul 1997 14:11:33 -0700
X-Priority: 3
X-Mailer: Internet Mail Service (5.0.1458.49)
Resent-Message-ID: <"rwreOC.A.2SB.gFqxz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/311
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 4351
Status: RO
(Comments inline. Damn MS-mail screwed up reply format...)
-Paul R. Nash, Multimedia Developer At Large
Microsoft Multimedia Dev. Intern
http://www.uiuc.edu/ph/www/pr-nash/
> -----Original Message-----
> From: Ryan T Drake [SMTP:drake@cse.psu.edu]
> Sent: Friday, July 11, 1997 1:24 PM
> To: gamedesign@mail.digiweb.com
> Subject: Re: Game AI
>
> On Fri, 11 Jul 1997, Steve Schonberger wrote:
>
> > At the Computer Game Developers' Convention, I attended a game AI
> > round-table (one of Steven Woodcock's sessions). An interesting
> topic that
> > came up was whether it is a good idea for game AI to learn while
> playing a
> > single human player. It seemed to me that there were more people
> > advocating non-learning AI models, but a few people did think
> learning was
> > a good thing.
>
> I would say that adaptive AI would make a really good option in most
> cases, but it also would depend on the game. Something else that
> would
> need consideration: Does the game remember what it learned when you
> quit
> the game and come back in?
[Paul Nash]
Most definitely it should remember that. The problem of
training on player B and wiping out player A is not a problem at all,
either. Don't a lot of games have saved game information? Is it so
hard for the AI to save it's brain to disk? Not in a properly designed
AI, I say. For instance, genetic algortihms should allow the saving of
the genes to prescribe a persitent state, I would think. So save that
for each player or each saved game.
> perfectly. In essence, to have good artificial intellegence, you also
> need artificial stupidity.
>
[Paul Nash]
Yes, AI should definitely "make mistakes."
> One idea of implementing this kind of AI is to start by giving the
> computer only as much information as a human would get. For example,
> sticking with the Quake idea. Only give the computer information
> about
> its surroundings, and not give it access to where everything is on the
> map. If a normal player would get sound cues give the computer ai
> sound
> cues.
>
[Paul Nash]
...and then provide some sort of proficiency function for
understanding those sound cues.
> Now to your original point... Yes, I think ideally AI should be
> adaptive,
> so long as its adaptiveness tops off at a certain point. Anyone who
> considers themself very good at a certain game can relate to what I
> mean.
> Eventually the learning curve just stops and you are at a point that
> you can't really get much better...Usually this is when you can beat
> most
> other players and there is no one better to challenge you. This should
> also happen with AI. There should be a point where the AI stops
> getting
> better. As soon as it looks like the AI is doing a certain amount
> better
> than the human player, it should shut off and not learn anything new.
> Then as the human player starts getting better you can turn the
> learning
> on again.
>
[Paul Nash]
Sorry Ryan, but I 100% disagree. You've just shot down your own
argument here, because you say that AI should level off to match the
player, but you say that the player levels off because there's noone
left to challenge them. BUT, the AI IS the challenger. If the AI is
always challenging a person, then they won't necessarily level off,
right? I don't consider this to be like weightlifting, because it's the
human brain, which is an entirely different kind of muscle. :)
That said, the AI should definitely *track* the player, and not
necessarily always be better than the human. If the AI sense the human
is thoroughly getting his butt kicked regularly, the AI is too strong
and it should back off a little to let the human "catch up."
I think it's amazing how Mavis Beacon can tell that you're
mising certain fingers and can intuitively KNOW what you're doing wrong.
The thing there is that the program has so much understanding about the
act of typing that it can diagnose your problems. Some games are like
this: "you're need to buy more armor more often" or "explore in larger
parties" might be cures for certain problems in strategy games. If you
could design an AI that can detect specific gameplay deficiencies in the
human and somehow adapt to them, that would be very cool.
Of course, I am not suggesting that any of this is really easy,
but all of it has potential.
>
From nshaf@intur.net Fri Jul 11 17:31:38 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id RAA28041; Fri, 11 Jul 1997 17:31:37 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id RAA21933; Fri, 11 Jul 1997 17:31:36 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id RAA10630
for woodcock@real3d.com; Fri, 11 Jul 1997 17:30:43 -0400 (EDT)
Resent-Date: Fri, 11 Jul 1997 17:30:43 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <33C6A704.104D@intur.net>
Date: Fri, 11 Jul 1997 16:35:00 -0500
From: Nick Shaffner
Reply-To: nshaf@intur.net
Organization: DigiFX Interactive (
http://www.digifx.net)
X-Mailer: Mozilla 3.0Gold (Win95; I)
MIME-Version: 1.0
To: gamedesign@mail.digiweb.com
Subject: Re: Game AI
References: <802B50C269DECF11B6A200A0242979EF33CFA7@consulting.ensemble.net>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"b0FHtB.A.6eC.xWqxz"@mail>
Resent-From: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/312
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 1317
Status: RO
Hey David, remember me? :)
> You do have to consider the game (as Ryan Drake already mentioned). No
> AI should ever make the game unfun for people to play. Does that mean
> that you shouldn't do learning AI then? No way! There is a lot of room
> for learning AIs to be applied in ways that still make games fun.
Agreed, in addition, one could design the AI with the primary purpose
of making the game fun, rather than victory. This certainly be a more
difficult task, but (for example) by assessing the users levels of
interaction with the game, the AI could forcibly create incidents to
break up 'slow' periods - or perhaps not attack the users when he's down
or running low on resources, etc...
> the time to code against. Learning is a great way to help alleviate
> that problem and thus create a better playing experience.
Agreed, so long as it is used properly - it could possibly be used to
help plug unforseen gaps in game mechanics (perhaps like the sandbag
thing in C&C) ... It can also help make the game more user extensible,
for example in Mission to Nexus Prime (plug, plug) - the user can
actually design/create completely new types of units, and learning is
essential in order to get the AI to the point where it can use them
effectively...
Nick Shaffner
http://www.digifx.net/
From drake@cse.psu.edu Fri Jul 11 17:48:42 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id RAA28424; Fri, 11 Jul 1997 17:48:41 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id RAA21992; Fri, 11 Jul 1997 17:48:40 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id RAA12833
for woodcock@real3d.com; Fri, 11 Jul 1997 17:47:47 -0400 (EDT)
Resent-Date: Fri, 11 Jul 1997 17:47:47 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Date: Fri, 11 Jul 1997 17:48:53 -0400 (EDT)
From: Ryan T Drake
To: gamedesign@mail.digiweb.com
Subject: Re: Game AI
In-Reply-To: <9707112042.AA02715@stargazer.real3d.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Resent-Message-ID: <"Hi7sC.A.bGD.Enqxz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/313
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 3932
Status: RO
On Fri, 11 Jul 1997 woodcock@real3d.com wrote:
> I'm curious how you might spot that the human player has "leveled out"?
> Taking Quake as an example, what criteria might you use? Kills vs.
> shots? Average time playing a level? Some combination of the two might
> suffice....that's an interesting problem.
There are certain things that make a Quake player (or any other game
player) ``good.'' I will break it down into 3 categories for now: skill,
tactics, and strategy. Although they may sonund the same they have very
different meanings when you look at them from an AI perspective...
SKILL
I consider ``skill'' to be mastery of a game's physical interaction. For
instance reaction time would be a skill. The ability to perform complex
maneuvers with the joystick or keyboard would also be a skill. Think
about Command and Conquer--the ability to keep two different sets of tanks
driving in two different directions (with keyboard shortcuts or by
flipping back and forth between the two groups) would also be a skill.
This is the easy part of programming AI. We start with the assertion that
a computer has 100% skill. Normally there is no reaction time and the
computer has as many fingers as it needs to control itself and its
character. By adding code to modify the computer's reaction time, or
limiting the number of things it can keep track of, you are lowering the
computer's SKILL to a human's level.
TACTICS
Tactics describe a player's specific reactions to the game environment and
to his or her opponents. In Command and Conquer, if my opponent starts
driving 20 tanks in the direction of my base, I will react by erecting a
few turrets and sending a force of my own out. Tactics vary considerably
from player to player. They can be thought of as a person's style of
gameplay. In Quake if I make a habit of going for health boxes whenever I
drop below 30%, that would be a tactic. Tactics are also rather trivial
to program, and a rudimentary set of tactics can be programmed with a
bunch of if (player does this) then (react with this action)
STRATEGY
Strategy is a little harder to pin down. I would say this is the overall
flow of your game. Another way of putting it is strategy is "A player's
guess at what set of tactics will work to defeat the opponent" How am I
going to win the game? Some are conservative and usually work fairly
well, some are risky, but can really hammer an unsuspecting opponent.
From an AI perspective, the first part of the problem is figuring out an
optimal strategy given an opponent's apparent tactics. This is an
EXTREMELY complex problem, and something I consider a very exciting part
of computer science. They couldn't even do it with Deep Blue, because
Kasparov's _tactics_ were designed to fool DB, and DB then chose a bad
strategy. I believe if anyone can really find a way to program a computer
so that it looks at a game, analyzes the opponont's moves and trys to
figure out what he is thinking--the person that can program this will
become a legend in the gaming industry ;-) Think about how many factors
you would have to consider. You would have to be able to quantitatively
describe how ``agressive'' an opponent is. How do you calculate that?
How do you decide what will fool a human being?
After your AI decides on a certain strategy, then you have to be able to
implement it. This would be done by deploying a number of tactics that
make up this strategy, but which ones? and when?
A truly successful AI would be able to evaluate its opponent's tactics,
come up with an overall strategy, and translate that strategy into tactics
of its own. A game that can do all this would be incredible.
I hope this sorta explained what I was talking about. Heh i didn't mean
for it to get this long, but once i start writing.... ;-)
= Ryan Drake = drake@cse.psu.edu =
http://www.cse.psu.edu/~drake
From woodcock@real3d.com Fri Jul 11 18:48:39 1997
Return-Path:
Received: from stargazer.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id SAA29540; Fri, 11 Jul 1997 18:48:38 -0400
Received: by stargazer.real3d.com (4.1/1.34.a)
id AA02847; Fri, 11 Jul 97 18:48:38 EDT
From: woodcock@real3d.com
Message-Id: <9707112248.AA02847@stargazer.real3d.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Fri, 11 Jul 1997 18:48:37 -0400 (EDT)
In-Reply-To: <802B50C269DECF11B6A200A0242979EF33CFA7@consulting.ensemble.net> from "David Pottinger" at Jul 11, 97 04:02:14 pm
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 4567
Status: RO
> I guess I'd chime in with the thought that, if applied correctly, a
> learning AI is a great thing.
Cool....we agree on that...
> I love cool AI, and you can't get much cooler than an AI that learns how
> to beat you by watching you and other people play the game. That's an
> AI I'd love to develop and love to play against. Playing humans (if
> they're any good:) constantly forces you to either refine/optimize or
> reevaluate/replace your strategies. If you always win easily, human
> nature will tend to make you want to play something else. A good
> learning AI helps fill that void when you aren't playing against other
> humans.
One topic brought up at the AI roundtables was how many developers
were being driven to put in multiplayer *instead of* good game AI.
Though in the minority (thankfully), some folks reported that their
management was far more interested in making sure a game was Internet
capable than in making it play a challenging solitaire game. IMO,
that's a poor vision of the future...as much as I enjoy online gaming
I like to *learn* the game alone for a bit first. Fortunately as
I said, that viewpoint didn't seem to be in the majority.
> For Age of Empires (Shameless plug: The recent winner of best realtime
> strat at E3:), we did do limited learning to augment a pretty
> straightforward approach to the AI. When you play any of our campaign
> scenarios the first time, the game is even. However, as a human, you
> carry over information about the scenario when you replay it the next
> time. So, we let the CPs do the same thing. They remember where you
> attacked them or they attacked you, etc. We also let the CPs remember
> your general playing tendencies so that they can improve playing against
> you in the randomly generated games. This has helped the quality of the
> AI out a lot. Well enough, in fact, that we'll be able to ship the game
> w/ an AI that doesn't cheat. Though, we may do a "Doom-style Nightmare
> mode" where the AI overtly cheats (by way of getting a resource boost at
> the start) just to pound on people who like that kind of thing.
This is *very* interesting Dave. Might I impose upon you for a more
detailed writeup so I can add it to my Games AI page (address below)?
If so, please email me at my "home" address (swoodcoc@concentric.net).
More on subject, I'm surprised that you actually found strategies to
be similar enough from game to game to make saving such information
useful. If I replay a given scenario (and I'll admit it has to
be pretty compelling for me to do so) I usually try something different
than I did the time before.
Your solution of saving general playing tendencies (which I presume
are things like types of units the player likes to build, battle
formations they prefer, etc.) for randomly generated games seems very
clever. The AI in C&C, for example (just to pick on the game we
picked on in the roundtable discussions) seems to be especially tuned
for the "canned" scenarios and flails somewhat when presented with
a new user-designed map.
> I guess I'd have to say that I've yet to see an AI that can't be beat by
> some strategy that the developers either didn't foresee or didn't have
> the time to code against. Learning is a great way to help alleviate
> that problem and thus create a better playing experience.
Agreed. Adaptation is one of the things promised in the large
online games (such as Ultima Online) and seems to me to be a natural
next step for AI in games.
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: woodcock@real3d.com |
| Web:
http://www.cris.com/~swoodcoc/ai.html (Games AI page) |
|
http://www.cris.com/~swoodcoc/software.html (AI Software page) |
|
http://www.cris.com/~swoodcoc/steve.html (Steve Stuff) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| Lockheed Martin Real3D--get *serious* |
+=============================================================================+
From woodcock@real3d.com Fri Jul 11 19:05:12 1997
Return-Path:
Received: from stargazer.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id TAA29901; Fri, 11 Jul 1997 19:05:10 -0400
Received: by stargazer.real3d.com (4.1/1.34.a)
id AA02884; Fri, 11 Jul 97 19:05:08 EDT
From: woodcock@real3d.com
Message-Id: <9707112305.AA02884@stargazer.real3d.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Fri, 11 Jul 1997 19:05:07 -0400 (EDT)
In-Reply-To: <4152F7B641AFCF11A49800805F680B3F3F0943@RED-36-MSG.dns.microsoft.com> from "Paul Nash" at Jul 11, 97 02:11:33 pm
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 4677
Status: O
> (Comments inline. Damn MS-mail screwed up reply format...)
I hate when that happens. That's why I only use Unix Elm (tm).
Command line is where it's at! ;)
> > On Fri, 11 Jul 1997, Steve Schonberger wrote:
> >
> > Does the game remember what it learned when you quit
> > the game and come back in?
> [Paul Nash]
> Most definitely it should remember that. The problem of
> training on player B and wiping out player A is not a problem at all,
> either. Don't a lot of games have saved game information? Is it so
> hard for the AI to save it's brain to disk? Not in a properly designed
> AI, I say. For instance, genetic algortihms should allow the saving of
> the genes to prescribe a persitent state, I would think. So save that
> for each player or each saved game.
In fact GAs were one of the methods of building a learning AI
discussed in the sessions, mostly with regards to the Creatures game
and its A-Life techniques.
One side aspect of this that somebody brought up (it might have
been me, but I'm not sure) is that by having the AI "brain" loaded
rather than coded you can release add-on expansion packs for the game
containing additional AIs. Better yet (separate topic), if the AI
code is accessible to the player (ala Quake-C), then you can sponsor
contests amongst players to see who can develop the most devious
AIs. The best 10 get released on a CD, or posted to a web site.
> >
> > (Ryan says games should throttle back to match the players' level)
> >
> [Paul Nash]
> Sorry Ryan, but I 100% disagree. You've just shot down your own
> argument here, because you say that AI should level off to match the
> player, but you say that the player levels off because there's noone
> left to challenge them. BUT, the AI IS the challenger. If the AI is
> always challenging a person, then they won't necessarily level off,
> right? I don't consider this to be like weightlifting, because it's the
> human brain, which is an entirely different kind of muscle. :)
I don't know about that, Paul. I can see value in adjusting the AI
so that it's always a bit tougher than the player is. I remember the
first few times I played C&C I was sweating bullets over the AI, but
once my expertise got high enough he ceased to become a threat in
all but the most unbalanced scenarios.
The trick is capturing parameters which you can use to accurately
judge how experienced a player is.
> That said, the AI should definitely *track* the player, and not
> necessarily always be better than the human. If the AI sense the human
> is thoroughly getting his butt kicked regularly, the AI is too strong
> and it should back off a little to let the human "catch up."
Er....isn't that what Ryan said?
> If you
> could design an AI that can detect specific gameplay deficiencies in the
> human and somehow adapt to them, that would be very cool.
Genetic algorithsm have a possibility here, provided the game itself
is "big enough" to permit them time to evolve (they can be notoriously
slow). Consider a space strategy game ala MOO, in which individual
ship designs are modified by the AI over time based on those ships
which do well against the player. If you the player tend to build
lots of fighters and carriers, then gradually over time the AI will
adapt to that by building ships based on surviving ship types
(those that have more anti-fighter defenses).
> Of course, I am not suggesting that any of this is really easy,
> but all of it has potential.
That's why they pay us the big bucks! (Well, it's rumored some folks
get big bucks...I wouldn't know..... ;).
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: woodcock@real3d.com |
| Web:
http://www.cris.com/~swoodcoc/ai.html (Games AI page) |
|
http://www.cris.com/~swoodcoc/software.html (AI Software page) |
|
http://www.cris.com/~swoodcoc/steve.html (Steve Stuff) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| Lockheed Martin Real3D--get *serious* |
+=============================================================================+
From stevesch@csealumni.UNL.edu Fri Jul 11 18:37:04 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id SAA29367; Fri, 11 Jul 1997 18:37:04 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id SAA22083; Fri, 11 Jul 1997 18:36:59 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id SAA20046
for woodcock@real3d.com; Fri, 11 Jul 1997 18:36:02 -0400 (EDT)
Resent-Date: Fri, 11 Jul 1997 18:36:02 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <199707112235.RAA11449@smtp.gte.net>
From: "Steve Schonberger"
To:
Subject: Re: Game AI
Date: Fri, 11 Jul 1997 15:31:33 -0700
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1161
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"JpIIqC.A.30E.8Trxz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/314
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 7858
Status: RO
I wrote:
> > At the Computer Game Developers' Convention, I attended a game AI
> > round-table (one of Steven Woodcock's sessions). An interesting topic
that
> > came up was whether it is a good idea for game AI to learn while
playing a
> > single human player. It seemed to me that there were more people
> > advocating non-learning AI models, but a few people did think learning
was
> > a good thing.
From: Ryan T Drake
Date: Friday, July 11, 1997 1:23 PM:
> I would say that adaptive AI would make a really good option in most
> cases, but it also would depend on the game. Something else that would
> need consideration: Does the game remember what it learned when you quit
> the game and come back in?
This is a point Steven Woodcock brought up. One answer is to have a "who
are you?" page as part of the startup, so that the computer doesn't
instantly kill player two because player one has worked up to a high skill
level. That's annoying, but fixes the problem. An "I'm the only one who
plays this game on this computer" option could be used to avoid asking the
problem unnecessarily.
This isn't really an AI issue, though, because any program that has
multiple levels (game or not) has to deal with multiple users unless
there's only one meaningful configuration. It's just as important for a
word processor to keep track of user one wanting technical manual mode and
user two wanting personal letter mode as it is for a game to know user one
is on level 99 and user two is on level 2.
So, new topic (please change the subject line if you run with it): What's
the best way to manage keeping track of user settings (preferences,
difficulty, etc.) on a computer shared by more than one user?
> My feeling is, you can tell what is good AI if the computer's actions
> look like the way a human would play. For instance: If I am playing
> Quake, and i see a monster trying to follow me around the corner and
> hitting a wall, i would not consider that good AI. On the other hand, if
> i had a monster after me that hits every shot and follows me perfectly, i
> would also not consider that good AI. Reason being, I know a human
cannot
> possibly be dumb enough to try to follow me through walls, and I also
know
> a human cannot possibly be good enough to hit each shot and follow
> perfectly. In essence, to have good artificial intellegence, you also
> need artificial stupidity.
Although this is very important, this is more a play balance issue than an
AI issue. How do you deal with this? Maybe by starting with a few enemies
and adding more at higher levels. Maybe by giving the enemies tougher
equipment or attributes. Maybe by giving them smarter AI.
> One idea of implementing this kind of AI is to start by giving the
> computer only as much information as a human would get. For example,
> sticking with the Quake idea. Only give the computer information about
> its surroundings, and not give it access to where everything is on the
> map. If a normal player would get sound cues give the computer ai sound
> cues.
I think this was another topic in the AI round-table. Do we make the AI
enemies play by exactly the same rules and information a player uses? (In
chess, of course, other games, maybe.) Or do we make up for weaknesses in
the enemies' AI by giving them "hints from God"?
In short, does computer-controlled player cheat or play fair?
> Another important thing to give any AI is reaction time. No human being
> has perfect 0ms reaction time. In a ``deathmatch'' for instance, a human
> being could take anywhere between 100-500ms to react to events on the
> screen. Add this factor in when calculating how the computer reacts to
> things. Reasonable AI can vary given a user-selected ``skill level'' but
> it should always be beatable.
Play balance.
> Now to your original point... Yes, I think ideally AI should be adaptive,
> so long as its adaptiveness tops off at a certain point. Anyone who
> considers themself very good at a certain game can relate to what I mean.
> Eventually the learning curve just stops and you are at a point that
> you can't really get much better...Usually this is when you can beat most
> other players and there is no one better to challenge you. This should
> also happen with AI. There should be a point where the AI stops getting
> better. As soon as it looks like the AI is doing a certain amount better
> than the human player, it should shut off and not learn anything new.
> Then as the human player starts getting better you can turn the learning
> on again.
Again, play balance.
What I was getting at in my note was not about how to achieve play balance.
I was talking about whether an AI enemy "learns". Consider these examples
for a computer chess player:
1 (non-learning). At level 1, the computer plays one of a few well-known
openings, and analyzes moves only a few levels deep, with a weak set of
pre-defined move-evaluation settings. At level 2, it expands its set of
openings, analyzes moves a little deeper, and uses a stronger set of
pre-defined move-evaluation settings. At the top level, it uses a huge
library of openings, analyzes moves as deeply as time allows, and uses the
strongest set of move-evaluation settings the developer has been able to
pre-calculate.
2 (learning). Initially, the computer plays with a complete library of
openings, but has settings that tell it to choose the most familiar
openings. It always analyzes moves as deeply as time allows, but starts
with a weak set of move-evaluation settings. Where its moves produce
favorable results, it adjusts its move-evaluation settings to do more of
the same, and where its moves produce unfavorable results, it adjusts its
settings to try something different.
Example 1 increases in difficulty, but doesn't "learn". It can play at a
user-selected difficulty level, or advance in level when it counts the
player winning a lot. But the way it increases difficulty is by
substituting a _pre-defined_ set of stronger move-evaluation settings.
Example 2, by contrast, also increases in difficulty, by "learning". If it
finds that the player usually wins when it opens P-K4, but it usually wins
when it opens P-Q4, it plays P-Q4 more, and vice-versa. If it usually wins
by trading lots of pieces, it tries to trade pieces more often. Basically,
it adapts to what works.
Each has good and bad points. Example 1 is likely to be easier to build,
which is a big plus. On the down-side, if its pre-defined settings have a
critical flaw that's hard enough to find that play-testers miss it, but
players find it, then any player that discovers (or hears about it) that
flaw can beat the computer with a shortcut. Example 2 can deal with that
sort of flaw by learning that it doesn't want to leave that sort of hole
open.
But example 2 has its weaknesses too, most notably being harder to program.
A more subtle one is that the human player can teach the computer to play
worse, by intentionally playing badly in response to the computer player's
mistakes. If the human chess player dropped pieces or resigned every time
the computer player left the queen unguarded, the computer player might
start to think that leaving the queen unguarded was a good strategy.
Another complication of the learning model is that it may be difficult to
measure what results are good and what aren't. Did the computer player win
game 1 and lose game 2 because it played smarter in game 1? Or was it
because the human player played smarter in game 2? What did it do
differently that made it play worse, and what made it play better?
So, what's good about using pre-calculated strategy settings? What's good
about measuring results, and ajusting the strategy settings during or
between games?
Steve Schonberger
From woodcock@real3d.com Fri Jul 11 19:13:24 1997
Return-Path:
Received: from stargazer.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id TAA00042; Fri, 11 Jul 1997 19:13:19 -0400
Received: by stargazer.real3d.com (4.1/1.34.a)
id AA02904; Fri, 11 Jul 97 19:13:19 EDT
From: woodcock@real3d.com
Message-Id: <9707112313.AA02904@stargazer.real3d.com>
Subject: Re: Game AI
To: nshaf@intur.net
Date: Fri, 11 Jul 1997 19:13:18 -0400 (EDT)
In-Reply-To: <33C6A704.104D@intur.net> from "Nick Shaffner" at Jul 11, 97 04:35:00 pm
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 2385
Status: O
> Agreed, in addition, one could design the AI with the primary purpose
> of making the game fun, rather than victory. This certainly be a more
> difficult task, but (for example) by assessing the users levels of
> interaction with the game, the AI could forcibly create incidents to
> break up 'slow' periods - or perhaps not attack the users when he's down
> or running low on resources, etc...
I like the idea of the AI somehow gauging the "pace" of the game,
provided it's a game that makes that a sensible thing to do. For
Dungeon Keeper, for example, it might be difficult to launch an assault
just to keep things moving, especially if the AI and the player haven't
had contact yet. Similarly, one can easily imagine a creature or
two in Quake leaping out to harass the player when he's been sitting in
one place too long.
Question though: Is that "cheating"? One can argue it is, or one
can argue it provides for better gameplay.
> It can also help make the game more user extensible,
> for example in Mission to Nexus Prime (plug, plug) - the user can
> actually design/create completely new types of units, and learning is
> essential in order to get the AI to the point where it can use them
> effectively...
Nick, I ask you the same thing I asked Dave in an earlier post.
I'd love to hear more info on this so I can add it to my Games AI
page. I think a lot of folks would be interested.
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: woodcock@real3d.com |
| Web:
http://www.cris.com/~swoodcoc/ai.html (Games AI page) |
|
http://www.cris.com/~swoodcoc/steve.html (Steve Stuff) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| Lockheed Martin Real3D--get *serious* |
+=============================================================================+
From stevesch@csealumni.UNL.edu Fri Jul 11 19:29:22 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id TAA00266; Fri, 11 Jul 1997 19:29:20 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id TAA22195; Fri, 11 Jul 1997 19:29:18 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id TAA26723
for woodcock@real3d.com; Fri, 11 Jul 1997 19:28:25 -0400 (EDT)
Resent-Date: Fri, 11 Jul 1997 19:28:25 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <199707112328.SAA24642@smtp.gte.net>
From: "Steve Schonberger"
To:
Subject: Re: Game AI
Date: Fri, 11 Jul 1997 16:27:11 -0700
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1161
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"WyejfB.A.RfG.mFsxz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/317
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2826
Status: RO
> From: woodcock@real3d.com
> Date: Friday, July 11, 1997 3:48 PM
> One topic brought up at the AI roundtables was how many developers
> were being driven to put in multiplayer *instead of* good game AI.
> Though in the minority (thankfully), some folks reported that their
> management was far more interested in making sure a game was Internet
> capable than in making it play a challenging solitaire game. IMO,
> that's a poor vision of the future...as much as I enjoy online gaming
> I like to *learn* the game alone for a bit first. Fortunately as
> I said, that viewpoint didn't seem to be in the majority.
Since I'm working on a massively multi-user game, my AI opponents are there
just to train you so that you can handle the controls when you come upon
human opponents. There may be some AI enemies that are so nasty that you
can only hope to beat them by rounding up several human players and ganging
up on them, but for the most part I just want the AI to do interesting
stuff, not necessarily to make them extra-tough. A common question is,
"How do you make an AI player tough?" Well, I also want to ask, "How do
you give an AI player 'personality'?"
> > I guess I'd have to say that I've yet to see an AI that can't be beat
by
> > some strategy that the developers either didn't foresee or didn't have
> > the time to code against. Learning is a great way to help alleviate
> > that problem and thus create a better playing experience.
>
> Agreed. Adaptation is one of the things promised in the large
> online games (such as Ultima Online) and seems to me to be a natural
> next step for AI in games.
For a game like Ultima Online, adaptation doesn't have to be done by
"learning". It can be done by the game operators noticing that the
werewolves always kill human players lacking silver weapons, and always die
against silver-armed humans, and either pulling werewolves out of the game
or resetting their programming (AI, attributes, and where they show up).
In other words, adaptation can be accomplished by just changing something
that's broken on the server, so as soon as the operators see something
wrong, they can make the server "adapt". A stand-alone game doesn't have
that option, except by offering updates on the publisher's web site.
That suggests a contrast between "smart" AI and AI with "personality".
"Smart" werewolves would go into towns in human form and find another town
if anyone had silver (possibly testing that by sending in a "dumb"
werewolf), but declare dinner time if no one had silver. But that would
just massacre such towns, which isn't fun. A werewolf with "personality"
might seek out towns with lots of silver weapons, either for the challenge,
or seeking release from their curse. Which is more fun?
Steve Schonberger
From nshaf@intur.net Fri Jul 11 23:19:35 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id XAA04050; Fri, 11 Jul 1997 23:19:34 -0400
Received: from www.intur.net by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id XAA22614; Fri, 11 Jul 1997 23:19:19 -0400
Received: from grim by www.intur.net via ESMTP (940816.SGI.8.6.9/940406.SGI)
for id WAA06093; Fri, 11 Jul 1997 22:20:47 -0500
Message-Id: <199707120320.WAA06093@www.intur.net>
From: "Nick Shaffner"
To:
Subject: Re: Game AI
Date: Fri, 11 Jul 1997 22:18:10 -0500
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1161
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Content-Length: 3479
Status: RO
I knew I knew you from somewhere, you've got that nifty 'You know your
game is in trouble when' with a couple of my quotes on it :-)
> I like the idea of the AI somehow gauging the "pace" of the game,
> provided it's a game that makes that a sensible thing to do. For
> Dungeon Keeper, for example, it might be difficult to launch an assault
> just to keep things moving, especially if the AI and the player haven't
> had contact yet. Similarly, one can easily imagine a creature or
> two in Quake leaping out to harass the player when he's been sitting in
> one place too long.
> Question though: Is that "cheating"? One can argue it is, or one
> can argue it provides for better gameplay.
I'll choose gameplay instead of "not cheating" if I think 90% of my
players aren't going to notice or care about the cheating (given it doesn't
actually hurt gameplay).
For example:
Letting the enemy units pathfind around the exact edge of the known human
units' visible range would be fine with me ( in order to enable some of my
eakiness' algo - which makes the game significantly more fun )
Letting the AI's have a much higher probability of allying with each other
whenever they notice human player's allying seems acceptable, as it would
keep the game more even/challenging
Giving the AI a cash boost when it's loosing, or allowing it to rebuild
buildings might allow the AI to last longer, but I think this sort of
visible cheating is what most annoys players and is quite detrimental to
gameplay...
> > It can also help make the game more user extensible,
> > for example in Mission to Nexus Prime (plug, plug) - the user can
> > actually design/create completely new types of units, and learning is
> > essential in order to get the AI to the point where it can use them
> > effectively...
>
> Nick, I ask you the same thing I asked Dave in an earlier post.
> I'd love to hear more info on this so I can add it to my Games AI
> page. I think a lot of folks would be interested.
Well, the unit utilization AI uses a couple of techniques, I use the
"boxes" techniques for determining which types units are most effective
against other types units (essentially genetic programming without the
genes :) ) - this way new units can be created and the AI can use them and
learn their effective value, as well as adapt to their altered value with
different players' 'styles' (concerning overall unit selection and usage)
The original value for each unit is computed from the unit's cost,
mobility, firepower, etc, - and originally initialized to the 'economically
optimal' state - ex. the computer will tend to build the units that give it
the most bang/mobility for the buck... It is incrementally modified as the
AI learns about the units effective average lifespan, offensive/defensive
%, % utilization, effected influence, effective influence and actual
influence, etc against each of the other types of units... This works out
suprisingly well (for such a simple algo), and adapts extremely quickly in
our game ( which has a high unit turnover ). It also allows for some quite
useful comparisons against 'factory norms' which allow the AI to make
guesses as to when it might be getting too hard for the player... It also
has the extremely visible side effect of allowing the AI to pull back into
a 'fuzzy' defensive position.
:) Nick
--
Nick Shaffner
Technical Director
DigiFX Interactive.
http://users.intur.net/~nshaf/
http://www.digifx.net/
From ollanes@accesspro.net Sat Jul 12 00:59:50 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id AAA05581; Sat, 12 Jul 1997 00:59:49 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id AAA22857; Sat, 12 Jul 1997 00:59:48 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id AAA25212
for woodcock@real3d.com; Sat, 12 Jul 1997 00:58:55 -0400 (EDT)
Resent-Date: Sat, 12 Jul 1997 00:58:55 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <199707120457.AAA24922@mail.digiweb.com>
Reply-To:
From: "Orlando Llanes"
To:
Subject: Re: Game AI
Date: Sat, 12 Jul 1997 00:49:00 -0400
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1155
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"IY_SYB.A.wFG.T7wxz"@mail>
Resent-From: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/318
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 1415
Status: RO
I like the AI from X-Com. I'm not AI expert, but the X-Com AI looks to be a
bit complicated. For example, if you shoot at the alien, they move out of
the way after the shots are fired. If you were spotted by them, and then
you run back for cover, they will look for you. They leave their
fallen/landed spacecraft and seek buildings from which to shoot from high
above. Etc.
Another cool thing about X-Com is that the soldiers don't fire accurately
on the first mission, they gradually get more accurate.
I also like the AI from DOOM where if you're seen, the aliens will pursue
you and open doors if they can. One time, I heard a bulldemon (or whatever
they're called), I was going crazy trying to figure out where it was! As it
turns out, it poped up right in front of me while I was waiting for the
elevator. I shot at it from a split-second reaction.
The kind of AI I like is one where the enemy can surprise you, even if you
know their pattern of attack. The other thing that's cool is how the
character interacts with the environment. For Example, I was playing X-Com
today, and I positioned a couple of my soldiers behind some trees, on the
alien's turn, one of my soldiers spotted them and was shot at, but the
soldier was safe because the shot hit the log.
Sorry if I'm off-topic, or if I've mentioned something already said, but I
was just writing down some random thoughts :)
See ya!
Orlando Llanes
From kyhui@netvigator.com Sat Jul 12 04:36:53 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id EAA09176; Sat, 12 Jul 1997 04:36:52 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id EAA23230; Sat, 12 Jul 1997 04:36:50 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id EAA27309
for woodcock@real3d.com; Sat, 12 Jul 1997 04:35:30 -0400 (EDT)
Resent-Date: Sat, 12 Jul 1997 04:35:30 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <33C740C2.96FF813C@netvigator.com>
Date: Sat, 12 Jul 1997 16:30:58 +0800
From: Hui Ka Yu
Reply-To: kyhui@netvigator.com
X-Mailer: Mozilla 4.01 [en] (Win95; I)
MIME-Version: 1.0
To: gamedesign@mail.digiweb.com
Subject: Re: Game AI
X-Priority: 3 (Normal)
References: <199707111920.OAA20395@smtp.gte.net>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"aU5Or.A.coG.HG0xz"@mail>
Resent-From: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/319
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 1612
Status: RO
The discussion just reminded me some of the "learning" ai attempts.
Remeber MK3 for PC? I do think that it adopts a learning ai based on a
scoring system for different moves. When a move is effective/defective,
the opponent will perform the move more/less accordingly. (somebody
mentioned it in the discussion)
Sounds good for a duel game, yet the result was disappointing, coz the
so-called "learning" is _so_ obvious to players.(can't describe in
words, but you will know if you've tried it) The computer players turn
out to be more _unnatural_ than non-learning opponents, let alone human
players, and playing with them was really IMHO no fun. It might beat the
player, but player won't find it FUN. A human player won't learn that
way.
So it seems that making a learning ai with this "good result->do more,
bad results->do less" sounds easier than it should be. To make it look
natural, it think you need to define vast amount of alternatives first
and perform complex adoptations so that players cannot recognize them
all and find it _so_ artificial and impractical. e.g. in a C&C type
game, when a player attack more often by sea, the computer adjust more
fund to sea defence. A player can cheat the ai simply by attack 10 times
(hoax) by sea and then crush the computer with land unit with ease
_every_ time he plays. That sounds really bad and no fun! A human player
will suspect that this is only part of a tactics and won't fall into
such trap - and this disobey the "good results->do more" ai system.
Just some comments....
Psycloid kyhui@netvigator.com
Afods - freelance game developer team
From condor@neotechonline.com Sat Jul 12 13:25:47 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id NAA17238; Sat, 12 Jul 1997 13:25:46 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id NAA23794; Sat, 12 Jul 1997 13:25:40 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id NAA12186
for woodcock@real3d.com; Sat, 12 Jul 1997 13:24:47 -0400 (EDT)
Resent-Date: Sat, 12 Jul 1997 13:24:47 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <199707121723.NAA11696@mail.digiweb.com>
X-Mailer: Microsoft Outlook Express 4.71.0544.0
From: "John Vanderbeck (NeoTECH)"
To:
Subject: Re: Game AI
Date: Sat, 12 Jul 1997 11:29:36 -0500
X-Priority: 3
X-MSMail-Priority: Normal
MIME-Version: 1.0
Content-Type: text/plain;
charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mimeole: Produced By Microsoft MimeOLE Engine V4.71.0544.0
Resent-Message-ID: <"g9-enB.A.a3C.O27xz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/322
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 4662
Status: RO
My comments are inline
Thanks,
John Vanderbeck
Lead Programmer - NeoTECH Online
condor@neotechonline.com
http://www.neotechonline.com
GAME DESIGN mailing list:
Email gamedesign-request@digiweb.com , SUBJECT subscribe
----
From: Ryan T Drake
To: gamedesign@mail.digiweb.com
Date: Friday, July 11, 1997 3:24 PM
Subject: Re: Game AI
>On Fri, 11 Jul 1997, Steve Schonberger wrote:
>
>> At the Computer Game Developers' Convention, I attended a game AI
>> round-table (one of Steven Woodcock's sessions). An interesting topic
that
>> came up was whether it is a good idea for game AI to learn while playing
a
>> single human player. It seemed to me that there were more people
>> advocating non-learning AI models, but a few people did think learning
was
>> a good thing.
>
>I would say that adaptive AI would make a really good option in most
>cases, but it also would depend on the game. Something else that would
>need consideration: Does the game remember what it learned when you quit
>the game and come back in?
>
>My feeling is, you can tell what is good AI if the computer's actions
>look like the way a human would play. For instance: If I am playing
>Quake, and i see a monster trying to follow me around the corner and
>hitting a wall, i would not consider that good AI. On the other hand, if
>i had a monster after me that hits every shot and follows me perfectly, i
>would also not consider that good AI. Reason being, I know a human
cannot
>possibly be dumb enough to try to follow me through walls, and I also
know
>a human cannot possibly be good enough to hit each shot and follow
>perfectly. In essence, to have good artificial intellegence, you also
>need artificial stupidity.
[John Vanderbeck]
I completely agree with this statement. It is important to realize that
creating an artificial intelligence doesn't necesarily mean creating a
super-smart computerized brain with the knowledge to conquer the world.
Maybe in the scope of gaming, Artificial Intelligence should be better
known as Artificial Humanity. Our goal is to create a realistic HUMAN
opponent that is controlled by the computer but otherwise not noticable as
the computer. Our goal is _not_ to create a COMPUTER player. IMHO.
>
>One idea of implementing this kind of AI is to start by giving the
>computer only as much information as a human would get. For example,
>sticking with the Quake idea. Only give the computer information about
>its surroundings, and not give it access to where everything is on the
>map. If a normal player would get sound cues give the computer ai sound
>cues.
[John Vanderbeck]
This is very interesting, and comething I have always wondered about as I
played games. Sometimes it seems very obvious that the computer is making
decisions based on information it should _not_ know. I think this has a
tendancy to irrate the player.
>
>Another important thing to give any AI is reaction time. No human being
>has perfect 0ms reaction time. In a ``deathmatch'' for instance, a human
>being could take anywhere between 100-500ms to react to events on the
>screen. Add this factor in when calculating how the computer reacts to
>things. Reasonable AI can vary given a user-selected ``skill level'' but
>it should always be beatable.
[John Vanderbeck]
I don't have alot of experience in AI or AH as I stated above, but wouldn't
this point be rather mute if your AH routine was complex enough to
reasonably simulate humanity? I would think that just running through a
complex routine would create the required delay. I don't see how you could
run the AH routine in 0ms :)
>
>Now to your original point... Yes, I think ideally AI should be adaptive,
>so long as its adaptiveness tops off at a certain point. Anyone who
>considers themself very good at a certain game can relate to what I mean.
>Eventually the learning curve just stops and you are at a point that
>you can't really get much better...Usually this is when you can beat most
>other players and there is no one better to challenge you. This should
>also happen with AI. There should be a point where the AI stops getting
>better. As soon as it looks like the AI is doing a certain amount better
>than the human player, it should shut off and not learn anything new.
>Then as the human player starts getting better you can turn the learning
>on again.
[John Vanderbeck[
Essentially correct. I beleive the AH should always be just a few steps
ahead of the player, to give them a growing challenge. Just like
weight-lifting.
>
>
>
>= Ryan Drake >= drake@cse.psu.edu >=
http://www.cse.psu.edu/~drake >
From condor@neotechonline.com Sat Jul 12 13:25:48 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id NAA17242; Sat, 12 Jul 1997 13:25:47 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id NAA23798; Sat, 12 Jul 1997 13:25:46 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id NAA12197
for woodcock@real3d.com; Sat, 12 Jul 1997 13:24:52 -0400 (EDT)
Resent-Date: Sat, 12 Jul 1997 13:24:52 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <199707121723.NAA11689@mail.digiweb.com>
X-Mailer: Microsoft Outlook Express 4.71.0544.0
From: "John Vanderbeck (NeoTECH)"
To:
Subject: Re: Game AI
Date: Sat, 12 Jul 1997 11:31:07 -0500
X-Priority: 3
X-MSMail-Priority: Normal
MIME-Version: 1.0
Content-Type: text/plain;
charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mimeole: Produced By Microsoft MimeOLE Engine V4.71.0544.0
Resent-Message-ID: <"OcusQC.A.C3C.J27xz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/321
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 5174
Status: RO
These "Genetic Algorithims" sound very interesting. Where can I find out
more about them?
Thanks,
John Vanderbeck
Lead Programmer - NeoTECH Online
condor@neotechonline.com
http://www.neotechonline.com
GAME DESIGN mailing list:
Email gamedesign-request@digiweb.com , SUBJECT subscribe
----
From: woodcock@real3d.com
To: gamedesign@mail.digiweb.com
Date: Friday, July 11, 1997 6:06 PM
Subject: Re: Game AI
>> (Comments inline. Damn MS-mail screwed up reply format...)
>
> I hate when that happens. That's why I only use Unix Elm (tm).
>Command line is where it's at! ;)
>
>> > On Fri, 11 Jul 1997, Steve Schonberger wrote:
>> >
>> > Does the game remember what it learned when you quit
>> > the game and come back in?
>> [Paul Nash]
>> Most definitely it should remember that. The problem of
>> training on player B and wiping out player A is not a problem at all,
>> either. Don't a lot of games have saved game information? Is it so
>> hard for the AI to save it's brain to disk? Not in a properly designed
>> AI, I say. For instance, genetic algortihms should allow the saving of
>> the genes to prescribe a persitent state, I would think. So save that
>> for each player or each saved game.
>
> In fact GAs were one of the methods of building a learning AI
>discussed in the sessions, mostly with regards to the Creatures game
>and its A-Life techniques.
>
> One side aspect of this that somebody brought up (it might have
>been me, but I'm not sure) is that by having the AI "brain" loaded
>rather than coded you can release add-on expansion packs for the game
>containing additional AIs. Better yet (separate topic), if the AI
>code is accessible to the player (ala Quake-C), then you can sponsor
>contests amongst players to see who can develop the most devious
>AIs. The best 10 get released on a CD, or posted to a web site.
>
>> >
>> > (Ryan says games should throttle back to match the players' level)
>> >
>> [Paul Nash]
>> Sorry Ryan, but I 100% disagree. You've just shot down your own
>> argument here, because you say that AI should level off to match the
>> player, but you say that the player levels off because there's noone
>> left to challenge them. BUT, the AI IS the challenger. If the AI is
>> always challenging a person, then they won't necessarily level off,
>> right? I don't consider this to be like weightlifting, because it's
the
>> human brain, which is an entirely different kind of muscle. :)
>
> I don't know about that, Paul. I can see value in adjusting the AI
>so that it's always a bit tougher than the player is. I remember the
>first few times I played C&C I was sweating bullets over the AI, but
>once my expertise got high enough he ceased to become a threat in
>all but the most unbalanced scenarios.
>
> The trick is capturing parameters which you can use to accurately
>judge how experienced a player is.
>
>> That said, the AI should definitely *track* the player, and not
>> necessarily always be better than the human. If the AI sense the human
>> is thoroughly getting his butt kicked regularly, the AI is too strong
>> and it should back off a little to let the human "catch up."
>
> Er....isn't that what Ryan said?
>
>> If you
>> could design an AI that can detect specific gameplay deficiencies in
the
>> human and somehow adapt to them, that would be very cool.
>
> Genetic algorithsm have a possibility here, provided the game itself
>is "big enough" to permit them time to evolve (they can be notoriously
>slow). Consider a space strategy game ala MOO, in which individual
>ship designs are modified by the AI over time based on those ships
>which do well against the player. If you the player tend to build
>lots of fighters and carriers, then gradually over time the AI will
>adapt to that by building ships based on surviving ship types
>(those that have more anti-fighter defenses).
>
>> Of course, I am not suggesting that any of this is really easy,
>> but all of it has potential.
>
> That's why they pay us the big bucks! (Well, it's rumored some folks
>get big bucks...I wouldn't know..... ;).
>
>
>
>Steve
>
From mark_a@cix.compulink.co.uk Sat Jul 12 23:05:02 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id XAA24382; Sat, 12 Jul 1997 23:05:00 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id XAA24486; Sat, 12 Jul 1997 23:04:59 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id XAA20001
for woodcock@real3d.com; Sat, 12 Jul 1997 23:03:53 -0400 (EDT)
Resent-Date: Sat, 12 Jul 1997 23:03:53 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Date: Sun, 13 Jul 97 04:02 BST-1
From: mark_a@cix.compulink.co.uk (Mark Atkinson)
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Cc: mark_a@cix.compulink.co.uk
Reply-To: mark_a@cix.compulink.co.uk
Message-Id:
Resent-Message-ID: <"JYsV1.A.f0E.OVEyz"@mail>
Resent-From: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/324
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 1006
Status: RO
In-Reply-To: <199707121723.NAA11689@mail.digiweb.com>
> These "Genetic Algorithims" sound very interesting. Where can I find
> out
> more about them?
A web search should turn up plenty of info and sample code.
Alternatively, the canonical introductory text is "Genetic Algortihms in
Search, Optimization & Machine Learning", David E Goldberg,
Addison-Wesley 1989, ISBN 0-201-15767-5. Note he uses 'Roulette Wheel'
selection, which is IMHO a bad thing; most modern implementations use
ranking of some sort. There's also comp.ai.genetic.
I'd echo the sentiments about trying to create a fun-to-beat opponent,
rather than a Computer Brain(TM). This is the primary reason to avoid
rule-based systems IMHO.
-=Mark=-
Mark Atkinson Voice: +44 171 828 6990
Technical Director Fax: +44 171 828 6997
Computer Artworks Ltd. Mail: mark@artworks.co.uk
London, UK. Web:
http://www.artworks.co.uk
From rick@polylang.com Mon Jul 14 05:12:48 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id FAA15126; Mon, 14 Jul 1997 05:12:48 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id FAA27041; Mon, 14 Jul 1997 05:12:11 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id FAA29059
for woodcock@real3d.com; Mon, 14 Jul 1997 05:11:15 -0400 (EDT)
Resent-Date: Mon, 14 Jul 1997 05:11:15 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <3.0.32.19970724095734.00683d88@MAILHOST>
X-Sender: rick@MAILHOST
X-Mailer: Windows Eudora Pro Version 3.0 (32)
Date: Thu, 24 Jul 1997 10:05:05 +0100
To: gamedesign@mail.digiweb.com
From: rick cronan
Subject: Re: Game AI
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Resent-Message-ID: <"0z0vPC.A.eAH.bzeyz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/326
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2084
Status: RO
At 17:48 11/07/97 -0400, drake@cse.psu.edu wrote:
>
>There are certain things that make a Quake player (or any other game
>player) ``good.'' I will break it down into 3 categories for now: skill,
>tactics, and strategy.
I think there's a fourth and extremely relevant category (which was kind of
touched on last week or the week before on this list) which is that of
knowledge. In my experience, the Quake death-match winners are not
necessarily the most skilled, or those with the best tactics and
strategies, but those who know the levels best, those who know where to get
the 200 armour, the Quad-damage, etc.
Those kind of players (in single-player mode) are the one your learning AI
would be targetted at, because for the game to challenge them, it would
need to combat their knowledge that around corner X there are three
monsters of type Y, two health packs and some rockets.
For someone playing a level for the first time, the monsters don't
necessarily have to act in such a way as to defeat your style. You'll get
fragged because you don't know what's around the next corner, not because
the monsters 'know' your tactics.
Which also brings me on to my other point. The learn / not learn dilemma
should also be looked at in terms of what the game is, especially if the
player can discern its presence. From an internal logic point of view, a
C&C type game can have a learning AI because it represents a long,
drawn-out war in which an enemy could learn your tactics, and come up with
counters. However, in your Quake type game it is less plausible that the
enemies would have time to learn from your actions.
The only exception to this IMHO is what has already been mentioned, and
that is where learning AI is used to promote entertainment, rather than
simply to help the computer player win.
| rick cronan | email: rick@polylang.com |
| production manager | phone: +44 (0) 114 267 0017 |
| cool beans productions ltd | fax: +44 (0) 114 268 7487 |
| url:
http://www.polylang.com/polylang2/Coolbeans/home.htm |
From Swoodcoc@concentric.net Mon Jul 14 10:57:51 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id KAA24266; Mon, 14 Jul 1997 10:57:50 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id KAA00186; Mon, 14 Jul 1997 10:57:14 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id KAA17019
for woodcock@real3d.com; Mon, 14 Jul 1997 10:56:15 -0400 (EDT)
Resent-Date: Mon, 14 Jul 1997 10:56:15 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
From: Swoodcoc@concentric.net
Message-Id: <199707141455.KAA12971@galileo.cris.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Mon, 14 Jul 1997 10:55:06 -0400 (EDT)
In-Reply-To: from "Ryan T Drake" at Jul 11, 97 05:48:53 pm
X-Mailer: ELM [version 2.4 PL25]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"I2TKmD.A.pFE.E3jyz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/327
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 5454
Status: RO
Apologies to all ahead of time for the length of this post....Ryan had
some good points that I'm loath to trim out.
> On Fri, 11 Jul 1997 woodcock@real3d.com wrote:
> > I'm curious how you might spot that the human player has "leveled out"?
> > Taking Quake as an example, what criteria might you use? Kills vs.
> > shots? Average time playing a level? Some combination of the two might
> > suffice....that's an interesting problem.
>
> There are certain things that make a Quake player (or any other game
> player) ``good.'' I will break it down into 3 categories for now: skill,
> tactics, and strategy. Although they may sonund the same they have very
> different meanings when you look at them from an AI perspective...
>
> SKILL
>
> I consider ``skill'' to be mastery of a game's physical interaction. For
> instance reaction time would be a skill. The ability to perform complex
> maneuvers with the joystick or keyboard would also be a skill. Think
> about Command and Conquer--the ability to keep two different sets of tanks
> driving in two different directions (with keyboard shortcuts or by
> flipping back and forth between the two groups) would also be a skill.
> This is the easy part of programming AI. We start with the assertion that
> a computer has 100% skill. Normally there is no reaction time and the
> computer has as many fingers as it needs to control itself and its
> character. By adding code to modify the computer's reaction time, or
> limiting the number of things it can keep track of, you are lowering the
> computer's SKILL to a human's level.
Okay, this is simple enough. Most games today, I'd wager, manipulate
these values to achieve "tougher" (note I didn't say better) gameplay.
I would pick a nit and say that boiling skill down to nothing but reaction
times is a bit of a simplification, but I understand what you're driving
at here.
> TACTICS
>
> Tactics describe a player's specific reactions to the game environment and
> to his or her opponents. In Command and Conquer, if my opponent starts
> driving 20 tanks in the direction of my base, I will react by erecting a
> few turrets and sending a force of my own out. Tactics vary considerably
> from player to player. They can be thought of as a person's style of
> gameplay. In Quake if I make a habit of going for health boxes whenever I
> drop below 30%, that would be a tactic. Tactics are also rather trivial
> to program, and a rudimentary set of tactics can be programmed with a
> bunch of if (player does this) then (react with this action)
This is a bit shakier as tactics vary so widely from game to game.
One can easily come up with tactics for combined arms in a game like
C&C, for example, but I'm not so sure about a game like Quake. What
are the tactics there....hide when you're being shot at (though in many
3D POV games that *would* be a great improvement)? The very rapidly
changing environment that make up 3D POV games can make this partiuclar
element difficult to pin down.
Tactics are also something the AI could learn from the player, which
is something I'd like to see it try to do. If the AI sees that 2 flamethrowers
and 2 hovertanks seems to work well as an assalt force for the player, then
maybe it ought to consider adding that same formation to its own book of
tricks. (Of course, that leads to other problems...recognizing formations
(neural networks?), building formations, etc.)
> STRATEGY
>
> Strategy is a little harder to pin down. I would say this is the overall
> flow of your game. Another way of putting it is strategy is "A player's
> guess at what set of tactics will work to defeat the opponent" How am I
> going to win the game? Some are conservative and usually work fairly
> well, some are risky, but can really hammer an unsuspecting opponent.
>
> (Deep Blue analogy deleted)
>
> After your AI decides on a certain strategy, then you have to be able to
> implement it. This would be done by deploying a number of tactics that
> make up this strategy, but which ones? and when?
Truly the toughest part (IMO). Third Reich has been roundly criticized
as having either no grand strategy or simply playing along textbook strategy
lines. The first leads to an essentially defensive position while the
second can get predictable quick.
> A truly successful AI would be able to evaluate its opponent's tactics,
> come up with an overall strategy, and translate that strategy into tactics
> of its own. A game that can do all this would be incredible.
'Tis true.
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Hired Gun, Gameware & AI ____/ \___/ |
| Wyrd Wyrks Consulting <____/\_---\_\ "Ferretman" |
| Phone: 719-392-4746 |
| E-mail: swoodcoc@concentric.net |
| Web:
http://www.concentric.net/~swoodcoc/ai.html (Dedicated to Game AI) |
| Disclaimer: Yeah, I work for Lockheed-Martin Real3D....you think |
| anybody there ever listens to *my* opinion? Get *serious*. |
+=============================================================================+
From Swoodcoc@concentric.net Mon Jul 14 11:02:27 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id LAA24554; Mon, 14 Jul 1997 11:02:26 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id LAA00208; Mon, 14 Jul 1997 11:02:23 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id LAA18250
for woodcock@real3d.com; Mon, 14 Jul 1997 11:01:25 -0400 (EDT)
Resent-Date: Mon, 14 Jul 1997 11:01:25 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
From: Swoodcoc@concentric.net
Message-Id: <199707141500.LAA13269@galileo.cris.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Mon, 14 Jul 1997 11:00:05 -0400 (EDT)
In-Reply-To: <199707112235.RAA11449@smtp.gte.net> from "Steve Schonberger" at Jul 11, 97 03:31:33 pm
X-Mailer: ELM [version 2.4 PL25]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"gYTL7D.A.uWE.W7jyz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/328
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2632
Status: RO
> This is a point Steven Woodcock brought up. One answer is to have a "who
> are you?" page as part of the startup, so that the computer doesn't
> instantly kill player two because player one has worked up to a high skill
> level. That's annoying, but fixes the problem. An "I'm the only one who
> plays this game on this computer" option could be used to avoid asking the
> problem unnecessarily.
I'm not sure we ever had a good solution offered either, beyond the
two you mention above. I'm not entirely convinced that an AI that
has been "trained" to be my opponent would, necessarily, make a good
opponent for somebody else....play styles influence that quite a bit.
On the other hand, when we're talking about AI modifications along the
lines of decreasing AI reaction times that maakes perfect sense....if I've
played 100 games of Doom and you've played 4, I *ought* to know my way
around that keyboard better.
> So, new topic (please change the subject line if you run with it): What's
> the best way to manage keeping track of user settings (preferences,
> difficulty, etc.) on a computer shared by more than one user?
I like the simple data file tagged by user name.
> I think this was another topic in the AI round-table. Do we make the AI
> enemies play by exactly the same rules and information a player uses? (In
> chess, of course, other games, maybe.) Or do we make up for weaknesses in
> the enemies' AI by giving them "hints from God"?
>
> In short, does computer-controlled player cheat or play fair?
I've seen a trend to try to make the AIs play exactly the same way,
with the same rules, as the player. I know that Enemy Nations does this,
and most of the upcoming RTS games are promising the same.
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Hired Gun, Gameware & AI ____/ \___/ |
| Wyrd Wyrks Consulting <____/\_---\_\ "Ferretman" |
| Phone: 719-392-4746 |
| E-mail: swoodcoc@concentric.net |
| Web:
http://www.concentric.net/~swoodcoc/ai.html (Dedicated to Game AI) |
| Disclaimer: Yeah, I work for Lockheed-Martin Real3D....you think |
| anybody there ever listens to *my* opinion? Get *serious*. |
+=============================================================================+
From t-pauln@microsoft.com Mon Jul 14 14:46:25 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id OAA03879; Mon, 14 Jul 1997 14:46:24 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id OAA01587; Mon, 14 Jul 1997 14:45:48 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id OAA19034
for woodcock@real3d.com; Mon, 14 Jul 1997 14:43:39 -0400 (EDT)
Resent-Date: Mon, 14 Jul 1997 14:43:39 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <4152F7B641AFCF11A49800805F680B3F3F094D@RED-36-MSG.dns.microsoft.com>
From: Paul Nash
To: "'gamedesign@mail.digiweb.com'"
Subject: RE: Game AI
Date: Mon, 14 Jul 1997 11:42:42 -0700
X-Priority: 3
X-Mailer: Internet Mail Service (5.0.1458.49)
Resent-Message-ID: <"ddGQN.A.ZjE.7Lnyz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/329
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 4858
Status: RO
More inline...
-Paul R. Nash, Multimedia Developer At Large
Microsoft Multimedia Dev. Intern
http://www.uiuc.edu/ph/www/pr-nash/
(As usual, I'm not speaking for Microsoft -- these ideas and thoughts
are mine alone.)
> -----Original Message-----
> From: woodcock@real3d.com [SMTP:woodcock@real3d.com]
> Sent: Friday, July 11, 1997 4:05 PM
> To: gamedesign@mail.digiweb.com
> Subject: Re: Game AI
>
> > > On Fri, 11 Jul 1997, Steve Schonberger wrote:
> > >
> > hard for the AI to save it's brain to disk? Not in a properly
> designed
> > AI, I say. For instance, genetic algortihms should allow the saving
> of
> > the genes to prescribe a persitent state, I would think. So save
> that
> > for each player or each saved game.
>
> In fact GAs were one of the methods of building a learning AI
> discussed in the sessions, mostly with regards to the Creatures game
> and its A-Life techniques.
>
> One side aspect of this that somebody brought up (it might have
> been me, but I'm not sure) is that by having the AI "brain" loaded
> rather than coded you can release add-on expansion packs for the game
> containing additional AIs. Better yet (separate topic), if the AI
> code is accessible to the player (ala Quake-C), then you can sponsor
> contests amongst players to see who can develop the most devious
> AIs. The best 10 get released on a CD, or posted to a web site.
>
[Paul Nash]
Those are some very cool ideas indeed -- we have a contest here
(UIUC) sponsored by ACM every year where they get teams of people
together and give them shell code for a mech type game. The teams then
have like 24 hours of lab time to plugin the best AI they can come up
with, and then the mechs compete to the death in a tournament. It's a
cool idea, though I think it'd be better if the AI's were developed over
longer periods (though cheating is then a big factor).
> > >
> > > (Ryan says games should throttle back to match the players' level)
> > >
> > right? I don't consider this to be like weightlifting, because it's
> the
> > human brain, which is an entirely different kind of muscle. :)
>
> I don't know about that, Paul. I can see value in adjusting the AI
> so that it's always a bit tougher than the player is. I remember the
> first few times I played C&C I was sweating bullets over the AI, but
> once my expertise got high enough he ceased to become a threat in
> all but the most unbalanced scenarios.
>
> The trick is capturing parameters which you can use to accurately
> judge how experienced a player is.
>
[Paul Nash]
Agreed, the parameters are important. However, what I was
disagreeing with specifically was that Ryan seemed to be suggesting that
a player will always level out at some maximum potential, and I'm not
sure I believe that. However, either way an adjusting AI should be able
to compensate for that. (Perhaps change its tactics to force the player
to do something different -- maybe the AI has multiple game strategies
that it can go between???)
> > That said, the AI should definitely *track* the player, and not
>
> Er....isn't that what Ryan said?
>
[Paul Nash]
Yeah, more or less. :) Like I said, that's not necessarily what
I was objecting to, rather the concept of designing for a finite player
capabilities limit.
> > If you
> > could design an AI that can detect specific gameplay deficiencies in
> the
> > human and somehow adapt to them, that would be very cool.
>
> Genetic algorithsm have a possibility here, provided the game itself
> is "big enough" to permit them time to evolve (they can be notoriously
> slow). Consider a space strategy game ala MOO, in which individual
> ship designs are modified by the AI over time based on those ships
> which do well against the player. If you the player tend to build
> lots of fighters and carriers, then gradually over time the AI will
> adapt to that by building ships based on surviving ship types
> (those that have more anti-fighter defenses).
>
[Paul Nash]
I'd be interested to know if any of those ideas are embodied in
AOE type games insofar as "building" people. What if people got smarter
as their civilization advanced, or learned from previous battles. I
guess Close Combat has some complex soldier AI, but I'd like to see it a
little more subtle -- sort of a developing collective consciousness or
something instead of "Joe Sixpack is scared and he has three rounds
left." That tends to promote micro-management because you know exactly
what everyone is doing and feeling and thus feel obligated to correct it
all.
> > Of course, I am not suggesting that any of this is really easy,
> > but all of it has potential.
>
> That's why they pay us the big bucks! (Well, it's rumored some
> folks
> get big bucks...I wouldn't know..... ;).
>
[Paul Nash]
Hehe. Something like that. :-)
From DPottinger@Ensemble-Studios.com Mon Jul 14 21:12:06 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id VAA20377; Mon, 14 Jul 1997 21:12:05 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id VAA02834; Mon, 14 Jul 1997 21:11:29 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id VAA05482
for woodcock@real3d.com; Mon, 14 Jul 1997 21:10:33 -0400 (EDT)
Resent-Date: Mon, 14 Jul 1997 21:10:33 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <802B50C269DECF11B6A200A0242979EF33CFBA@consulting.ensemble.net>
From: David Pottinger
To: gamedesign@mail.digiweb.com
Subject: RE: Game AI
Date: Mon, 14 Jul 1997 20:12:04 -0500
X-Priority: 3
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.0.1457.3)
Content-Type: text/plain
Resent-Message-ID: <"oZHN1D.A.uOB.e2syz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/330
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2508
Status: RO
> -----Original Message-----
> From: Nick Shaffner [SMTP:nshaf@intur.net]
> Sent: Friday, July 11, 1997 4:35 PM
> To: gamedesign@mail.digiweb.com
> Subject: Re: Game AI
>
> Hey David, remember me? :)
> [] Yup! We're still looking for good people like yourself:)
>
> > You do have to consider the game (as Ryan Drake already mentioned).
> No
> > AI should ever make the game unfun for people to play. Does that
> mean
> > that you shouldn't do learning AI then? No way! There is a lot of
> room
> > for learning AIs to be applied in ways that still make games fun.
>
> Agreed, in addition, one could design the AI with the primary
> purpose
> of making the game fun, rather than victory. This certainly be a more
> difficult task, but (for example) by assessing the users levels of
> interaction with the game, the AI could forcibly create incidents to
> break up 'slow' periods - or perhaps not attack the users when he's
> down
> or running low on resources, etc...
> [] 100% agreement. I'd guess (maybe hope is a better word?:) that
> it's not too long (within the next two years) before we see game AIs
> coming out that can regularly begin to whoop up on everyone w/o
> cheating. At that point, I think the next phase will be to make the
> AIs more full-featured (as opposed to just making them win) like
> you're talking here...
>
> A thought to ponder: How would you determine what a slow period is?
> Is it just the simple calculation of how much interaction the human
> has had with the AI? What about using "cheating" to determine where
> the human is? Is that okay?
>
> > the time to code against. Learning is a great way to help alleviate
> > that problem and thus create a better playing experience.
>
> Agreed, so long as it is used properly - it could possibly be
> used to
> help plug unforseen gaps in game mechanics (perhaps like the sandbag
> thing in C&C) ... It can also help make the game more user extensible,
> for example in Mission to Nexus Prime (plug, plug) - the user can
> actually design/create completely new types of units, and learning is
> essential in order to get the AI to the point where it can use them
> effectively...
> [] Yup, since the trend is to make more and more pieces of games
> open-ended, I think it will place more of an emphasis on the game
> being able to handle unforeseen things.
>
> Nick Shaffner
>
http://www.digifx.net/
>
>
>
> dave
>
> Dave C. Pottinger
> Engine Lead and AI Guy
> Ensemble Studios, Inc.
>
>
From DPottinger@Ensemble-Studios.com Mon Jul 14 21:20:23 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id VAA20554; Mon, 14 Jul 1997 21:20:22 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id VAA02865; Mon, 14 Jul 1997 21:20:21 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id VAA07758
for woodcock@real3d.com; Mon, 14 Jul 1997 21:19:25 -0400 (EDT)
Resent-Date: Mon, 14 Jul 1997 21:19:25 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <802B50C269DECF11B6A200A0242979EF33CFBB@consulting.ensemble.net>
From: David Pottinger
To: "'gamedesign@mail.digiweb.com'"
Subject: RE: Game AI
Date: Mon, 14 Jul 1997 20:21:39 -0500
X-Priority: 3
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.0.1457.3)
Content-Type: text/plain
Resent-Message-ID: <"PwsMLD.A.s0B.c_syz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/331
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2119
Status: RO
> -----Original Message-----
> From: woodcock@real3d.com [SMTP:woodcock@real3d.com]
> Sent: Friday, July 11, 1997 5:49 PM
> To: gamedesign@mail.digiweb.com
> Subject: Re: Game AI
>
> [] [snip]
>
> More on subject, I'm surprised that you actually found strategies to
> be similar enough from game to game to make saving such information
> useful. If I replay a given scenario (and I'll admit it has to
> be pretty compelling for me to do so) I usually try something
> different
> than I did the time before.
> [] The scenario replay learning feature was actually created out of a
> desire to have people who didn't win the first time get a different
> play experience when they replayed. The goal here was to remove the
> need to just optimize your strategy well enough so that you can
> eventually beat the scenario with the same thing you tried to do the
> last five times. If the AI does something markedly different (yet
> still intelligent, etc.) each time you play, then you get a more
> enjoyable experience, I think. It does help out replaying scenarios
> that you've already beaten, too (that just wasn't the genesis of the
> idea).
>
> Your solution of saving general playing tendencies (which I presume
> are things like types of units the player likes to build, battle
> formations they prefer, etc.) for randomly generated games seems very
> clever. The AI in C&C, for example (just to pick on the game we
> picked on in the roundtable discussions) seems to be especially tuned
> for the "canned" scenarios and flails somewhat when presented with
> a new user-designed map.
> [] It's primarily which types of units they like to build along with
> a few other things. AOE is very much a rock-paper-scissors game
> (infantry slaughter archers, but cavalary rocks infantry, etc.), so
> concentrating on the contextual unit prefs of players is what provided
> the most useful info and conveniently takes up very little memory:).
> We did try a lot of other things, though:).
>
> [] [snip]
>
> Steve
>
> dave
>
> Dave C. Pottinger
> Engine Lead and AI Guy
> Ensemble Studios, Inc.
>
>
From stevesch@csealumni.UNL.edu Tue Jul 15 00:29:24 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id AAA24281; Tue, 15 Jul 1997 00:29:23 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id AAA03298; Tue, 15 Jul 1997 00:28:46 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id AAA25823
for woodcock@real3d.com; Tue, 15 Jul 1997 00:27:46 -0400 (EDT)
Resent-Date: Tue, 15 Jul 1997 00:27:46 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <199707150427.XAA21914@smtp.gte.net>
From: "Steve Schonberger"
To:
Subject: Re: Game AI
Date: Mon, 14 Jul 1997 21:23:23 -0700
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1161
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"GwJ3gB.A.pMG.rvvyz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/332
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 5891
Status: RO
> From: Paul Nash
> Date: Monday, July 14, 1997 11:42 AM
[...]
> > One side aspect of this that somebody brought up (it might have
> > been me, but I'm not sure) is that by having the AI "brain" loaded
> > rather than coded you can release add-on expansion packs for the game
> > containing additional AIs. Better yet (separate topic), if the AI
> > code is accessible to the player (ala Quake-C), then you can sponsor
> > contests amongst players to see who can develop the most devious
> > AIs. The best 10 get released on a CD, or posted to a web site.
> >
> [Paul Nash]
> Those are some very cool ideas indeed -- we have a contest here
> (UIUC) sponsored by ACM every year where they get teams of people
> together and give them shell code for a mech type game. The teams then
> have like 24 hours of lab time to plugin the best AI they can come up
> with, and then the mechs compete to the death in a tournament. It's a
> cool idea, though I think it'd be better if the AI's were developed over
> longer periods (though cheating is then a big factor).
That reminds me of the old Apple 2 game "Robot Wars". Each player selected
rules for the robots to fight by, then sent them into the battle arena,
where the computer controlled them all, according to the player-defined
rules. The game also came with a bunch of sets of pre-written rules, so
that solo players had someone to play against. Cool stuff.
In modern usage, it would cool for a multi-user game to treat "bots" as an
approved part of the game, rather than as a cheat. Come up with a good
"bot", and maybe the publisher will add it to the game as a monster and add
the designer to the credits. Of course, it takes a different kind of game
design, if human players are to compete against human player with "bot"
helpers.
[...]
> > The trick is capturing parameters which you can use to accurately
> > judge how experienced a player is.
> >
> [Paul Nash]
> Agreed, the parameters are important. However, what I was
> disagreeing with specifically was that Ryan seemed to be suggesting that
> a player will always level out at some maximum potential, and I'm not
> sure I believe that. However, either way an adjusting AI should be able
> to compensate for that. (Perhaps change its tactics to force the player
> to do something different -- maybe the AI has multiple game strategies
> that it can go between???)
> > > That said, the AI should definitely *track* the player, and not
> >
> > Er....isn't that what Ryan said?
> >
> [Paul Nash]
> Yeah, more or less. :) Like I said, that's not necessarily what
> I was objecting to, rather the concept of designing for a finite player
> capabilities limit.
Aside from adventure and puzzle games, you shouln't be able to "solve" a
game, in the sense of coming to a point of not being able to improve any
further. Sure, there comes a point where your reaction time doesn't get
any better, and your knowledge of the levels doesn't meaningfully improve,
but a game should have depth where even an expert can learn new tricks.
That should be the case in any game that doesn't have a real beginning and
end, or at least the space to keep advancing should be large enough that
hardcore players are still amused by the time the sequal is done.
> > > If you could design an AI that can detect specific
> > > gameplay deficiencies in the human and somehow adapt to
> > > them, that would be very cool.
I'm not convinced that would be so cool, at least not from the "fun" point
of view. It's obviously very cool from the technology point of view!
[...]
> [Paul Nash]
> I'd be interested to know if any of those ideas are embodied in
> AOE type games insofar as "building" people. What if people got smarter
> as their civilization advanced, or learned from previous battles. I
> guess Close Combat has some complex soldier AI, but I'd like to see it a
> little more subtle -- sort of a developing collective consciousness or
> something instead of "Joe Sixpack is scared and he has three rounds
> left." That tends to promote micro-management because you know exactly
> what everyone is doing and feeling and thus feel obligated to correct it
> all.
I still think that the pathological cases of learning forbid using learning
in a game, aside from tuning a few behavior parameters within the framework
of a bunch of pre-written rules that are known to produce a challenging
opponent. Full learning is too likely to produce the phenomenom of leaving
the land attack route completely unguarded if a player attacks ten times
from the sea (to use someone else's example). I think reasonable future
technology limits us to presenting the appearance of learning, rather than
trying to use real learning.
Does anyone remember the "Trillion Credit Squadron" game for the
_Traveller_ game system? The idea of the game was to build a fleet of
space warships, and fight them against other players' fleets. The players
were human, but because of how the rules worked, it turned out that optimal
play for that game consisted of a strategy that produced very boring games,
in part because there was a very specific strategy that always defeated any
other strategy, and in part because that strategy itself wasn't very much
fun. Finding that optimal strategy was kind of fun, in a math-puzzle sort
of way, but once it was found there was no point to playing the game again.
If learning produces boring play, don't learn!
Getting around the sandbag C&C strategy would be something that learning
might be able to do, but pre-written rules could do it too, if play-testing
found out how that strategy broke the game. A nice rule-based solution
would be for the computer to fall for that strategy for 3 games (or
whatever), then start using some pre-written counter-strategy.
Steve Schonberger
From jjudd@matcom.com.au Tue Jul 15 01:35:43 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id BAA25577; Tue, 15 Jul 1997 01:35:42 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id BAA03389; Tue, 15 Jul 1997 01:35:06 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id BAA09171
for woodcock@real3d.com; Tue, 15 Jul 1997 01:33:10 -0400 (EDT)
Resent-Date: Tue, 15 Jul 1997 01:33:10 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID:
From: "Judd, John"
To: "'gamedesign@mail.digiweb.com'"
Subject: RE: Game AI
Date: Tue, 15 Jul 1997 15:00:26 +0930
X-Mailer: Microsoft Exchange Server Internet Mail Connector Version 4.0.995.52
Encoding: 102 TEXT
Resent-Message-ID: <"o63-uB.A.yKC.hswyz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/333
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 4484
Status: RO
>----------
>From: Steve Schonberger[SMTP:stevesch@csealumni.UNL.edu]
>Sent: Tuesday, 15 July 1997 13:53
>To: gamedesign@mail.digiweb.com
>Subject: Re: Game AI
>
>> From: Paul Nash
>> Date: Monday, July 14, 1997 11:42 AM
>[...]
...snip
>
>Aside from adventure and puzzle games, you shouln't be able to "solve" a
>game, in the sense of coming to a point of not being able to improve any
>further. Sure, there comes a point where your reaction time doesn't get
>any better, and your knowledge of the levels doesn't meaningfully improve,
>but a game should have depth where even an expert can learn new tricks.
>That should be the case in any game that doesn't have a real beginning and
>end, or at least the space to keep advancing should be large enough that
>hardcore players are still amused by the time the sequal is done.
In complete agreement
>> > > If you could design an AI that can detect specific
>> > > gameplay deficiencies in the human and somehow adapt to
>> > > them, that would be very cool.
>
>I'm not convinced that would be so cool, at least not from the "fun" point
>of view. It's obviously very cool from the technology point of view!
It may not be fun at a lower level of play, but depending on the level
of play that the player has chosen (this should be a hard option ) it
would force the human player to correct his/her deficiencies based on a
increased challenge.
>[...]
>> [Paul Nash]
>> I'd be interested to know if any of those ideas are embodied in
>> AOE type games insofar as "building" people. What if people got smarter
>> as their civilization advanced, or learned from previous battles. I
>> guess Close Combat has some complex soldier AI, but I'd like to see it a
>> little more subtle -- sort of a developing collective consciousness or
>> something instead of "Joe Sixpack is scared and he has three rounds
>> left." That tends to promote micro-management because you know exactly
>> what everyone is doing and feeling and thus feel obligated to correct it
>> all.
>
>I still think that the pathological cases of learning forbid using learning
>in a game, aside from tuning a few behavior parameters within the framework
>of a bunch of pre-written rules that are known to produce a challenging
>opponent. Full learning is too likely to produce the phenomenom of leaving
>the land attack route completely unguarded if a player attacks ten times
>from the sea (to use someone else's example). I think reasonable future
>technology limits us to presenting the appearance of learning, rather than
>trying to use real learning.
>
But arent we trying to produce a game where the computer plays like a
human player? Humans are very predictable, we fall into habits very
easily. I will guarantee that if a human is playing a game where the
computer always attack from the sea then he/she will build sea defences.
I will go so far as saying that that response would apply even in
multiplayer situations. The trick would be in making a computer player
learn but also have some smarts so that if the human player suddenly
changed tack and attacked by land, the AI would realise that the ten
other attacks were a feint. The next time it played, it would learn that
human players attempt feints and other tactics.
...snip
>
>Getting around the sandbag C&C strategy would be something that learning
>might be able to do, but pre-written rules could do it too, if play-testing
>found out how that strategy broke the game. A nice rule-based solution
>would be for the computer to fall for that strategy for 3 games (or
>whatever), then start using some pre-written counter-strategy.
>
> Steve Schonberger
Rule based systems tend towards predictability no matter how complex.
Play them enough times and the player will work out how to beat them
consistently. Perhaps a mix of rule based and learning, so that the
computer starts out like a candidate that has just graduated from a
militaryacademy knowing all the great battles of the past and playing by
those rules. After a few campaigns the graduate starts learning how to
apply changes to the set of preexisting rules, and finally after many
campaigns is able to define his own rules.
Wouldnt this give most human players a bit of a shock? ;-)
BTW. I am not saying that this is easy.
>
>regards
>
>John Judd
>Programmer
>MATCOM INFORMATION TECHNOLOGIES
>tel: +61-8-8231-8188
>fax: +61-8-8231-8266
>email: jjudd@matcom.com.au
>
>
>
From mark_a@cix.compulink.co.uk Tue Jul 15 01:50:23 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id BAA25839; Tue, 15 Jul 1997 01:50:22 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id BAA03418; Tue, 15 Jul 1997 01:49:46 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id BAA12107
for woodcock@real3d.com; Tue, 15 Jul 1997 01:48:49 -0400 (EDT)
Resent-Date: Tue, 15 Jul 1997 01:48:49 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Date: Tue, 15 Jul 97 06:46 BST-1
From: mark_a@cix.compulink.co.uk (Mark Atkinson)
Subject: RE: Game AI
To: gamedesign@mail.digiweb.com
Cc: mark_a@cix.compulink.co.uk
Reply-To: mark_a@cix.compulink.co.uk
Message-Id:
Resent-Message-ID: <"F3Mb1C.A.B4C.p7wyz"@mail>
Resent-From: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/334
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2308
Status: RO
In-Reply-To: <4152F7B641AFCF11A49800805F680B3F3F094D@RED-36-MSG.dns.microsoft.-
com>
> Agreed, the parameters are important. However, what I was
> disagreeing with specifically was that Ryan seemed to be suggesting that
> a player will always level out at some maximum potential, and I'm not
> sure I believe that. However, either way an adjusting AI should be able
> to compensate for that. (Perhaps change its tactics to force the player
> to do something different -- maybe the AI has multiple game strategies
> that it can go between???)
This discussion is closely analogous to co-evolution, which has been
studied both in the context of genetic algorithms and evolutionary
biology.
One thing this highlights very clearly is there is no such thing as
a "best" strategy, and indeed it's a mistake to even think in such
1-dimensional terms. Even trivial games like Prisoners Dilemma are
fundamentally non-transient (cf scissors-paper-stone); the fitness of a
given strategy is not an absolute value, but can only be interpreted in
the context of its current competitors.
In a co-evolutionary sim, sometimes if you take a 'very good' strategy
from generation 100 and pit it against a 'poor' strategy from gen 10, the
poor strategy wins, due to the more advanced one having become
over-adapted to its sophisticated opponents (ever beaten a good chess
player with 'Fools Mate' because they didn't expect you to pull something
dumb like that?).
In a conventional system, you look at the game, figure out how you would
play it, then try to encode that in your AI. In a true learning/adaptive
system you add an extra level of indirection - you code something which
can encode something which can play the game. In such a system the
program is generic - the 'AI' is in the data, in the parameters. You
don't design something that's good at the game, but something that could
be, and a way for it to get there. You should only understand how it
plays the game by analysing it afterwards.
-=Mark=-
Mark Atkinson Voice: +44 171 828 6990
Technical Director Fax: +44 171 828 6997
Computer Artworks Ltd. Mail: mark@artworks.co.uk
London, UK. Web:
http://www.artworks.co.uk
From drake@cse.psu.edu Tue Jul 15 01:54:56 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id BAA25911; Tue, 15 Jul 1997 01:54:55 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id BAA03422; Tue, 15 Jul 1997 01:54:54 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id BAA13403
for woodcock@real3d.com; Tue, 15 Jul 1997 01:53:58 -0400 (EDT)
Resent-Date: Tue, 15 Jul 1997 01:53:58 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Date: Tue, 15 Jul 1997 01:53:49 -0400 (EDT)
From: Ryan T Drake
To: gamedesign@mail.digiweb.com
Subject: Re: Game AI
In-Reply-To: <199707150427.XAA21914@smtp.gte.net>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Resent-Message-ID: <"wALAG.A.tND.4Axyz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/336
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 3368
Status: RO
On Mon, 14 Jul 1997, Steve Schonberger wrote:
> Aside from adventure and puzzle games, you shouln't be able to "solve" a
> game, in the sense of coming to a point of not being able to improve any
> further. Sure, there comes a point where your reaction time doesn't get
> any better, and your knowledge of the levels doesn't meaningfully improve,
> but a game should have depth where even an expert can learn new tricks.
> That should be the case in any game that doesn't have a real beginning and
> end, or at least the space to keep advancing should be large enough that
> hardcore players are still amused by the time the sequal is done.
Yes. definitely. I was referring to the current strand of games when I
said eventually you will become ``as good as you can get.'' But a good
game can give you the depth you are talking about.
OK.. a bit of nostalgia or us all. Remember the old commie game M.U.L.E.?
What a great game! Althought once you get good the comuter AI isnt much
of a challenge but thats ok because it works to make the game FUN. It
plays much like a human would, and thus is not at all very predictable. I
still play M.U.L.E. today, with a C64 emulator (heh turning a pentium into
a c64 is an improvement as far as i'm concerned)
> I still think that the pathological cases of learning forbid using learning
> in a game, aside from tuning a few behavior parameters within the framework
> of a bunch of pre-written rules that are known to produce a challenging
> opponent. Full learning is too likely to produce the phenomenom of leaving
> the land attack route completely unguarded if a player attacks ten times
> from the sea (to use someone else's example). I think reasonable future
> technology limits us to presenting the appearance of learning, rather than
> trying to use real learning.
Let's step back and think about a C&C style game for a minute... froma
player's perspective. How much do you actually ``learn'' in a game? I
would argue that you dont learn very much, although you do learn new ways
of reacting to opponent's moves.
For instance. When I am playing a c&c like game, i will immediately send
a scout to locate and/or keep tabs on the other player. For the first 3-5
minutes of the game, I will build generally the same buildings in the same
order (as a side note i believe it is a shortcoming to a game if you have
to build the same things every game to win). Anyway, unless my opponent
is doing something out of the ordinary, I will usually stick with strategy
A. Now, if my opponent is doing something out of the ordinary, then its a
different story. I will say, ``he's building alot of ships'' or ``he's
building alot of aircraft'' and this causes me to adjust my strategy. I
dont think it would be very tough to program these kind of rules into an
AI. I.e. if opponent is building ships at this rate, start building
ships. if opponent is building plains at this rate start building
anti-aircraft guns. Top-view wargames are very reaction-oriented games,
and this should be part of the ai for them. Imagine having to play C&C
but you could not know where and how many units your opponent has. You
would feel like the dumb computer player ;-)
Sorry if this didnt make any sense. its 2:00AM.
= Ryan Drake = drake@cse.psu.edu =
http://www.cse.psu.edu/~drake
From woodcock@real3d.com Tue Jul 15 11:11:06 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id LAA11004; Tue, 15 Jul 1997 11:11:05 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id LAA04857; Tue, 15 Jul 1997 11:11:04 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id LAA25021
for woodcock@real3d.com; Tue, 15 Jul 1997 11:09:55 -0400 (EDT)
Resent-Date: Tue, 15 Jul 1997 11:09:55 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
From: woodcock@real3d.com
Message-Id: <9707151507.AA04645@stargazer.real3d.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Tue, 15 Jul 1997 11:07:56 -0400 (EDT)
In-Reply-To: <199707112328.SAA24642@smtp.gte.net> from "Steve Schonberger" at Jul 11, 97 04:27:11 pm
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"wF8jvC.A.u5F.lI5yz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/340
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 3872
Status: RO
> From: stevesch@csealumni.UNL.edu
>
> > From: woodcock@real3d.com
> > Date: Friday, July 11, 1997 3:48 PM
> >
> > Agreed. Adaptation is one of the things promised in the large
> > online games (such as Ultima Online) and seems to me to be a natural
> > next step for AI in games.
>
> For a game like Ultima Online, adaptation doesn't have to be done by
> "learning". It can be done by the game operators noticing that the
> werewolves always kill human players lacking silver weapons, and always die
> against silver-armed humans, and either pulling werewolves out of the game
> or resetting their programming (AI, attributes, and where they show up).
> In other words, adaptation can be accomplished by just changing something
> that's broken on the server, so as soon as the operators see something
> wrong, they can make the server "adapt". A stand-alone game doesn't have
> that option, except by offering updates on the publisher's web site.
I think I disagree on this, or perhaps didn't make my case clearly.
If you read the various designer's interviews regarding Ultima
Online, you'll find that one thing they *don't* want to do is have
game moderators continually "watching" the game. They will, of course,
have people playing, but more to guide adventures, help lost players,
and generally keep the world "fresh". The UO folks believe that the
AI they've built into the game WILL adapt itself to the environment such
that they won't *need* to (as in your example) pull werewolves out of
the game. The UO folks believe that, should werewolves threaten an
area, word will circulate throughout the kingdom of the problem and
some players will head there naturally. When they stop at Ye Olde
Friendly Inne to ask directions and such, Joe the barkeep (an AI)
will mention that Fred the Smithy is selling silver swords. Etc.
At the CGDC I found that several of the upcoming online RPGs
are attempting to do something similar. The sheer size of these games,
with tens of thousands of players, precludes any necessarily small group
of game moderators from riding herd on everybody (unlike on a MUD).
I think an adaptive AI for the NPCs, together with the occassional
"divine intervention" from the moderators, is the answer here.
> That suggests a contrast between "smart" AI and AI with "personality".
> "Smart" werewolves would go into towns in human form and find another town
> if anyone had silver (possibly testing that by sending in a "dumb"
> werewolf), but declare dinner time if no one had silver. But that would
> just massacre such towns, which isn't fun. A werewolf with "personality"
> might seek out towns with lots of silver weapons, either for the challenge,
> or seeking release from their curse. Which is more fun?
Both are fun. Both should happen in a good RPG.
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: woodcock@real3d.com |
| Web:
http://www.cris.com/~swoodcoc/ai.html (Games AI page) |
|
http://www.cris.com/~swoodcoc/software.html (AI Software page) |
|
http://www.cris.com/~swoodcoc/steve.html (Steve Stuff) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| Lockheed Martin Real3D--get *serious* |
+=============================================================================+
From woodcock@real3d.com Tue Jul 15 11:29:40 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id LAA11751; Tue, 15 Jul 1997 11:29:39 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id LAA04964; Tue, 15 Jul 1997 11:29:37 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id LAA00476
for woodcock@real3d.com; Tue, 15 Jul 1997 11:28:39 -0400 (EDT)
Resent-Date: Tue, 15 Jul 1997 11:28:39 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
From: woodcock@real3d.com
Message-Id: <9707151527.AA04690@stargazer.real3d.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Tue, 15 Jul 1997 11:27:28 -0400 (EDT)
In-Reply-To: <199707121723.NAA11689@mail.digiweb.com> from "John Vanderbeck" at Jul 12, 97 11:31:07 am
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"wdqGLD.A.ETH.Ab5yz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/341
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 1279
Status: O
> These "Genetic Algorithims" sound very interesting. Where can I find out
> more about them?
There are several excellent pages out there on them, but the best is
probably Nova Genetica:
http://www.aracnet.com/~wwir/j&p.html.
There are also quite a few books that cover the topic; check your local
bookstore.
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: woodcock@real3d.com |
| Web:
http://www.cris.com/~swoodcoc/ai.html (Games AI page) |
|
http://www.cris.com/~swoodcoc/steve.html (Steve Stuff) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| Lockheed Martin Real3D--get *serious* |
+=============================================================================+
From woodcock@real3d.com Tue Jul 15 11:39:13 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id LAA12110; Tue, 15 Jul 1997 11:39:12 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id LAA05016; Tue, 15 Jul 1997 11:39:10 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id LAA03224
for woodcock@real3d.com; Tue, 15 Jul 1997 11:38:12 -0400 (EDT)
Resent-Date: Tue, 15 Jul 1997 11:38:12 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
From: woodcock@real3d.com
Message-Id: <9707151536.AA04709@stargazer.real3d.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Tue, 15 Jul 1997 11:36:50 -0400 (EDT)
In-Reply-To: <4152F7B641AFCF11A49800805F680B3F3F094D@RED-36-MSG.dns.microsoft.com> from "Paul Nash" at Jul 14, 97 11:42:42 am
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"s2SdSB.A.Sp.rj5yz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/342
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2450
Status: O
> I'd be interested to know if any of those ideas are embodied in
> AOE type games insofar as "building" people. What if people got smarter
> as their civilization advanced, or learned from previous battles. I
> guess Close Combat has some complex soldier AI, but I'd like to see it a
> little more subtle -- sort of a developing collective consciousness or
> something instead of "Joe Sixpack is scared and he has three rounds
> left." That tends to promote micro-management because you know exactly
> what everyone is doing and feeling and thus feel obligated to correct it
> all.
Not so far as I know. The minions in Dungeon Keeper have some
invidual characteristics, but I haven't really seen how that greatly
influences the game. Most games don't deal much with the individual
pieces, of course, they deal with groups.
An exception would be the A-life genre of games that's beginning to
develop...Creatures, Fin-Fin, Dogz and Catz, etc. With those you do
interact with individuals (or small groups of individuals), each of which
definitely learns and adapts in its own way to the events around it.
But those are less games than they are ant-farms...albiet very ENGAGING
ant-farms.
Interestingly enough, the Creatures folks were at CGDC and did mention
that they were working on a new C&C-style game that used the Creatures
learning AI technology. *That* could be interesting....
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: woodcock@real3d.com |
| Web:
http://www.cris.com/~swoodcoc/ai.html (Games AI page) |
|
http://www.cris.com/~swoodcoc/software.html (AI Software page) |
|
http://www.cris.com/~swoodcoc/steve.html (Steve Stuff) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| Lockheed Martin Real3D--get *serious* |
+=============================================================================+
From t-pauln@microsoft.com Tue Jul 15 14:51:03 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id OAA19322; Tue, 15 Jul 1997 14:51:02 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id OAA05745; Tue, 15 Jul 1997 14:50:55 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id OAA11437
for woodcock@real3d.com; Tue, 15 Jul 1997 14:49:53 -0400 (EDT)
Resent-Date: Tue, 15 Jul 1997 14:49:53 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <4152F7B641AFCF11A49800805F680B3F3F095C@RED-36-MSG.dns.microsoft.com>
From: Paul Nash
To: "'gamedesign@mail.digiweb.com'"
Subject: RE: Game AI
Date: Tue, 15 Jul 1997 11:47:13 -0700
X-Priority: 3
X-Mailer: Internet Mail Service (5.0.1458.49)
Resent-Message-ID: <"tfyDEB.A.NlC.PW8yz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/347
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 1503
Status: RO
Regarding the part below, I want to add a little calrity to my original
comment. At that time I was thinking more about a computer player
adapting to not take advantage of your weaknesses as much so that a
particular weakness won't get you killed all the time. To extend my
original Mavis Beacon metaphor, perhaps your trusted sage or military
advisor could "help" you realize your deficiencies and improve upon
them. Anyways, my point was that the AI should be very flexible so as to
not only adapt to your strengths with more competition, but to your
weaknesses with less (if it is to always be a "fair" competitor). This
doesn't mean handicapping the AI with the same weaknesses, just perhaps
slowing the effect of an AI that blatantly leverages your weaknesses --
the amount of leveraging could scale with AI difficulty settings.
-Paul R. Nash, Multimedia Developer At Large
Microsoft Multimedia Dev. Intern
http://www.uiuc.edu/ph/www/pr-nash/
(My comments, not Microsoft's views)
> -----Original Message-----
> From: Steve Schonberger [SMTP:stevesch@csealumni.UNL.edu]
> Sent: Monday, July 14, 1997 9:23 PM
> To: gamedesign@mail.digiweb.com
> Subject: Re: Game AI
>
>
> > > > If you could design an AI that can detect specific
> > > > gameplay deficiencies in the human and somehow adapt to
> > > > them, that would be very cool.
>
> I'm not convinced that would be so cool, at least not from the "fun"
> point
> of view. It's obviously very cool from the technology point of view!
>
>
From nshaf@intur.net Tue Jul 15 16:04:43 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id QAA22301; Tue, 15 Jul 1997 16:04:43 -0400
Received: from www.intur.net by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id QAA06159; Tue, 15 Jul 1997 16:04:36 -0400
Received: from grim by www.intur.net via ESMTP (940816.SGI.8.6.9/940406.SGI)
for id PAA03174; Tue, 15 Jul 1997 15:06:02 -0500
Message-Id: <199707152006.PAA03174@www.intur.net>
From: "Nick Shaffner"
To:
Subject: Re: Game AI
Date: Tue, 15 Jul 1997 15:03:19 -0500
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1161
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Content-Length: 1253
Status: RO
> > Letting the enemy units pathfind around the exact edge of the known
human
> > units' visible range would be fine with me ( in order to enable some of
my
> > eakiness' algo - which makes the game significantly more fun )
>
> That's an interesting thought, actually, and it would be difficult for
> the human player to detect.
I've got it running in Mission to Nexus Prime, and boy does it make the AI
look a *Lot* smarter than it actually is - having the AI stage
sneak/suprise attacks adds immensely to the gameplay, and also tends to
encourage players to play significantly more defensively.
> everybody stacked up like rush hour traffic in LA. I proposed that one
> could "cheat" and simply "teleport" non-moving peons to their
destination,
> thus avoiding having to build a nasty analysis function of some kind to
> figure out how to cross the bridge. My reasoning was as follows:
I definantly agree with your reasoning here, although I think I would
temporarily disable unit-unit collisions for the trapped units before
teleporting them - just so I didn't accidentally teleport them past some
otherwise impassible barrier.
> Thanks Nick; I'll add this to the Games AI page shortly. Very
interesting.
Nifty :-)
Nick
From dpaulsen@avana.net Wed Jul 16 02:17:42 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id CAA06339; Wed, 16 Jul 1997 02:17:41 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id CAA07690; Wed, 16 Jul 1997 02:17:40 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id CAA23385
for woodcock@real3d.com; Wed, 16 Jul 1997 02:16:42 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 02:16:42 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <01BC918E.028FE6E0@atl496.avana.net>
From: David Paulsen
To: "'Game Design Mailing List'"
Subject: Re: Game AI
Date: Wed, 16 Jul 1997 02:14:35 -0400
Encoding: 53 TEXT
Resent-Message-ID: <"wZtNBB.A.UjF.uaGzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/352
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 1702
Status: RO
On Saturday, July 12, 1997 12:49 AM, Orlando
Llanes[SMTP:ollanes@accesspro.net] wrote:
> I like the AI from X-Com. I'm not AI expert, but the X-Com AI looks to be
a
> bit complicated. For example, if you shoot at the alien, they move out of
> the way after the shots are fired. If you were spotted by them, and then
> you run back for cover, they will look for you. They leave their
> fallen/landed spacecraft and seek buildings from which to shoot from high
> above. Etc.
X-Com Rules!
It's clear that the aliens have various "modes" of behavior. One might
classify some as:
1. Stand by; wait for target of opportunity.
2. Find the humanoid.
3. Attack.
4. Hide.
5. Panic -- freeze.
6. Panic -- run away.
By switching between modes as dictated by health, weapons availability, and
mental state a decent simulation of "real" behavior can emerge. But the
behaviors are hard-wired; no real learning happens.
> Another cool thing about X-Com is that the soldiers don't fire accurately
> on the first mission, they gradually get more accurate.
The aliens too.
This could easily be handled by a simple accuracy percentage calculation.
They start off at (say) .33 accuracy, which increments to near .99
accuracy as they gain experience. The machine simply rolls a die to
determine if it's a hit or miss, then selects the target tile accordingly:
either a human (hit) or something else (miss).
If the projectile intersects anything on its way to its intended
target-tile, so be it. I've had troopers who were DREADFUL shots, whose
"miss" actually hit an alien other than the one aimed at! Cool.
[snip]
David
--
David Paulsen
dpaulsen@msn.com
dpaulsen@phss.com
dpaulsen@avana.net
From rick@polylang.com Wed Jul 16 04:43:55 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id EAA08945; Wed, 16 Jul 1997 04:43:54 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id EAA07933; Wed, 16 Jul 1997 04:43:53 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id EAA20840
for woodcock@real3d.com; Wed, 16 Jul 1997 04:42:56 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 04:42:56 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <3.0.32.19970716093518.0077f7f4@MAILHOST>
X-Sender: rick@MAILHOST
X-Mailer: Windows Eudora Pro Version 3.0 (32)
Date: Wed, 16 Jul 1997 09:36:12 +0100
To: gamedesign@mail.digiweb.com
From: rick cronan
Subject: RE: Game AI
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Resent-Message-ID: <"Ixj0a.A.49E.YkIzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/353
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 1439
Status: RO
At 11:47 15/07/97 -0700, you wrote:
>Regarding the part below, I want to add a little calrity to my original
>comment. At that time I was thinking more about a computer player
>adapting to not take advantage of your weaknesses as much so that a
>particular weakness won't get you killed all the time. To extend my
>original Mavis Beacon metaphor, perhaps your trusted sage or military
>advisor could "help" you realize your deficiencies and improve upon
>them.
I think that this is a superb idea. Many games have built in tutorial
levels - Dungeon Keeper being the most recent example that I have spent far
too much time on, but they are always (?) time or event driven snippets of
info.
An assistant that monitored your playing style and offered advice based on
your own deficiencies - or in a network game based on your opponent's
strengths - would be excellent. Having your lieutenant offer advice -
"Sir, I've noticed a pattern to the enemy attacks..." after you got
slaughtered several times in a row might well help keep the frustration
factor at bay.
At the moment I suspect that the processing overheads of such a system
would outway the benefits though.
| rick cronan | email: rick@polylang.com |
| production manager | phone: +44 (0) 114 267 0017 |
| cool beans productions ltd | fax: +44 (0) 114 268 7487 |
| url:
http://www.polylang.com/polylang2/Coolbeans/home.htm |
From r.blum@advertainment.com Wed Jul 16 05:12:24 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id FAA09496; Wed, 16 Jul 1997 05:12:23 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id FAA07978; Wed, 16 Jul 1997 05:12:21 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id FAA29151
for woodcock@real3d.com; Wed, 16 Jul 1997 05:11:24 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 05:11:24 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <199707160913.LAA00432@hq.seicom.net>
From: "Robert Blum"
To:
Subject: Re: Game AI
Date: Wed, 16 Jul 1997 11:01:01 +0200
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1157
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"S6Z8.A.gCH.x_Izz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/354
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 5362
Status: RO
Steve Schonberger wrote
[snip]
> That reminds me of the old Apple 2 game "Robot Wars". Each player
selected
> rules for the robots to fight by, then sent them into the battle arena,
[snip]
> In modern usage, it would cool for a multi-user game to treat "bots" as
an
> approved part of the game, rather than as a cheat. Come up with a good
> "bot", and maybe the publisher will add it to the game as a monster and
add
> the designer to the credits. Of course, it takes a different kind of
game
> design, if human players are to compete against human player with "bot"
> helpers.
The problem is gaming is becoming more 'mature'. Most people playing the
newest games don't know how to program a bot. That's the main reason
they're regarded as a cheat. So only programming-literate people can
produce their own bots.
A solution to this could be some kind of visual programming, combined with
preconfigured modules. However this means a lot of effort in a part that is
not directly related to producing a good game.
If you can convince your publisher/producer that it is an important part of
gameplay, fine. But I guess he'll tell you should rather finish your game
:)
[snip]
> > > > If you could design an AI that can detect specific
> > > > gameplay deficiencies in the human and somehow adapt to
> > > > them, that would be very cool.
> I'm not convinced that would be so cool, at least not from the "fun"
point
> of view. It's obviously very cool from the technology point of view!
It can be cool from the fun point too. You 'just' need to take care the
computer does not exploit this knowledge when the player seems to weak.
It's like teaching a board or card game to someone else. As long as you
feel your opponent is not up to your skills, you don't use every mistake he
makes. You probably even tell him about the mistakes, so you get a more
powerful opponent later on.
It's some kind of an improved tutorial, if you could implement this.
But at least you could quite easily restrict the AI to never becoming an
overwhelming opponent, because that's no fun.
> I still think that the pathological cases of learning forbid using
learning
> in a game, aside from tuning a few behavior parameters within the
framework
> of a bunch of pre-written rules that are known to produce a challenging
> opponent. Full learning is too likely to produce the phenomenom of
leaving
> the land attack route completely unguarded if a player attacks ten times
> from the sea (to use someone else's example). I think reasonable future
> technology limits us to presenting the appearance of learning, rather
than
> trying to use real learning.
I rather think it's a question of the implementation. If your AI only makes
the decisions based on learning from the game, it's supposed to fall very
hard. At least for the first three or four times. By then it should have
learned that a longer sea attack always precedes a land attack.
But, as we're talking strategy games, decisions in a battlefield are always
influenced by what you learned at your military academy. The equivalent
thing for games would be supplying some kind of basic knowledge. Either
hardcoded, or acquired during the beta test phase.
I personally prefer the latter: This opens up the possibility for players
to 'train' their games AI from scratch. (And perhaps use it in matches
against AI's of his friends)
> Does anyone remember the "Trillion Credit Squadron" game for the
> _Traveller_ game system? The idea of the game was to build a fleet of
[snip]
> fun. Finding that optimal strategy was kind of fun, in a math-puzzle
sort
> of way, but once it was found there was no point to playing the game
again.
> If learning produces boring play, don't learn!
Not agreed. If the only goal of learning is winning the game, *then* don't
learn. If the goal of learning is producing an interesting opponent for the
player, learning is good.
Let me bring up another example: Magic The Gathering
When you've mastered the basics of the game, you will realize there are
some kinds of decks that will win in the long run against most of the other
decks. But as you play these decks you'll recognize two things:
a) You're bored to death while playing
b) There will be somebody to implement an even more boring technique to
beat this deck
So if it is possible to 'measure' boredom, your learning algorithm could
evaluate boring strategies as bad.
> Getting around the sandbag C&C strategy would be something that learning
> might be able to do, but pre-written rules could do it too, if
play-testing
> found out how that strategy broke the game.
The important point here: *IF* play-testing found out. You'll never be able
to test all strategies... And I guarantee there are some sick minds out
there that will find a strategy your rule based AI can not cope
with and that is completely easy to play.
> A nice rule-based solution
> would be for the computer to fall for that strategy for 3 games (or
> whatever), then start using some pre-written counter-strategy.
Which a player would obviously notice. And use sandbags only for 3 games,
afterwards switching to something different.
The problem is your player is learning too. So the behaviour of your AI
must be highly dynamic, otherwise somebody will find out a scheme to beat
it.
Bye,
Robert Blum
From r.blum@advertainment.com Wed Jul 16 05:25:39 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id FAA09694; Wed, 16 Jul 1997 05:25:39 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id FAA07990; Wed, 16 Jul 1997 05:25:33 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id FAA01424
for woodcock@real3d.com; Wed, 16 Jul 1997 05:24:35 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 05:24:35 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <199707160927.LAA00852@hq.seicom.net>
From: "Robert Blum"
To:
Subject: Re: Game AI
Date: Wed, 16 Jul 1997 11:24:39 +0200
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1157
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"bGjT3D.A.PU.cMJzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/355
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2530
Status: RO
From: woodcock@real3d.com
> Not so far as I know. The minions in Dungeon Keeper have some
> invidual characteristics, but I haven't really seen how that greatly
> influences the game.
Uh, I've not seen their characteristics influence the game at all. Dungeon
Keeper is my biggest disappointment this year. VGA-style graphics and a
not-so-overwhelming AI. I've been through the first ten levels while
simultaneously talking with somebody else, and I'm a rather bad player when
it comes to realtime :)
> Most games don't deal much with the individual
> pieces, of course, they deal with groups.
Shouldn't that be: They're supposed to deal with groups? I'm still looking
for a wargame supporting a concept like an assistant general commanding
your groups for you. The only one I've seen to date is EARTH2140 AD.
It looks promising, but I've not played more than the first four missions.
> An exception would be the A-life genre of games that's beginning to
> develop...Creatures, Fin-Fin, Dogz and Catz, etc. With those you do
> interact with individuals (or small groups of individuals), each of which
> definitely learns and adapts in its own way to the events around it.
Yes, and this keeps fascinating people who really don't like 'normal'
computer games. It's the feeling you're not anymore sitting in front of a
calculator. I'm currently playing with the concept of an intelligent
advisor. Has anybody else checked out Microsoft Agent? A really great tool
for testing out those concepts.
> But those are less games than they are ant-farms...albiet very ENGAGING
> ant-farms.
Lets say it's a replacement for pets. I had as well Dogz as Creatures
installed on my computer at home, but I never touched them again since my
two real cats came into the house.
> Interestingly enough, the Creatures folks were at CGDC and did mention
> that they were working on a new C&C-style game that used the Creatures
> learning AI technology. *That* could be interesting....
Yep. There was an article about the game in an issue of EDGE, about 6
months ago. Not sure if it was made by the creatures people, but they
claimed it would use ALife technologies. It was called 'Legions of Steel',
I think. It sounded very interesting.
The most impressing claim they made was you could leave a multi player game
at any time, a virtual general would take over for you, and you could
rejoin later on. I know I'll loose a lot of sleep when this one comes
out.:)
Bye,
Robert Blum
(Lead Programmer at Rauser ADVERTAINMENT(tm) GmbH)
From condor@neotechonline.com Wed Jul 16 08:01:19 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id IAA12683; Wed, 16 Jul 1997 08:01:18 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id IAA08234; Wed, 16 Jul 1997 08:01:14 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id IAA01130
for woodcock@real3d.com; Wed, 16 Jul 1997 08:00:14 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 08:00:14 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <199707161158.HAA00882@mail.digiweb.com>
From: "John Vanderbeck (NeoTECH)"
To:
Subject: Re: Game AI
Date: Wed, 16 Jul 1997 07:07:42 -0500
MIME-Version: 1.0
Content-Type: text/plain;
charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 4.71.1008.3
X-MimeOle: Produced By Microsoft MimeOLE Engine V4.71.1008.3
Resent-Message-ID: <"x5WeGC.A.LO.0dLzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/357
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 3458
Status: RO
>Shouldn't that be: They're supposed to deal with groups? I'm still looking
>for a wargame supporting a concept like an assistant general commanding
>your groups for you. The only one I've seen to date is EARTH2140 AD.
>It looks promising, but I've not played more than the first four missions.
Our game, NovaStar will have this type of AI. You will give general
commands to the top of the chain and they work they're way down to the
bottom as more specific commands. Like real life.
Thanks,
John Vanderbeck
Lead Programmer - NeoTECH Online
condor@neotechonline.com
http://www.neotechonline.com
GAME DESIGN mailing list:
Email gamedesign-request@digiweb.com , SUBJECT subscribe
-----Original Message-----
From: Robert Blum
To: gamedesign@mail.digiweb.com
Date: Wednesday, July 16, 1997 4:24 AM
Subject: Re: Game AI
>From: woodcock@real3d.com
>> Not so far as I know. The minions in Dungeon Keeper have some
>> invidual characteristics, but I haven't really seen how that greatly
>> influences the game.
>
>Uh, I've not seen their characteristics influence the game at all. Dungeon
>Keeper is my biggest disappointment this year. VGA-style graphics and a
>not-so-overwhelming AI. I've been through the first ten levels while
>simultaneously talking with somebody else, and I'm a rather bad player when
>it comes to realtime :)
>
>> Most games don't deal much with the individual
>> pieces, of course, they deal with groups.
>Shouldn't that be: They're supposed to deal with groups? I'm still looking
>for a wargame supporting a concept like an assistant general commanding
>your groups for you. The only one I've seen to date is EARTH2140 AD.
>It looks promising, but I've not played more than the first four missions.
>
>> An exception would be the A-life genre of games that's beginning to
>> develop...Creatures, Fin-Fin, Dogz and Catz, etc. With those you do
>> interact with individuals (or small groups of individuals), each of which
>> definitely learns and adapts in its own way to the events around it.
>Yes, and this keeps fascinating people who really don't like 'normal'
>computer games. It's the feeling you're not anymore sitting in front of a
>calculator. I'm currently playing with the concept of an intelligent
>advisor. Has anybody else checked out Microsoft Agent? A really great tool
>for testing out those concepts.
>
>> But those are less games than they are ant-farms...albiet very ENGAGING
>> ant-farms.
>Lets say it's a replacement for pets. I had as well Dogz as Creatures
>installed on my computer at home, but I never touched them again since my
>two real cats came into the house.
>> Interestingly enough, the Creatures folks were at CGDC and did mention
>> that they were working on a new C&esngC-style game that used the
Creatures
>> learning AI technology. *That* could be interesting....
>Yep. There was an article about the game in an issue of EDGE, about 6
>months ago. Not sure if it was made by the creatures people, but they
>claimed it would use ALife technologies. It was called 'Legions of Steel',
>I think. It sounded very interesting.
>The most impressing claim they made was you could leave a multi player game
>at any time, a virtual general would take over for you, and you could
>rejoin later on. I know I'll loose a lot of sleep when this one comes
>out.:)
>Bye,
>Robert Blum
> (Lead Programmer at Rauser ADVERTAINMENT(tm) GmbH)
>
>
From woodcock@real3d.com Wed Jul 16 11:14:42 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id LAA20153; Wed, 16 Jul 1997 11:14:41 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id LAA09089; Wed, 16 Jul 1997 11:14:38 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id LAA14146
for woodcock@real3d.com; Wed, 16 Jul 1997 11:13:39 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 11:13:39 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
From: woodcock@real3d.com
Message-Id: <9707161512.AA05471@stargazer.real3d.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Wed, 16 Jul 1997 11:12:25 -0400 (EDT)
In-Reply-To: <199707160927.LAA00852@hq.seicom.net> from "Robert Blum" at Jul 16, 97 11:24:39 am
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"3maEyC.A.bUD.zSOzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/359
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 3405
Status: RO
> From: woodcock@real3d.com
> > Not so far as I know. The minions in Dungeon Keeper have some
> > invidual characteristics, but I haven't really seen how that greatly
> > influences the game.
>
> Uh, I've not seen their characteristics influence the game at all. Dungeon
> Keeper is my biggest disappointment this year. VGA-style graphics and a
> not-so-overwhelming AI. I've been through the first ten levels while
> simultaneously talking with somebody else, and I'm a rather bad player when
> it comes to realtime :)
Yeah, the AI is a bit weak at times. Word on the Net from the developers is
that they tried to "tune" the AI for each specific mission. This has in
turn yielded massive criticism from players who find it smart on one level
and dumb as a brick in the next. Which leads naturally to my question....
Do we *really* want to build AIs that try to match the experience level
of the players? After all, that's what the DK designers did...and they've
caught nothing but flack for it. I submit that, perhaps, we don't have
to worry about an AI "so good it overwhelms the player" (as somebody else
has suggested)...we have to worry about building an AI good enough to give
a good fight, period.
> > Interestingly enough, the Creatures folks were at CGDC and did mention
> > that they were working on a new C&C-style game that used the Creatures
> > learning AI technology. *That* could be interesting....
>
> Yep. There was an article about the game in an issue of EDGE, about 6
> months ago. Not sure if it was made by the creatures people, but they
> claimed it would use ALife technologies. It was called 'Legions of Steel',
> I think. It sounded very interesting.
> The most impressing claim they made was you could leave a multi player game
> at any time, a virtual general would take over for you, and you could
> rejoin later on. I know I'll loose a lot of sleep when this one comes
> out.:)
Me too.
Having my soldiers be smart enough to run away from flamethrowers would
be a great improvement over C&C's AI. Mind you, adaptive ALife technology
is NOT needed for that, and could in fact lead to complications--if I
ORDER the solider to suicide I sorta expect him to do it. That might not
be a realistic outcome for the "real world", but for a "game"....the pawn
doesn't get a vote. I could see some very frustrated players if units
in a C&C game absolutely refuse to obey orders.....
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: woodcock@real3d.com |
| Web:
http://www.cris.com/~swoodcoc/ai.html (Games AI page) |
|
http://www.cris.com/~swoodcoc/steve.html (Steve Stuff) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| Lockheed Martin Real3D--get *serious* |
+=============================================================================+
From t-pauln@microsoft.com Wed Jul 16 14:48:09 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id OAA28842; Wed, 16 Jul 1997 14:48:08 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id OAA09864; Wed, 16 Jul 1997 14:48:07 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id OAA03246
for woodcock@real3d.com; Wed, 16 Jul 1997 14:47:09 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 14:47:09 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <4152F7B641AFCF11A49800805F680B3F3F0973@RED-36-MSG.dns.microsoft.com>
From: Paul Nash
To: "'gamedesign@mail.digiweb.com'"
Subject: RE: Game AI
Date: Wed, 16 Jul 1997 11:43:20 -0700
X-Priority: 3
X-Mailer: Internet Mail Service (5.0.1458.49)
Resent-Message-ID: <"VhRcjC.A.Ak.pZRzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/361
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 2422
Status: RO
Not that you mention it, StarFox64 does that to a certain extent both in
its training mode and in its first few levels (tapering off as you go
further). However, that game -- while really fun -- is obviously
scripted. The NPC players do the same things each time and you have to
respond correctly or they'll die. That part gets repetetive.
I agree that processing overhead is a significant factor, but also the
ability to intelligently analyze the players moves as part of a coherent
scheme. The design of the game probably does a lot to determine how easy
it is for an AI to figure out what strategies the PC is using.
-Paul R. Nash, Multimedia Developer At Large
Microsoft Multimedia Dev. Intern
http://www.uiuc.edu/ph/www/pr-nash/
> -----Original Message-----
> From: rick cronan [SMTP:rick@polylang.com]
> Sent: Wednesday, July 16, 1997 1:36 AM
> To: gamedesign@mail.digiweb.com
> Subject: RE: Game AI
>
> At 11:47 15/07/97 -0700, you wrote:
> >Regarding the part below, I want to add a little calrity to my
> original
> >comment. At that time I was thinking more about a computer player
> >adapting to not take advantage of your weaknesses as much so that a
> >particular weakness won't get you killed all the time. To extend my
> >original Mavis Beacon metaphor, perhaps your trusted sage or military
> >advisor could "help" you realize your deficiencies and improve upon
> >them.
>
> I think that this is a superb idea. Many games have built in tutorial
> levels - Dungeon Keeper being the most recent example that I have
> spent far
> too much time on, but they are always (?) time or event driven
> snippets of
> info.
>
> An assistant that monitored your playing style and offered advice
> based on
> your own deficiencies - or in a network game based on your opponent's
> strengths - would be excellent. Having your lieutenant offer advice -
> "Sir, I've noticed a pattern to the enemy attacks..." after you got
> slaughtered several times in a row might well help keep the
> frustration
> factor at bay.
>
> At the moment I suspect that the processing overheads of such a system
> would outway the benefits though.
>
>
> | rick cronan | email: rick@polylang.com |
> | production manager | phone: +44 (0) 114 267 0017 |
> | cool beans productions ltd | fax: +44 (0) 114 268 7487 |
> | url:
http://www.polylang.com/polylang2/Coolbeans/home.htm |
From edybs@ix.netcom.com Wed Jul 16 16:00:57 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id QAA02375; Wed, 16 Jul 1997 16:00:56 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id QAA10246; Wed, 16 Jul 1997 16:00:55 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id PAA18071
for woodcock@real3d.com; Wed, 16 Jul 1997 15:59:57 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 15:59:57 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <33CD28F3.156A@ix.netcom.com>
Date: Wed, 16 Jul 1997 14:02:59 -0600
From: Eric Dybsand
Reply-To: edybs@ix.netcom.com
X-Mailer: Mozilla 3.0C-NC320 (Win95; I)
MIME-Version: 1.0
To: gamedesign@mail.digiweb.com
Subject: Re: Game AI
References: <4152F7B641AFCF11A49800805F680B3F3F0973@RED-36-MSG.dns.microsoft.com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"dIgjVD.A.RXE.RfSzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/362
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 3613
Status: RO
PMFIJ, I have been enjoying reading this thread on getting an AIP
to figure out what the human player is doing, but I have a question
to pose to those folks in this discussion.
How does an AIP figure out a feint by the player? In other words,
if a player is setting up an opponent with a diversionary attack
or maneuver, then how can the AIP differeniate those units that
are involved in the actual attack, and those units involved in
the feint or diversion, when both sets of units ultimately are
involved in attacking the AIP?
The classical maneuver that comes to mind, is where a player has
a small group of units fake a frontal assault, to lure defenders
toward the "front" and then at the same time maneuvers a larger
group of units around one or more flanks, and upon successfully
engaging the defenders from the flanks, orders the faked frontal
assault to become a real assault.
Thanks.
Eric
----
Eric Dybsand
http://pw2.netcom.com/~edybs
Glacier Edge Technology email: edybs@ix.netcom.com
Glendale, Colorado, USA
Paul Nash wrote:
>
> Not that you mention it, StarFox64 does that to a certain extent both in
> its training mode and in its first few levels (tapering off as you go
> further). However, that game -- while really fun -- is obviously
> scripted. The NPC players do the same things each time and you have to
> respond correctly or they'll die. That part gets repetetive.
>
> I agree that processing overhead is a significant factor, but also the
> ability to intelligently analyze the players moves as part of a coherent
> scheme. The design of the game probably does a lot to determine how easy
> it is for an AI to figure out what strategies the PC is using.
>
> -Paul R. Nash, Multimedia Developer At Large
> Microsoft Multimedia Dev. Intern
http://www.uiuc.edu/ph/www/pr-nash/
>
> > -----Original Message-----
> > From: rick cronan [SMTP:rick@polylang.com]
> > Sent: Wednesday, July 16, 1997 1:36 AM
> > To: gamedesign@mail.digiweb.com
> > Subject: RE: Game AI
> >
> > At 11:47 15/07/97 -0700, you wrote:
> > >Regarding the part below, I want to add a little calrity to my
> > original
> > >comment. At that time I was thinking more about a computer player
> > >adapting to not take advantage of your weaknesses as much so that a
> > >particular weakness won't get you killed all the time. To extend my
> > >original Mavis Beacon metaphor, perhaps your trusted sage or military
> > >advisor could "help" you realize your deficiencies and improve upon
> > >them.
> >
> > I think that this is a superb idea. Many games have built in tutorial
> > levels - Dungeon Keeper being the most recent example that I have
> > spent far
> > too much time on, but they are always (?) time or event driven
> > snippets of
> > info.
> >
> > An assistant that monitored your playing style and offered advice
> > based on
> > your own deficiencies - or in a network game based on your opponent's
> > strengths - would be excellent. Having your lieutenant offer advice -
> > "Sir, I've noticed a pattern to the enemy attacks..." after you got
> > slaughtered several times in a row might well help keep the
> > frustration
> > factor at bay.
> >
> > At the moment I suspect that the processing overheads of such a system
> > would outway the benefits though.
> >
> >
> > | rick cronan | email: rick@polylang.com |
> > | production manager | phone: +44 (0) 114 267 0017 |
> > | cool beans productions ltd | fax: +44 (0) 114 268 7487 |
> > | url:
http://www.polylang.com/polylang2/Coolbeans/home.htm |
From t-pauln@microsoft.com Wed Jul 16 18:13:07 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id SAA07247; Wed, 16 Jul 1997 18:13:07 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id SAA10820; Wed, 16 Jul 1997 18:13:05 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id SAA11296
for woodcock@real3d.com; Wed, 16 Jul 1997 18:12:08 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 18:12:08 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <4152F7B641AFCF11A49800805F680B3F3F0978@RED-36-MSG.dns.microsoft.com>
From: Paul Nash
To: gamedesign@mail.digiweb.com
Subject: RE: Game AI
Date: Wed, 16 Jul 1997 15:09:53 -0700
X-Priority: 3
X-Mailer: Internet Mail Service (5.0.1458.49)
Resent-Message-ID: <"FEYcRB.A.qpC.7aUzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/363
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 6024
Status: RO
On one of the external Age of Empires pages (
http://www.microsoft.com/games/empires/behind.htm ), Dave Pottinger
talks about the AIP's "playbook." Basically, (correct me where wrong
Dave :) ) the computer knows how to do certain types of strategies that
are common, including frontal assault. So, since AOE is a killer game
(I'm addicted) we can assume that such an approach is workable. The
question I pose then, is why not use a playbook for the AI's defense?
An AIP could analyze the situation on the map and put it through it's
"playbook ranking" function which would try to determine if what's going
on in the playbook looks like anything it knows as a standard military
tactic. If so, the AIP should adapt to defend against the threat. If
it doesn't recognize the possible strategy (PC develops his own new
tactic) then it should start "learning" by keeping track of information
that will let it recognize the strategy next time. This could be
generalized unit positions, types of attacks, etc. But also, the AIP
would want to keep some info about what it did to defend against the
attack and how successful that defense was and what the PC ended up
doing (with the multiple units, for instance).
If I'm not mistaken, this is basically an expert system with learning
capabilities. These are just some ideas of mine, in the context of this
thread and more specifically, a strategy game, which has been the
implied topic of much of this thread. There are obviously some issues to
deal with, like what's important for the AIP to watch and how is the AIP
going to "recognize" situations. Hey -- aren't there projects to build
medical diagnostics machines which do things like this? Perhaps there is
some research material available in that area that could help us develop
the next generation of killer AIP's!
-Paul R. Nash, Multimedia Developer At Large
Microsoft Multimedia Dev. Intern
http://www.uiuc.edu/ph/www/pr-nash/
(As usual, the thoughts above are mine alone, and do not represent the
views of Microsoft Corp.)
> -----Original Message-----
> From: Eric Dybsand [SMTP:edybs@ix.netcom.com]
> Sent: Wednesday, July 16, 1997 1:03 PM
> To: gamedesign@mail.digiweb.com
> Subject: Re: Game AI
>
> PMFIJ, I have been enjoying reading this thread on getting an AIP
> to figure out what the human player is doing, but I have a question
> to pose to those folks in this discussion.
>
> How does an AIP figure out a feint by the player? In other words,
> if a player is setting up an opponent with a diversionary attack
> or maneuver, then how can the AIP differeniate those units that
> are involved in the actual attack, and those units involved in
> the feint or diversion, when both sets of units ultimately are
> involved in attacking the AIP?
>
> The classical maneuver that comes to mind, is where a player has
> a small group of units fake a frontal assault, to lure defenders
> toward the "front" and then at the same time maneuvers a larger
> group of units around one or more flanks, and upon successfully
> engaging the defenders from the flanks, orders the faked frontal
> assault to become a real assault.
>
> Thanks.
>
> Eric
> ----
> Eric Dybsand
http://pw2.netcom.com/~edybs
> Glacier Edge Technology email: edybs@ix.netcom.com
> Glendale, Colorado, USA
>
>
> Paul Nash wrote:
> >
> > Not that you mention it, StarFox64 does that to a certain extent
> both in
> > its training mode and in its first few levels (tapering off as you
> go
> > further). However, that game -- while really fun -- is obviously
> > scripted. The NPC players do the same things each time and you have
> to
> > respond correctly or they'll die. That part gets repetetive.
> >
> > I agree that processing overhead is a significant factor, but also
> the
> > ability to intelligently analyze the players moves as part of a
> coherent
> > scheme. The design of the game probably does a lot to determine how
> easy
> > it is for an AI to figure out what strategies the PC is using.
> >
> > -Paul R. Nash, Multimedia Developer At Large
> > Microsoft Multimedia Dev. Intern
http://www.uiuc.edu/ph/www/pr-nash/
> >
> > > -----Original Message-----
> > > From: rick cronan [SMTP:rick@polylang.com]
> > > Sent: Wednesday, July 16, 1997 1:36 AM
> > > To: gamedesign@mail.digiweb.com
> > > Subject: RE: Game AI
> > >
> > > At 11:47 15/07/97 -0700, you wrote:
> > > >Regarding the part below, I want to add a little calrity to my
> > > original
> > > >comment. At that time I was thinking more about a computer
> player
> > > >adapting to not take advantage of your weaknesses as much so that
> a
> > > >particular weakness won't get you killed all the time. To extend
> my
> > > >original Mavis Beacon metaphor, perhaps your trusted sage or
> military
> > > >advisor could "help" you realize your deficiencies and improve
> upon
> > > >them.
> > >
> > > I think that this is a superb idea. Many games have built in
> tutorial
> > > levels - Dungeon Keeper being the most recent example that I have
> > > spent far
> > > too much time on, but they are always (?) time or event driven
> > > snippets of
> > > info.
> > >
> > > An assistant that monitored your playing style and offered advice
> > > based on
> > > your own deficiencies - or in a network game based on your
> opponent's
> > > strengths - would be excellent. Having your lieutenant offer
> advice -
> > > "Sir, I've noticed a pattern to the enemy attacks..." after you
> got
> > > slaughtered several times in a row might well help keep the
> > > frustration
> > > factor at bay.
> > >
> > > At the moment I suspect that the processing overheads of such a
> system
> > > would outway the benefits though.
> > >
> > >
> > > | rick cronan | email: rick@polylang.com |
> > > | production manager | phone: +44 (0) 114 267 0017 |
> > > | cool beans productions ltd | fax: +44 (0) 114 268 7487 |
> > > | url:
http://www.polylang.com/polylang2/Coolbeans/home.htm |
From DPottinger@Ensemble-Studios.com Wed Jul 16 18:57:03 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id SAA08364; Wed, 16 Jul 1997 18:57:02 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id SAA10938; Wed, 16 Jul 1997 18:57:00 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id SAA18969
for woodcock@real3d.com; Wed, 16 Jul 1997 18:56:02 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 18:56:02 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-ID: <802B50C269DECF11B6A200A0242979EF33CFEA@consulting.ensemble.net>
From: David Pottinger
To: "'gamedesign@mail.digiweb.com'"
Subject: RE: Game AI
Date: Wed, 16 Jul 1997 17:56:55 -0500
X-Priority: 3
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.0.1457.3)
Content-Type: text/plain;
charset="iso-8859-1"
Resent-Message-ID: <"3Ipel.A.IhE.0DVzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/364
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 9039
Status: RO
Paul-
You've nailed AOE's use of the playbook pretty well; that's exactly what
the playbook does. It's essentially a handy way to store the data the
AIP uses to figure out where to move his units, when to attack, where to
feint, etc. That data had to go somewhere, so we put it in the
"Playbook".
FWIW, the playbook was the last military manuever model I did for AOE.
We've kept the other three for variety and level of difficulty sake,
though.
As far as extending the playbook's uses, there are a lot of options, I
think. Just some simple addition to the playbook to add new plays
learned from the HP (no, AOE doesn't do this) would be one cool step.
I've given a little thought to this for future projects; I expect you'd
have to do some variation of (I think this may be reiteration of Paul's
thoughts, but I'll just copy my notes anyway):
* tracking the units you watched (cheating to see the entire map
makes this part a lot easier, of course)
* tracking the types of units
* tracking the target(s) attacked by the units you're considering
to be in the same "play" by the opposing player
* tracking the positions of units along some timeline (at X, UnitA
was here, at X+1, UnitA moved 10 units to the left, etc.)
* integrating the positional and type info into whatever grouping
system your game/engine uses (so that you can have a play with 25 units
and not have to micro-man each one of them)
* determining the quality/successfulness of the play
* running some matching calc to see if you already have this play
or a close variant
If you do all of this and then dump the resulting play into the same
book as the rest, you should then be able to see your AIPs start running
the plays you run against them. The difficulties that would probably
come up are:
* How to determine what units are in the "play" you're watching
and which ones aren't?
* How to calc the real quality of the play when all you have to
measure it by is your AIP's potentially flawed response?
* How to come up with a solid rejection mechanism so you don't
keep adding new plays that are really just non-substantive variants of
others (i.e. a non-retreating, frontal assault is pretty much the same
with axemen as with lasermen, etc.)?
A question of mine: How much is too much? Is there a line of
usefullness that can be drawn between useful learning and overlearning
(in this playbook context)? If you have 10 basic plays, do you really
need anymore? Is it game specific?
dave
Dave C. Pottinger
Engine Lead and AI Guy
Ensemble Studios, Inc.
> -----Original Message-----
> From: Paul Nash [SMTP:t-pauln@microsoft.com]
> Sent: Wednesday, July 16, 1997 5:10 PM
> To: gamedesign@mail.digiweb.com
> Subject: RE: Game AI
>
> On one of the external Age of Empires pages (
>
http://www.microsoft.com/games/empires/behind.htm ), Dave Pottinger
> talks about the AIP's "playbook." Basically, (correct me where wrong
> Dave :) ) the computer knows how to do certain types of strategies
> that
> are common, including frontal assault. So, since AOE is a killer game
> (I'm addicted) we can assume that such an approach is workable. The
> question I pose then, is why not use a playbook for the AI's defense?
> An AIP could analyze the situation on the map and put it through it's
> "playbook ranking" function which would try to determine if what's
> going
> on in the playbook looks like anything it knows as a standard military
> tactic. If so, the AIP should adapt to defend against the threat. If
> it doesn't recognize the possible strategy (PC develops his own new
> tactic) then it should start "learning" by keeping track of
> information
> that will let it recognize the strategy next time. This could be
> generalized unit positions, types of attacks, etc. But also, the AIP
> would want to keep some info about what it did to defend against the
> attack and how successful that defense was and what the PC ended up
> doing (with the multiple units, for instance).
>
> If I'm not mistaken, this is basically an expert system with learning
> capabilities. These are just some ideas of mine, in the context of
> this
> thread and more specifically, a strategy game, which has been the
> implied topic of much of this thread. There are obviously some issues
> to
> deal with, like what's important for the AIP to watch and how is the
> AIP
> going to "recognize" situations. Hey -- aren't there projects to
> build
> medical diagnostics machines which do things like this? Perhaps there
> is
> some research material available in that area that could help us
> develop
> the next generation of killer AIP's!
>
> -Paul R. Nash, Multimedia Developer At Large
> Microsoft Multimedia Dev. Intern
http://www.uiuc.edu/ph/www/pr-nash/
>
> (As usual, the thoughts above are mine alone, and do not represent the
> views of Microsoft Corp.)
>
> > -----Original Message-----
> > From: Eric Dybsand [SMTP:edybs@ix.netcom.com]
> > Sent: Wednesday, July 16, 1997 1:03 PM
> > To: gamedesign@mail.digiweb.com
> > Subject: Re: Game AI
> >
> > PMFIJ, I have been enjoying reading this thread on getting an AIP
> > to figure out what the human player is doing, but I have a question
> > to pose to those folks in this discussion.
> >
> > How does an AIP figure out a feint by the player? In other words,
> > if a player is setting up an opponent with a diversionary attack
> > or maneuver, then how can the AIP differeniate those units that
> > are involved in the actual attack, and those units involved in
> > the feint or diversion, when both sets of units ultimately are
> > involved in attacking the AIP?
> >
> > The classical maneuver that comes to mind, is where a player has
> > a small group of units fake a frontal assault, to lure defenders
> > toward the "front" and then at the same time maneuvers a larger
> > group of units around one or more flanks, and upon successfully
> > engaging the defenders from the flanks, orders the faked frontal
> > assault to become a real assault.
> >
> > Thanks.
> >
> > Eric
> > ----
> > Eric Dybsand
http://pw2.netcom.com/~edybs
> > Glacier Edge Technology email: edybs@ix.netcom.com
> > Glendale, Colorado, USA
> >
> >
> > Paul Nash wrote:
> > >
> > > Not that you mention it, StarFox64 does that to a certain extent
> > both in
> > > its training mode and in its first few levels (tapering off as you
> > go
> > > further). However, that game -- while really fun -- is obviously
> > > scripted. The NPC players do the same things each time and you
> have
> > to
> > > respond correctly or they'll die. That part gets repetetive.
> > >
> > > I agree that processing overhead is a significant factor, but also
> > the
> > > ability to intelligently analyze the players moves as part of a
> > coherent
> > > scheme. The design of the game probably does a lot to determine
> how
> > easy
> > > it is for an AI to figure out what strategies the PC is using.
> > >
> > > -Paul R. Nash, Multimedia Developer At Large
> > > Microsoft Multimedia Dev. Intern
>
http://www.uiuc.edu/ph/www/pr-nash/
> > >
> > > > -----Original Message-----
> > > > From: rick cronan [SMTP:rick@polylang.com]
> > > > Sent: Wednesday, July 16, 1997 1:36 AM
> > > > To: gamedesign@mail.digiweb.com
> > > > Subject: RE: Game AI
> > > >
> > > > At 11:47 15/07/97 -0700, you wrote:
> > > > >Regarding the part below, I want to add a little calrity to my
> > > > original
> > > > >comment. At that time I was thinking more about a computer
> > player
> > > > >adapting to not take advantage of your weaknesses as much so
> that
> > a
> > > > >particular weakness won't get you killed all the time. To
> extend
> > my
> > > > >original Mavis Beacon metaphor, perhaps your trusted sage or
> > military
> > > > >advisor could "help" you realize your deficiencies and improve
> > upon
> > > > >them.
> > > >
> > > > I think that this is a superb idea. Many games have built in
> > tutorial
> > > > levels - Dungeon Keeper being the most recent example that I
> have
> > > > spent far
> > > > too much time on, but they are always (?) time or event driven
> > > > snippets of
> > > > info.
> > > >
> > > > An assistant that monitored your playing style and offered
> advice
> > > > based on
> > > > your own deficiencies - or in a network game based on your
> > opponent's
> > > > strengths - would be excellent. Having your lieutenant offer
> > advice -
> > > > "Sir, I've noticed a pattern to the enemy attacks..." after you
> > got
> > > > slaughtered several times in a row might well help keep the
> > > > frustration
> > > > factor at bay.
> > > >
> > > > At the moment I suspect that the processing overheads of such a
> > system
> > > > would outway the benefits though.
> > > >
> > > >
> > > > | rick cronan | email: rick@polylang.com
> |
> > > > | production manager | phone: +44 (0) 114 267 0017
> |
> > > > | cool beans productions ltd | fax: +44 (0) 114 268 7487
> |
> > > > | url:
http://www.polylang.com/polylang2/Coolbeans/home.htm
> |
From Swoodcoc@concentric.net Wed Jul 16 18:59:01 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id SAA08418; Wed, 16 Jul 1997 18:59:00 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id SAA10950; Wed, 16 Jul 1997 18:58:57 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id SAA19470
for woodcock@real3d.com; Wed, 16 Jul 1997 18:58:00 -0400 (EDT)
Resent-Date: Wed, 16 Jul 1997 18:58:00 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
From: Swoodcoc@concentric.net
Message-Id: <199707162257.SAA28554@viking.cris.com>
Subject: Re: Game AI and Feinting Attacks
To: gamedesign@mail.digiweb.com
Date: Wed, 16 Jul 1997 18:57:18 -0400 (EDT)
X-Mailer: ELM [version 2.4 PL25]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"ruuOx.A.8pE.cGVzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/365
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 3130
Status: RO
> From: Eric Dybsand
>
> How does an AIP figure out a feint by the player? In other words,
> if a player is setting up an opponent with a diversionary attack
> or maneuver, then how can the AIP differeniate those units that
> are involved in the actual attack, and those units involved in
> the feint or diversion, when both sets of units ultimately are
> involved in attacking the AIP?
>
> The classical maneuver that comes to mind, is where a player has
> a small group of units fake a frontal assault, to lure defenders
> toward the "front" and then at the same time maneuvers a larger
> group of units around one or more flanks, and upon successfully
> engaging the defenders from the flanks, orders the faked frontal
> assault to become a real assault.
Good question.
The only thing I can think of off the top of my head is some form
of estimation of forces (what we used to call "counter counting" when
I played board games). That is, the AI should have some general idea of
the type and quantity of units the player ought to have. If we're playing
a Battle of the Bulge game, for example, I know that the Germans have somewhere
between 3 and 8 tank divisions. If playing C&C, I know that a player could
produce, say, 50 infantrymen in roughly a half-hour. If playing Carriers
at War, I know that a carrier *should* have 8 flights of bombers and I'm
only seeing 3. Etc.
Then, should an attack come that looks to be significantly *less* than
these forces, I could as the AI get a bit suspicious. If the player is
attacking me with only 20% of the forces I've estimated he ought to have,
then maybe I should respond with fewer of my mobile reinforcements and step
our recon in areas I'd assumed to be "quiet".
Drawbacks to this approach are numerous and I'm only tossing the idea in
off the top of my head. It's pretty tricky to guestimate how many units
and the mix of units that a player "ought" to have developed by a given point
in a game, particularly in a game like Warcraft II or Enemy Nations where
the player builds forces in place. (It would be somewhat easier in a game
recreating the Battle of the Bulge, I would imagine.)
I'm curious to hear what other methods folks might come up with....
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Hired Gun, Gameware & AI ____/ \___/ |
| Wyrd Wyrks Consulting <____/\_---\_\ "Ferretman" |
| Phone: 719-392-4746 |
| E-mail: swoodcoc@concentric.net |
| Web:
http://www.concentric.net/~swoodcoc/ai.html (Dedicated to Game AI) |
| Disclaimer: Yeah, I work for Lockheed-Martin Real3D....you think |
| anybody there ever listens to *my* opinion? Get *serious*. |
+=============================================================================+
From rick@polylang.com Thu Jul 17 05:38:12 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id FAA24426; Thu, 17 Jul 1997 05:38:12 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id FAA12343; Thu, 17 Jul 1997 05:38:11 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id FAA11232
for woodcock@real3d.com; Thu, 17 Jul 1997 05:37:13 -0400 (EDT)
Resent-Date: Thu, 17 Jul 1997 05:37:13 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <3.0.32.19970717095840.006d96f4@MAILHOST>
X-Sender: rick@MAILHOST
X-Mailer: Windows Eudora Pro Version 3.0 (32)
Date: Thu, 17 Jul 1997 10:31:08 +0100
To: gamedesign@mail.digiweb.com
From: rick cronan
Subject: Re: Game AI
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
X-Info: Messages Limited to 1 Megabyte due to Technical Problems
Resent-Message-ID: <"4NIutC.A.ImC.Aeezz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/369
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 986
Status: RO
At 11:12 16/07/97 -0400, Steven Woodcock wrote:
> Having my soldiers be smart enough to run away from flamethrowers would
>be a great improvement over C&C's AI. Mind you, adaptive ALife technology
>is NOT needed for that, and could in fact lead to complications--if I
>ORDER the solider to suicide I sorta expect him to do it. That might not
>be a realistic outcome for the "real world", but for a "game"....the pawn
>doesn't get a vote. I could see some very frustrated players if units
>in a C&C game absolutely refuse to obey orders.....
Unless morale / loyalty rules were part of the game. Then you'd value your
insanely loyal but somewhat weak units, knowing that you could use them as
sacrificial pieces.
| rick cronan | email: rick@polylang.com |
| production manager | phone: +44 (0) 114 267 0017 |
| cool beans productions ltd | fax: +44 (0) 114 268 7487 |
| url:
http://www.polylang.com/polylang2/Coolbeans/home.htm |
From Swoodcoc@concentric.net Thu Jul 17 11:11:20 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id LAA05707; Thu, 17 Jul 1997 11:11:20 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id LAA13321; Thu, 17 Jul 1997 11:11:16 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id LAA11203
for woodcock@real3d.com; Thu, 17 Jul 1997 11:10:16 -0400 (EDT)
Resent-Date: Thu, 17 Jul 1997 11:10:16 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
From: Swoodcoc@concentric.net
Message-Id: <199707171509.LAA26760@mariner.cris.com>
Subject: Re: Game AI
To: gamedesign@mail.digiweb.com
Date: Thu, 17 Jul 1997 11:09:06 -0400 (EDT)
In-Reply-To: <802B50C269DECF11B6A200A0242979EF33CFEA@consulting.ensemble.net> from "David Pottinger" at Jul 16, 97 05:56:55 pm
X-Mailer: ELM [version 2.4 PL25]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"_EoPhB.A.RoC.rVjzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/372
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 3539
Status: RO
> You've nailed AOE's use of the playbook pretty well; that's exactly what
> the playbook does. It's essentially a handy way to store the data the
> AIP uses to figure out where to move his units, when to attack, where to
> feint, etc. That data had to go somewhere, so we put it in the
> "Playbook".
It sounds very interesting. I've long wondered if a good Napoleonic
game couldn't be done with a variation of this "playbook" approach,
considering how doctrine played such a large part in the conflicts of
the day.
> FWIW, the playbook was the last military manuever model I did for AOE.
> We've kept the other three for variety and level of difficulty sake,
> though.
Good idea.
>
> (suggested enhancements deleted)
>
These would all improve overall gameplay IMO.
> If you do all of this and then dump the resulting play into the same
> book as the rest, you should then be able to see your AIPs start running
> the plays you run against them. The difficulties that would probably
> come up are:
> * How to determine what units are in the "play" you're watching
> and which ones aren't?
Hmmm....maybe use some variation of influence mapping? Anything not
within a zone of a certain value is considered to be "not in play" for
the purposes of considering a particular attack.
> * How to calc the real quality of the play when all you have to
> measure it by is your AIP's potentially flawed response?
Depends on the game I'd think. In some games you can measure
hit points lost vs. hit points taken from the enemy; in others it's
an "all or nothing" combat and you simply measure # of units surviving
vs. # killed. You'd probably want to toss unit quality in there, though,
so an infantry unit isn't counted the same as a battleship.
> * How to come up with a solid rejection mechanism so you don't
> keep adding new plays that are really just non-substantive variants of
> others (i.e. a non-retreating, frontal assault is pretty much the same
> with axemen as with lasermen, etc.)?
Hmmmm...not sure about this one.
> A question of mine: How much is too much? Is there a line of
> usefullness that can be drawn between useful learning and overlearning
> (in this playbook context)? If you have 10 basic plays, do you really
> need anymore? Is it game specific?
An excellent point I was hoping somebody would make. There is
surely a point at which there's no point in adding more AI (or more
variations of AI)....if I already have a playbook of, say, 15
different opening strategies, I would think that that allows for a LOT
of replay value before the user begins to really notice anything
predictable.
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Hired Gun, Gameware & AI ____/ \___/ |
| Wyrd Wyrks Consulting <____/\_---\_\ "Ferretman" |
| Phone: 719-392-4746 |
| E-mail: swoodcoc@concentric.net |
| Web:
http://www.concentric.net/~swoodcoc/ai.html (Dedicated to Game AI) |
| Disclaimer: Yeah, I work for Lockheed-Martin Real3D....you think |
| anybody there ever listens to *my* opinion? Get *serious*. |
+=============================================================================+
From Swoodcoc@concentric.net Thu Jul 17 11:09:50 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id LAA05639; Thu, 17 Jul 1997 11:09:49 -0400
From: Swoodcoc@concentric.net
Received: from darius.concentric.net by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id LAA13305; Thu, 17 Jul 1997 11:09:47 -0400
Received: from mariner.cris.com (mariner [206.173.119.83])
by darius.concentric.net (8.8.5/(97/05/21 3.30))
id LAA09337; Thu, 17 Jul 1997 11:09:45 -0400 (EDT)
[1-800-745-2747 The Concentric Network]
Errors-To:
Received: by mariner.cris.com (8.8.5) id LAA26777; Thu, 17 Jul 1997 11:09:44 -0400 (EDT)
Message-Id: <199707171509.LAA26777@mariner.cris.com>
Subject: Re: Game AI (fwd)
To: woodcock@real3d.com (Steve Woodcock)
Date: Thu, 17 Jul 1997 11:09:44 -0400 (EDT)
X-Mailer: ELM [version 2.4 PL25]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 4472
Status: RO
Forwarded message:
From gamedesign-request@mail.digiweb.com Thu Jul 17 04:45:02 1997
>Return-Path:
Errors-To:
Resent-Date: Thu, 17 Jul 1997 04:43:31 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
Message-Id: <199707170842.KAA18749@hq.seicom.net>
From: "Robert Blum"
To:
Subject: Re: Game AI
Date: Thu, 17 Jul 1997 10:41:29 +0200
X-MSMail-Priority: Normal
X-Priority: 3
X-Mailer: Microsoft Internet Mail 4.70.1157
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"hSvN_B.A.6MG.Hpdzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/366
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
From : woodcock@real3d.com
[DungeonKeeper talk deleted]
> Yeah, the AI is a bit weak at times. Word on the Net from the
developers is
> that they tried to "tune" the AI for each specific mission. This has in
> turn yielded massive criticism from players who find it smart on one
level
> and dumb as a brick in the next. Which leads naturally to my
question....
>
> Do we *really* want to build AIs that try to match the experience
level
> of the players? After all, that's what the DK designers did...and
they've
> caught nothing but flack for it.
Well, the problem is the developers didn't implement an adapting AI. If I
have to tune my AI for every level, I'm heading the wrong way (IMHO)
I want to decouple programming and level design as much as possible. After
all, that's the thing that made Doom/Duke/Quake so succesfull. The players
could build their own worlds without worrying about any AI or other
programming stuff.
Of course, Duke/Quake allowed programming, but the merits/disadvantages
have already been discussed in the scripting thread.
> I submit that, perhaps, we don't have
> to worry about an AI "so good it overwhelms the player" (as somebody else
> has suggested)...we have to worry about building an AI good enough to
give
> a good fight, period.
Definitely. The only thing being the AI has at least a bit to adapt to the
player.
[snip]
> Having my soldiers be smart enough to run away from flamethrowers
would
> be a great improvement over C&C's AI. Mind you, adaptive ALife
technology
> is NOT needed for that, and could in fact lead to complications--if I
> ORDER the solider to suicide I sorta expect him to do it. That might not
> be a realistic outcome for the "real world", but for a "game"....the pawn
> doesn't get a vote. I could see some very frustrated players if units
> in a C&C game absolutely refuse to obey orders.....
Hmm... That depends. If I want a realistic game, my soldiers must be able
to refuse to obey completely silly orders. I'd say it depends on the level
of control you're working at.
If the order comes from your sergeant, pointing a gun at your head, you
obey. If it's something coming from the HQ, some units might do something
wrong. But this shouldn't break the game. As soon as I stop micro-managing,
I don't care what a single unit does, if the whole thing goes as planned.
You'd probably have some kind of rangers/marines for the suicide missions
anyway. Nobody expects a supply officer to operate behind the enemy lines
:)
Bye,
Robert Blum
(Lead Programmer at Rauser ADVERTAINMENT(tm) GmbH)
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Hired Gun, Gameware & AI ____/ \___/ |
| Wyrd Wyrks Consulting <____/\_---\_\ "Ferretman" |
| Phone: 719-392-4746 |
| E-mail: swoodcoc@concentric.net |
| Web:
http://www.concentric.net/~swoodcoc/ai.html (Dedicated to Game AI) |
| Disclaimer: Yeah, I work for Lockheed-Martin Real3D....you think |
| anybody there ever listens to *my* opinion? Get *serious*. |
+=============================================================================+
From woodcock@real3d.com Thu Jul 17 11:24:12 1997
Return-Path:
Received: from mailrelay.real3d.com by real3d.com (SMI-8.6/SMI-SVR4)
id LAA06207; Thu, 17 Jul 1997 11:24:11 -0400
Received: from mail.digiweb.com by mailrelay.real3d.com (SMI-8.6/SMI-SVR4)
id LAA13361; Thu, 17 Jul 1997 11:24:10 -0400
Received: (from condor@localhost)
by mail.digiweb.com (8.8.5/8.8.5) id LAA13995
for woodcock@real3d.com; Thu, 17 Jul 1997 11:23:11 -0400 (EDT)
Resent-Date: Thu, 17 Jul 1997 11:23:11 -0400 (EDT)
X-Authentication-Warning: mail.digiweb.com: condor set sender to gamedesign-request@digiweb.com using -f
From: woodcock@real3d.com
Message-Id: <9707171522.AA06348@stargazer.real3d.com>
Subject: Re: Game AI (fwd)
To: gamedesign@mail.digiweb.com
Date: Thu, 17 Jul 1997 11:22:46 -0400 (EDT)
In-Reply-To: <199707171509.LAA26777@mariner.cris.com> from "Swoodcoc@concentric.net" at Jul 17, 97 11:09:44 am
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Resent-Message-ID: <"YIgTS.A.SXD.Wijzz"@mail>
Resent-From: gamedesign@mail.digiweb.com
Reply-To: gamedesign@mail.digiweb.com
X-Mailing-List: archive/latest/373
X-Loop: gamedesign@digiweb.com
Precedence: list
Resent-Sender: gamedesign-request@mail.digiweb.com
Content-Length: 3630
Status: RO
> From : woodcock@real3d.com
> [DungeonKeeper talk deleted]
> > Yeah, the AI is a bit weak at times. Word on the Net from the
> > developers is
> > that they tried to "tune" the AI for each specific mission. This has in
> > turn yielded massive criticism from players who find it smart on one
> > level and dumb as a brick in the next. Which leads naturally to my
> > question....
> >
> > Do we *really* want to build AIs that try to match the experience
> > level of the players? After all, that's what the DK designers did...and
> > they've caught nothing but flack for it.
>
> Well, the problem is the developers didn't implement an adapting AI. If I
> have to tune my AI for every level, I'm heading the wrong way (IMHO)
> I want to decouple programming and level design as much as possible. After
> all, that's the thing that made Doom/Duke/Quake so succesfull. The players
> could build their own worlds without worrying about any AI or other
> programming stuff.
I agree, which is kinda the point I was trying to make. I don't think
an AI should be "tuned" to various levels or missions without a specifically
good reason (such as a series of training missions). My approach has always
been to build ONE AI with areas or subcomponents that could be switched on
or off as needed to reflect overall game difficulty. I don't think players
necessarily WANT an AI that plays well on one level and poorly on the next;
it's jarring and inconsistent.
> > I submit that, perhaps, we don't have
> > to worry about an AI "so good it overwhelms the player" (as somebody else
> > has suggested)...we have to worry about building an AI good enough to
> > give a good fight, period.
>
> Definitely. The only thing being the AI has at least a bit to adapt to the
> player.
Agreed. Adaptive is good. I just don't think we realistically have
to worry about building an AI "so good no human can beat it". Unless you're
talking about a situation in which reaction times are the overriding factor
(such as Quake), the present level of AI technology tells me that we need
to pull out all the stops just to make the AI a solid opponent.
Mind you, I'd love to *have* the problem of an AI "so good no human can
beat it"...but we've got to get there first.
> Hmm... That depends. If I want a realistic game, my soldiers must be able
> to refuse to obey completely silly orders. I'd say it depends on the level
> of control you're working at.
Agreed, but on the other hand the player *is* the guy paying for the
experience. To push the point to an extreme, if I buy a game that refuses
to play unless it "feels" like it, what was the point?
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: woodcock@real3d.com |
| Web:
http://www.cris.com/~swoodcoc/ai.html (Games AI page) |
|
http://www.cris.com/~swoodcoc/steve.html (Steve Stuff) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| Lockheed Martin Real3D--get *serious* |
+=============================================================================+
--
J C Lawrence Internet: claw@null.net
(Contractor) Internet: coder@ibm.net
---------(*) Internet: claw@under.engr.sgi.com
...Honourary Member of Clan McFud -- Teamer's Avenging Monolith...