
suggestion re: AI


BDW


OK, this is probably asking for WAY too much, but WTF:

One of the cool things about playing other humans is their personality and mood.

I was wondering if it would be possible, or if BTS has thought about the idea, of having the AI's "personality" change from game to game: "aggressive", "timid", "reckless", etc. Or what would be even cooler is if you were going up against a particular commander with a distinct personality and you could read about him in the briefing. (Even if they are fictional it would be cool and add depth to the game.)


Re: AI

I have an idea that has been popping around in my head for a while. I doubt that it could be implemented, but if I don't bring it up, it never will be.

What I would like is the schematic for the parameters (what to do in certain situations), a way of changing those parameters, a method of determining the outcome written to a file, and a setting to let the game play itself.

The basic idea is to let a genetic algorithm modify the parameters and optimize the tactics. This would require playing several thousand CM games.
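A minimal sketch of that self-play loop in Python, assuming a hypothetical play_game() hook that runs one AI-vs-AI game and reports the winner. No such hook exists in CM; the function names, the "aggression" parameter, and the stand-in win logic are all made up for illustration:

```python
import random

# Hypothetical hook: plays one AI-vs-AI game with two parameter sets
# and returns 0 or 1 for the winner. This stand-in just makes higher
# "aggression" win more often, so the loop has something to optimize.
def play_game(params_a, params_b):
    total = params_a["aggression"] + params_b["aggression"]
    return 0 if random.random() < params_a["aggression"] / total else 1

def random_params():
    return {"aggression": random.uniform(0.1, 1.0)}

def self_play_tournament(population, games_per_entry=2):
    """Score each parameter set by wins against random opponents."""
    wins = [0] * len(population)
    for i, params in enumerate(population):
        for _ in range(games_per_entry):
            j = random.randrange(len(population))
            if j == i:
                continue
            if play_game(params, population[j]) == 0:
                wins[i] += 1
    return wins

population = [random_params() for _ in range(20)]
wins = self_play_tournament(population)
best = population[max(range(len(population)), key=wins.__getitem__)]
```

The tournament scores would then feed the GA's selection step: the best-scoring parameter sets become parents for the next generation.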


Guest Mikeman

Pford,

I don't fully understand what you're talking about, but it sounds fascinating. Would it be possible for you to simplify the post above? I'd like to understand more fully what you are getting at.

Mikeman out.


Mikeman; Pford is talking about a learning AI. They are commonplace in chess and some other types of games. The vapourware that is 'Road To Moscow' was purported to have a learning AI.

Roughly speaking, such a system would optimize itself to whatever player it was 'mostly' playing against.

Tom


Guest KwazyDog

I *think* pford is referring to something like a learning version of what is used in Fleet Command, if you are familiar with that game. What it has is a very simple programming or script language whereby you can adjust how the computer reacts to certain threats. Here is an example from Fleet Command (which is a pretty cool game for those interested in that sort of thing). It is what the computer is instructed to do when it engages an incoming threat...

TITLE CIWS Self-defense

RULE Watch incoming threats
IF CLASS = MISSILE AND ( ID = HOSTILE OR ID = UAE OR ID = UPD OR ID = UEV ) THEN

    RULE Throw CM if applicable
    IF TIMER1 = -1 THEN
        COUNTERMEASURE Chaff
        SET_TIMER 1 {100,120}
    END

    SETENTMODE

    RULE Default Attack Air
    IF RNG > 3000 AND RNG < ATTACKRNG THEN
        ATTACK_BEST
    END

    RULE CannonAttack
    IF RNG < 3000 THEN
        PRIORITY 250
        ATTACK DefenseBullet
    END

    RULE CannonAttack
    IF SOURCE = "Visual" THEN
        PRIORITY 255
        ATTACK DefenseBullet
    END

END

If you read through it, it should make some sort of sense: if an enemy threat is detected and it is within certain ranges, respond in this way, etc. I think the Fleet Command guys did well in this respect, though I was a little disappointed with FC in other ways (which have been rectified in a patch, btw).
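For illustration, the priority/condition/action pattern in that script can be mimicked in a few lines of Python. The field names, priorities, and actions below are invented to echo the example, not Fleet Command's actual engine:

```python
# Toy rule evaluator in the spirit of the script above: each rule is a
# (priority, condition, action) triple, and the highest-priority rule
# whose condition matches the contact fires. All names are invented.
def respond(contact):
    rules = [
        (100, lambda c: c["cls"] == "MISSILE" and c["rng"] > 3000,
         "ATTACK_BEST"),
        (250, lambda c: c["cls"] == "MISSILE" and c["rng"] < 3000,
         "ATTACK DefenseBullet"),
        (255, lambda c: c["source"] == "Visual",
         "ATTACK DefenseBullet"),
    ]
    matches = [(prio, action) for prio, cond, action in rules
               if cond(contact)]
    return max(matches)[1] if matches else "IGNORE"

respond({"cls": "MISSILE", "rng": 1200, "source": "Radar"})
# → "ATTACK DefenseBullet" (the priority-250 cannon rule)
```

Exposing a table like this to users is exactly the kind of "open scripting" being discussed: easy to read, easy to tweak, but entirely deterministic.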

Anyway, good idea pford, but I'd guess that CM doesn't operate in quite the same way, due to its 'fuzzy logic' nature.

Hmm, hehe, after rereading, I could be wrong as to what pford was getting at?

[This message has been edited by KwazyDog (edited 11-25-99).]


For the fuzzy logic, you *could* open up some of the basic parameters for user modification.

The downside is that with advanced AIs, it is virtually impossible to say that a simple tweak of X will result in a direct correlation of Y happening.

A tweak of X will alter the probability of Y, but to make it certain you have to adopt a 100% probability of Y given X. Then you have a simple rule-based algorithm, which will get boring.
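A tiny Python sketch of that distinction, with invented response names and weights: the tweakable parameter shifts the odds of each response, and only at 100% does it collapse into the deterministic rule described above:

```python
import random

# A tweakable "aggression" weight shifts the odds of each response
# rather than guaranteeing one; only at 1.0 does it collapse into a
# deterministic, predictable rule. Response names and the 0.6/0.4
# split are invented for illustration.
def choose_response(aggression):
    weights = {"assault": aggression,
               "probe": (1 - aggression) * 0.6,
               "withdraw": (1 - aggression) * 0.4}
    actions = list(weights)
    return random.choices(actions,
                          weights=[weights[a] for a in actions])[0]

choose_response(1.0)   # always "assault"
choose_response(0.3)   # usually "probe" or "withdraw", sometimes "assault"
```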

It actually takes quite a bit of fine tuning, and a HUGE AI matrix/tree, to make an AI reasonably approximate a person, and that is to approximate a single person.

Given a bit of history in the area, I can say that oftentimes programmers working on advanced AIs are left wondering why a certain tweak of X results in Y behavior, because of all the interrelationships that occur in a complex AI. In a big AI there are no simple paths, unless you resort to simple rules with probabilities attached.

Dynamic probabilities are really cool, as they can learn how you play and adjust accordingly. To beat them, you need to either a) have an infallible plan (yeah right, LOL), or b) constantly use different approaches to keep the AI guessing what the appropriate response should be.
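A toy sketch of such dynamic probabilities in Python. The moves, counters, and update factors are all invented: the AI tracks a weight per (player move, counter) pair, picks counters in proportion to weight, and reinforces whatever actually worked:

```python
import random
from collections import defaultdict

# Toy "learning" opponent model (all names invented): counters that
# beat the player's observed move get reinforced, failures decay.
class AdaptiveAI:
    def __init__(self, counters):
        self.counters = counters
        self.weights = defaultdict(lambda: {c: 1.0 for c in counters})

    def respond(self, player_move):
        w = self.weights[player_move]
        return random.choices(list(w), weights=list(w.values()))[0]

    def feedback(self, player_move, counter, won):
        # Reinforce counters that worked against this move; decay failures.
        self.weights[player_move][counter] *= 1.5 if won else 0.7

ai = AdaptiveAI(["flank", "dig_in", "smoke_and_rush"])
for _ in range(50):
    counter = ai.respond("armored_push")
    ai.feedback("armored_push", counter, won=(counter == "flank"))
# over time, "flank" tends to dominate responses to "armored_push"
```

The "constantly use different approaches" advice above is exactly the counter to this scheme: if the player keeps switching moves, no single weight table ever settles.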

Maybe for CM2 we can see a learning AI, but the sheer number of variables involved would be staggering. I'm happy with an AI that can a) beat me 50/50 on the first round of a scenario when I don't read the briefing, b) beat me more often when I make a small mistake, and c) punish me when I make a big mistake.

My 2 cents.


Guest Big Time Software

BDW, it does have different "personalities" to some degree. Basically it can swing from timid to aggressive if the situation is right. I have seen this myself, actually. But the same problems that are preventing massed offensive action are blunting this behavior right now.

Herr Oberst said:

"The downside is that with advanced AIs, it is virtually impossible to say that a simple tweak of X will result in a direct correlation of Y happening."

Bingo! Much easier said than done. All I can say is that the AI will steadily improve with each new version of CM. But for now, besides the fixes mentioned, we aren't doing anything more. AI is a time sink and also a testing nightmare. Minor changes can have profound, and very often negative, ramifications. Now is not the time to be mucking about with major stuff like this.

Steve


There are several classes of AI programming. A genetic algorithm (GA) is a method of doing a search, similar in concept to simulated annealing. Say you have a search space, for example hilly terrain in CM with dense fog, and you want your unit to get on top of the highest hill. The classic method is to walk uphill until you can't go any higher (gradient ascent, e.g. a Newton-Raphson-style method). However, this may just be a local solution, i.e. there is a higher hill but you can't see it. A GA is very good in situations like this at getting you to the highest hill, though not necessarily on top. It works well if, for example, you have two solutions, A and B, and you can tell whether A > B, A = B, or A < B, but not by how much (e.g. A = 2.5B).

This comparison is called the fitness. After a population is ranked in order of its fitness, the genetic strings are mated by crossing over data. Then some of the string is mutated and the process is repeated. This iterative method is slow but fairly robust. In a game like CM it is unlikely to have any advantage within a single game, since the major goal is user entertainment and bad solutions could screw things up, like running Shermans into an open field and making a circle where they all point inward.
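A toy GA in Python over the hilly-terrain search described above, purely illustrative: a greedy climber starting from the left gets stuck on a small hill, while ranking by fitness, single-point crossover, and per-bit mutation let a population search the whole landscape. All the numbers (peak positions, mutation rate, population size) are invented:

```python
import random

# Terrain with a small peak (height 3) at x=2 and a taller peak
# (height 7) at x=8; fitness is simply the height.
def height(x):
    return max(3 - abs(x - 2), 7 - abs(x - 8), 0)

def hill_climb(x, step=0.1):
    # Classic "walk uphill until no neighbour is higher".
    while True:
        best = max((x - step, x, x + step), key=height)
        if best == x:
            return x
        x = best

def decode(bits):
    # 10-bit genetic string -> position in [0, 10).
    return int(bits, 2) / 1024 * 10

def next_generation(pop):
    ranked = sorted(pop, key=lambda s: height(decode(s)), reverse=True)
    parents = ranked[:len(pop) // 2]          # fittest half breed
    children = []
    for _ in range(len(pop)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))     # single-point crossover
        child = a[:cut] + b[cut:]
        child = "".join(c if random.random() > 0.05 else "10"[int(c)]
                        for c in child)       # ~5% per-bit mutation
        children.append(child)
    return children

stuck = hill_climb(0.0)        # stops on the small hill near x=2
pop = ["".join(random.choice("01") for _ in range(10)) for _ in range(30)]
for _ in range(40):
    pop = next_generation(pop)
ga_best = max((decode(s) for s in pop), key=height)
```

The climber's answer can never improve past the local peak it started under; the population search is slower per step but is not trapped the same way.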

In terms of working with fuzzy logic, it should work. The GA could move the cut-off points and adjust the overlaps. The GA wouldn't know what it is doing; it is just choosing the best parents for the next generation. The GA string could be binary 1s and 0s. Sections of the string code for different parameters. The string is then decoded and each parameter is placed into its proper location.
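A sketch of that decoding step, with invented parameter names and ranges: each 8-bit section of the string maps to one parameter, scaled into its allowed range:

```python
# Decode a GA bit string into named parameters: each 8-bit slice maps
# to one parameter scaled into its range. Parameter names and ranges
# are invented for illustration, not actual CM internals.
PARAMS = [("aggression", 0.0, 1.0),
          ("engage_range", 100.0, 2000.0),
          ("fallback_threshold", 0.0, 0.5)]

def decode(bits):
    assert len(bits) == 8 * len(PARAMS)
    values = {}
    for i, (name, lo, hi) in enumerate(PARAMS):
        raw = int(bits[8 * i:8 * i + 8], 2)      # 0..255
        values[name] = lo + (hi - lo) * raw / 255
    return values

decode("11111111" + "00000000" + "10000000")
# → aggression 1.0, engage_range 100.0, fallback_threshold ≈ 0.251
```

Crossover and mutation then operate blindly on the raw bits; only the decoder knows which slice means what.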

CM parameters could be "tuned" by letting the computer(s) play set situations for a given population, choosing the best outcomes to create the next generation, and repeating. For speed, graphics could be disabled. On a larger scale, a SETI-type project where populations are sent to remote computers could be used. Generations don't necessarily have to be in sync. This is obviously a pipe dream, since it would be quite an undertaking. The US military might be interested in doing something like this, but they already won WWII.

Re: learning on the fly

While a GA is useful for changing situations (say, if the hills keep moving) and, in the real world, for managing queues, it does this by testing a bunch of solutions. With a really fast system, the computer could test out several tactics. To do it well in real time, I think it would need to be ported to Deep Blue.

Note: when I read this online, there is a major section missing, but it is here in edit mode. Are the greater-than and less-than signs assigned special code?

I added spaces around the < and > and it looks clean now.

[This message has been edited by pford (edited 11-26-99).]



"Note: when I read this online, there is a major section missing, but it is here in edit mode. Are the greater-than and less-than signs assigned special code?"

The greater and less than signs are used for HTML code.

<u><font size="12" face="Helvetica">To do stuff like this</font></u>

The code looks like this: (but without the spaces)

< b >< i >< u >< font size="12" face="Helvetica" >To do stuff like this< /font >< /u >< /i >< /b >

Jason

[This message has been edited by guachi (edited 11-26-99).]

