
My opinion of what the AI is good for...



Originally posted by Battlefront.com:

To you, the gamer, this might not appear to be the case. But that is completely irrelevant (and beyond Jason's ability to understand, apparently) since reality does not bend to a consumer's incorrect assessment of how coding works.

Steve

Thank you for your reply. Actually you missed my point a bit: I was not trying to say that changing the AI is as easy as modifying MG modelling; I understand it's much more complicated. The point I was trying to make about the MG modelling was that no one was willing to complain about it in CMBO, and the few who did were heckled and abused for doing so. That is the comparison I'm making: if anyone tries to complain about the AI they run up against a brick wall. I think it is counter-productive to stick up for Battlefront so devotedly when the only thing people like me are trying to do is point out the weaknesses of their products. Case in point: Oddball, your comment "SHUT IT!" - do you realise how insulting that is? Please remember you are addressing a group of adults, not children, and we have a right to express our opinions. The only people who control this forum are the moderators, and I believe that you are NOT one of them. If you are so upset about the way this discussion is heading, then please post a new thread to put it back to your liking. It's not difficult to do; just click the "new topic" button.

Anyway, Battlefront, I accept your stance that you have to weigh up the costs vs. benefits of investing time in this. But I would suggest that many of your customers are already veterans of CMBO and CMBB, and therefore if you just let the AI go by the wayside and don't invest time to diversify and improve it, you are going to get more dissatisfied customers, because as it is now the AI can only really challenge players who are new to the game. Someone in fact said that the AI SHOULD not be made perfect - well, why not? The great thing about Combat Mission is that there is no perfect strategy, there is no perfect force purchase, and even if you do have a great strategy it can all fall apart in the randomness of war. The CM manual itself says that a lot of what happens relies on luck more than anything else. I am excited by what Battlefront was talking about with the idea of aggression settings etc. I don't expect the AI to play like a human opponent; I just want an opponent that gives me a challenge and does at worst a minimum of foolish things.

[ October 29, 2003, 04:15 AM: Message edited by: Haohmaru ]


"Take 'staying in command' for example. In order for this to work as you envision we would first have to make the AI keep units in perfect formation (i.e. in command) under 'combat' conditions and any other conditions for that matter. THAT is the tough part."

Um, the AI doesn't start in command. It doesn't deploy in command at set up. Combat conditions have little to do with it. "Perfect" is laughable. It is a lucky AI platoon that starts with all squads in command radius. It is a lucky AI weapons team that starts with any HQ within 100m. It is a lucky AI gun that starts with LOS beyond midfield.

Meanwhile, you get 9 HQs in a single body of woods. You get 4 HQs on the same tile. Battalion HQs with a tank hunter and 2 light mortars lead assaults. In a meeting engagement typically 2/3rds of the force starts within 30m of the edge of the set up zone, in the open if that is what is there, and regardless of ground height.

Pretending that asking for improvement of this level stuff is asking for perfection or the equal of human thinking is comical.

The AI obviously pays some attention to cover, because it starts the majority of its men in cover - sometimes 1/4 to 1/3 are left without it, but most are covered even when cover is scarce, and nearly all when it is abundant. It obviously pays some attention to command because it sometimes puts platoons in the same spot. It obviously pays some attention to spread along the line because it puts a platoon or so on a flank while concentrating a third to half its force in a main effort. Since there are several of these things, however, it typically misdeploys up to half its force, because each of them is only a matter of "most".

But it obviously pays no attention to keeping HQs spread because it will put 4 of them on the same tile. It obviously pays no attention to what use to make of its company HQs because it will put 2 of them on the left board edge one behind the other with a single squad each. Not in the heat of combat. At set up. Shall I go on?

Sometimes the AI can't be improved because that would be a Deep Blue level effort, but oh, it is going to be improved, but it'd be great if you had 6 months just to improve the AI, or more recently 2 months maybe, but improving the AI would wreck the company, and it isn't a good use of time, and it'd take ten years and millions, and lots of other people improving it wouldn't help because everything has already been tweaked to perfection. So sometimes defensive people are silly. It could have been guessed. The only thing that matters on the subject is what will improve the product. Spin won't.

Here is what you need instead. Some top down decision trees and internal planning states (flag variables, routine weights, bias thresholds, yada yada) that have actual tactical content to them. As in, first count companies. Decide side by side or one behind another. If you flip a coin that is fine, but decide. (Formation types are quite finite. And can be in an AI editor, with weights. E.g. Wing attack, turn left, turn right, strong center, broad front, column line wedge yada yada).

Assign those HQs roles (rally, teams, extra platoon). Re-assign a platoon or two, or not. Then place platoon HQs according to that scheme. Then place squads near their HQs. Then place weapons near any HQ, keeping command spans in the 2 to (units/HQs + 3) range.

Then you can worry about fine tuning deployments. And there it comes down to weights. Distance from HQ, distance from start line, cover, field of view (if you can get as fancy as the last). "But we can't get this to work". Um, if you try to do only this step without any of the previous you are trying to put 40 units onto the whole field and you will mess up half of them one way or another.

If you do all of the previous first you are placing 2-7 units all within 60m of a known point. Armies have hierarchical structure for a reason. Don't try to get it to "emerge", put it in. Put it in fuzzed by variety and 1/3rd cross attachment etc, but don't expect it to pop out of random placement of half the units in the middle or whatever.
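
Here is a very rough sketch, in Python, of the kind of placement order I mean. None of this is CM code - every name, number and weight below is made up purely for illustration, and a real version would obviously need more than three company anchors per formation.

[code]
import random

# Invented illustration of a top-down deployment order - not CM code.
# A "formation" is just a named scheme of company anchor offsets (in metres).
FORMATIONS = {
    "broad front":   [(-150, 0), (0, 0), (150, 0)],
    "strong center": [(-60, 0), (0, 40), (60, 0)],
    "column":        [(0, 0), (0, -120), (0, -240)],
}

def deploy(companies, setup_center):
    """companies: [{'hq': id, 'platoons': [{'hq': id, 'squads': [ids]}, ...]}, ...]"""
    # 1. Decide the overall formation first. A weighted coin flip is fine, but decide.
    plan = random.choice(list(FORMATIONS))
    placements = {}
    for coy, (ax, ay) in zip(companies, FORMATIONS[plan]):
        cx, cy = setup_center[0] + ax, setup_center[1] + ay
        # 2. Give the company HQ a role and a spot before touching any squad.
        placements[coy['hq']] = (cx, cy - 100)        # e.g. a "rally" HQ trails by ~100m
        # 3. Spread the platoon HQs around the company anchor, roughly 80m apart.
        for i, plt in enumerate(coy['platoons']):
            px, py = cx + (i - 1) * 80, cy
            placements[plt['hq']] = (px, py)
            # 4. Only now place squads, each within command radius of ITS OWN HQ.
            for j, sq in enumerate(plt['squads']):
                placements[sq] = (px + (j - 1) * 25, py + 20)
    # 5. Fine tuning by cover/start-line weights comes last, as a purely local search
    #    around each of these points - the easy part once 1-4 are done.
    return plan, placements
[/code]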

Once you actually have an internal state for things like the overall formation and plan, and between that level and units you have a structure of HQ properly spread on the map, keeping coordination gets a lot easier. You can give group orders to platoons. You can give the platoon HQ its order first, and give the others orders to move to cover (not known to be occupied by enemy) close to its ending waypoint.

You can have a weight in the order for all the platoons in a given company group coming from their role in the overall formation, and another coming from the plan. Will this weight always determine a move? No, local terrain, or enemy, or internal morale state, may change any of that.

Before deciding on a squad's move, the HQ's move should first be known. But that HQ's movement should reflect variables coming from the units under its command - like how far away they are from the HQ, whether they are pinned or broken, whether they are tired. The HQ should dial down aggressiveness when the men are in bad shape or far away. Instead it wants to be in cover and near the center of the position of its units, but otherwise not moving. That lets the others get on station again.
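
A sketch of that, with made-up field names and thresholds:

[code]
# Invented illustration: the HQ's willingness to move forward this minute, derived
# from the state of the units under it. All thresholds are placeholders.
def hq_aggressiveness(base, subordinates, hq_pos):
    score = base
    for u in subordinates:
        dx, dy = u['x'] - hq_pos[0], u['y'] - hq_pos[1]
        if (dx * dx + dy * dy) ** 0.5 > 75:            # stragglers: wait for them
            score -= 0.2
        if u['morale'] in ('pinned', 'panic', 'broken', 'routed'):
            score -= 0.3
        if u['tired']:
            score -= 0.1
    # At or below zero the HQ just sits in cover near the centre of its units and rallies.
    return max(0.0, score)
[/code]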

Company HQs assigned a "rally" function trail their platoons by 200m. They head for red morale friendly units. When those recover above "pinned" they get movement orders to follow that company HQ. It keeps doing rally until it has 3-4 units or there are no other friendly reds (for large cases, you can ignore friendly reds twice as far from this HQ as from some other company or battalion HQ).

Then, if it has fewer than 3-4, all units under it "move" back to their original HQ and it continues its "rally" function. If it has more, it transitions to an "extra platoon" role instead, keeps its guys, and acts like any other platoon. If it accretes additional units later, fine, but it doesn't try to.
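
In code-sketch terms (invented field names and thresholds, not anything real):

[code]
# Invented sketch of the "rally" company HQ role as a tiny state machine.
RED = ('panic', 'broken', 'routed')

def dist(a, b):
    return ((a['x'] - b['x']) ** 2 + (a['y'] - b['y']) ** 2) ** 0.5

def rally_orders(hq, friendlies, other_hqs):
    """One orders phase for a company HQ in the 'rally' role. Returns (new_role, orders)."""
    # Reds worth chasing: not twice as close to some other company or battalion HQ.
    reds = [u for u in friendlies
            if u['morale'] in RED
            and not any(2 * dist(u, o) < dist(u, hq) for o in other_hqs)]
    # Units already collected that have recovered above 'pinned' keep following this HQ.
    keepers = [u for u in hq['collected'] if u['morale'] not in RED + ('pinned',)]

    if reds:                      # still work to do: trail the platoons, head for the reds
        target = min(reds, key=lambda u: dist(u, hq))
        return 'rally', {hq['id']: ('move', target['x'], target['y'])}
    if len(keepers) >= 3:         # enough strays collected: act as an extra platoon from now on
        return 'extra platoon', {}
    # Too few: send them back to their original HQs and keep rallying.
    return 'rally', {u['id']: ('move to', u['parent_hq']) for u in keepers}
[/code]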

If instead you try to get all of this to happen by having one path finding routine used by all units, just with some slight weighting for squads but not HQs to prefer routes closer to their HQ, then you will get spaghetti. And kamikaze HQs that do not wait for their men. You don't have to reinvent the wheel here. Just use the structure built into real armies, in the decision order for movement assignments. It exists to sort out exactly that kind of confusion.

P.S. http://www.gameai.com/influ.thread.html

As for the mere atmospheric spitballs, who cares?

[ October 29, 2003, 06:54 AM: Message edited by: JasonC ]


Originally posted by Haohmaru:

People claim that scenarios can be tailor-made to "assist" the AI. I cannot comment on that, but I have to say that I've played a large number of the scenarios on the CD and they have produced consistent results. The very first time I played CMBO (aside from the demo) I got a minor victory, and I was a complete newbie at the time. Surely the scenarios originally designed and shipped with CMBO and CMBB were designed with the AI in mind.

As the designer of 21st Army Counterattacks, I would like to comment on this.

The scenario probably does not play well against the AI, contrary to what I thought when I designed it for the CD. The reality is that when we designed scenarios for the CD, I at least was working with a lot of CMBO baggage in mind. It was also the first operation I had designed. It is therefore likely to be flawed.

In the time since I first got hold of the CMBB beta (April last year, I think) I have experimented a lot with this, and much of what I did for the CD then I would not do the same way again. Live and learn. If you want to get the measure of the AI, you need to pick up something that has been designed recently, by an experienced designer (not necessarily me; other people have done more work since the CMBB release and will have a better understanding). You should also pick up a game that was purposely designed for single-player. Then you need to leave the forces as set up. If the designer has done the work properly, the forces will stay where they are.

Like humans, the AI seems to sometimes have a 'bad day at the office'. But I have also seen it perform splendidly, in defense. If you play sloppily, it can hand you your rear end. It is not often that it happens, but happen it does.

It will never be the measure of even a semi-competent opponent in multiplayer, though. But that does not make it "broken", or "awful", or indeed "embarrassing". The embarrassing thing about 21st Army Counterattacks is my own design flaws. That has nothing to do with the AI.


Nice discussion.

I've been playing a series of Axis assaults versus the AI. The AI gets a +25% bonus, no experience tweak (in CMBO I was able to get to the +50% and +2 level eventually). If I don't get a MAJOR VICTORY, I consider it a loss. I've had to down-tweak to +10%. Overall, a good AI.

As for JasonC, I very much like his suggestions in the post just a few above this. I think that would make the AI much better. I have NO idea how hard or time-consuming it would be to put formations and HQ changes into code. (In the last QB vs the AI - which was one of the most enjoyable I've ever played - the AI indeed set up platoons with hundreds of meters between squads and HQs, and company HQs were nowhere near their companies.)

Now, Battlefront, it seems clear: existing AI is good - but make it better. More coherent and more tactically "sensing". I will solve your time issues: I hereby publicly volunteer as a beta tester for CMx2. smile.gif

Regards,

Ken


Please deposit $.02 (US) before proceeding, thank you!

**Click, rattle, rattle, CLINK**

**Click, rattle, rattle, CLINK**

Okay, now for my two cents worth.

I think the AI does a good job in introducing the newbie to this incredibly complex game. I also think a lot of veteran gamers here forget that this game is a totally unique concept in wargaming. The AI does an excellent job in demonstrating the value of covered arcs, fields of fire, LOS, overwatch, artillery delays, fatigue, morale, etc. While it may not be capable of beating a good human player, it is certainly capable of teaching the finer points to a new player.

The real beauty of this series of games is the multi-player aspect. I, personally, have never enjoyed a game more than this one via PBEM.

The design that allows this is outstanding. Two entire communities (Peng & Cheery Waffle) have sprung up just to share the joy and sorrow of playing this game head-to-head.

To me, this has placed a far greater value on the game.

Furthermore..........

Please deposit an additional $.02 (US) to continue.


I'd like to chime in here. I am somewhat surprised at Steve's tone and attitude. People would like a more challenging computer opponent. This is clearly the case. Based on my experience, the AI has not been significantly improved in the two and soon to be three CM games. With an engine re-write in the works, it seems like the right time to try and make it better.

Also, I think it is important that we define what we mean by the AI. I am NOT referring to the Tactical AI; it is the strategic and operational AI I am referring to. Somehow the AI needs to move units in a more coordinated manner. In open environments (long LOS) armor needs to lead; in restricted LOS infantry should lead. On the defense, units need to stay in foxholes and not counterattack whenever a flag is taken.

I think spending a bit of time considering improvements in the AI would be well worth it.

Warren


Right, Warren. My previous post was an attempt to suggest how to start with that. The basic problem seems to be that the CM AI addresses its basic task as waypoint finding for individual units. What it needs is an internal formation plan, plus an internal assessment of areas of the battlefield. It may have some of these things, but they could certainly be better. This intermediary level should actually make the task of the path finding routine simpler rather than harder.

What do I mean by formations? I mean moving the HQs first, and having all of the units currently subordinated to that HQ act based on what their HQ is doing, most of all. And when setting up the HQs, keeping them spread (1) and arranging that spread into a formation plan (2), selected from a small set of possibles. Confusion could still occur, particularly as a result of combat. But it wouldn't start confused.

What do I mean by assessment of areas of the battlefield? I mean a simplified internal representation of the state of control of various points, and influence to other points. You can do this by keeping one array with a numerical value for each 20x20 tile, positive being friendly and negative being enemy dominated. To deal with combined arms effectively it would be good to keep two, one for soft-infantry-HE and one for hard-armor-AT.

Take the soft case first. Start with zeros everywhere. Then for each friendly infantry unit add the fp rating to the middle of its nearby tiles. Reduce by the exposure value of the cover present (.25 for rough, .7 for open, etc). To deal with low ammo cases, multiply by ammo remaining/10 if ammo is less than 10. For on-map HE you'd use something like ROF x blast / 2 for direct fire, / 3 for mortars. Cover reduction for those: .5x for trenches, foxholes, craters, buildings and rubble, 1x for anything else. You can add a morale scale factor where red contributes nothing, pinned 1/4, up to alerted 9/10ths or whatever. Ideally you'd want LOS correction, but it would help just to have the fall off with range.

This gives you a sort of "own firepower grid". You don't have to update it continually, once a minute is plenty. Then reduce its values for known enemy positions. Basically you want to do something similar for the enemy but with negative numbers, estimating "infantry" as a vanilla squad, etc.

One harder bit there is dealing with only semi-known enemies - you see all friendlies but only a few enemies early in an engagement, for instance. This is where a global aggressiveness parameter might be useful. You can assume you've spotted only a 1/n portion of the enemy force and multiply known sightings by n (cautious until you see many units) or n/4 (thinks the map is mostly clear until it sees things).
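
Roughly, in sketch form - the factors and the fall-off below are placeholder numbers, not CM's, and there is no LOS correction:

[code]
# Invented sketch of the "soft" influence grid: 20m tiles, friendlies positive,
# known enemies negative, crude fall-off with range. All numbers are placeholders.
COVER_EXPOSURE = {'open': 1.0, 'rough': 0.25, 'woods': 0.3, 'foxhole': 0.5}
MORALE_FACTOR  = {'ok': 1.0, 'alerted': 0.9, 'pinned': 0.25, 'broken': 0.0}

def soft_influence(width, height, friendlies, known_enemies, spot_mult=1.0):
    grid = [[0.0] * width for _ in range(height)]

    def splash(unit, sign):
        fp = unit['fp'] * COVER_EXPOSURE.get(unit['cover'], 1.0)
        fp *= MORALE_FACTOR.get(unit['morale'], 1.0)
        if unit['ammo'] < 10:                     # low-ammo units count for less
            fp *= unit['ammo'] / 10.0
        tx, ty = unit['tile']
        for dy in range(-3, 4):                   # fall-off with range over nearby tiles
            for dx in range(-3, 4):
                x, y = tx + dx, ty + dy
                if 0 <= x < width and 0 <= y < height:
                    grid[y][x] += sign * fp / (1 + abs(dx) + abs(dy))

    for u in friendlies:
        splash(u, +1.0)
    for e in known_enemies:
        splash(e, -spot_mult)     # spot_mult is the "n" scaling for partly spotted enemies
    return grid                   # recomputing this once a minute is plenty
[/code]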

Then the absolute value of the numbers gives you a sense of the front. The zero contour line on such an influence map tells you what parts of the map you mostly own and what parts the enemy owns. Large negative areas locate the enemy. Large positive areas locate friendly higher level formations, above the scale of each unit or even each HQ and its commanded units.

How do you use such an influence map once you have it? Depends on your plan. A column attack plan might look for paths to flags through areas free of large negative numbers. The HQ tagged as lead in the column goes that way, the next follows it, the next follows it. A wing attack plan "pretends" everything on the left half of the map is twice as important as everything on the right. A conservative defense directs forces to the largest negative numbers - reserves go where the enemy is strong.
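
For instance (sketch only, invented names):

[code]
# Invented examples of how different plans might read the same influence grid.
def route_penalty(grid, path):
    """Column attack: prefer paths to a flag that avoid enemy-dominated (negative) tiles."""
    return sum(max(0.0, -grid[y][x]) for x, y in path)

def wing_weight(x, map_width, wing='left'):
    """Wing attack: pretend one half of the map is twice as important as the other."""
    on_left = x < map_width // 2
    return 2.0 if on_left == (wing == 'left') else 1.0

def reserve_destination(grid):
    """Conservative defense: send reserves toward the largest negative numbers."""
    v, x, y = min((v, x, y) for y, row in enumerate(grid) for x, v in enumerate(row))
    return x, y
[/code]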

With an AT map you can use the penetration value of the weapon vs. 30 degree slope at 500m as the primary measure of its firepower strength. As for armor, take the thinnest front plate, and project influence forward 500m and attenuated 1000m from the present facing in a 90 degree fan.

The armor war is where it would make an absolutely huge difference to have an influence map that takes LOS into account. You'd have no cover factors. But from each tile you are already in, you check LOS to each other tile, with influence x0 (or x0.1 if you want to "remember" the guy behind the hill a bit) for blocked and x1 for clear LOS.

What do you do with an AT influence map if you have one? You avoid the enemy's strongest AT areas. You put your AT assets in places that increase their total influence without entering enemy influence. You can have threshold effects here, related to own armor. I.e. the thinner you are the more important it is to avoid enemy influence areas rather than to increase your own. (So you will "keyhole"). The thicker you are the more you can just maximize your own (so you can go up on a crest and dominate everything).
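
The threshold idea in sketch form (made-up numbers, and it assumes separate own/enemy AT grids with enemy values stored as positives):

[code]
# Invented illustration of the thin-armor/thick-armor trade-off when scoring a tile.
def tank_tile_value(own_at_grid, enemy_at_grid, x, y, front_armor_mm):
    own   = own_at_grid[y][x]       # how much sitting here adds to our AT coverage
    enemy = enemy_at_grid[y][x]     # enemy AT influence reaching this tile (stored >= 0)
    caution = max(0.0, 1.0 - front_armor_mm / 100.0)   # thick plate: not very worried
    # Thin tanks weight avoidance heavily and end up "keyholing";
    # thick tanks mostly maximise their own influence and take the crest.
    return own - 3.0 * caution * enemy
[/code]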

Internal influence maps and formation state plans then modify path finding, much as terrain already does. Right now infantry units seek waypoints through cover toward flags, while armor units seek waypoints that are passable to vehicles and move toward flags. The role cover or passability plays in that today can be played by influence map heuristics.

And this need not make everything more complicated to resolve, because you can reduce the problem that needs to be solved to a local one ("step") for all but the HQ units. The subordinates want to take steps that improve a measure that only looks at waypoints near their HQ (whenever there are any locations near enough to reach, e.g. in a minute). That is, only the HQs need to look at the whole map. The subordinates only look at tiny bits around their HQ that they can reach in a minute.
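
Sketched out, with placeholder names and weights:

[code]
# Invented sketch: a squad never searches the whole map. It only scores the handful of
# tiles it can reach this minute, against its HQ's next waypoint and the influence grid.
def squad_step(reachable_tiles, hq_waypoint, grid, w_cover=1.0, w_station=1.0):
    def score(tile):
        x, y = tile['xy']
        station = -w_station * ((x - hq_waypoint[0]) ** 2 + (y - hq_waypoint[1]) ** 2) ** 0.5
        cover   = w_cover * tile['cover_value']
        danger  = min(0.0, grid[y][x])        # enemy-dominated tiles score negative
        return station + cover + danger
    return max(reachable_tiles, key=score)
[/code]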

Lots of routines like this have been written for other games, including dramatically smaller games on older platforms, with nothing like the CPU power available for present CM.

Naturally, writing such routines is work, the "one-off" work I've spoken of repeatedly above. If you write them once with a high level of flexibility and generality, however, and make the variables involved parameters that can be changed by an editor, you do not have to do all of the additional work of getting sensible let alone optimized behavior out of the result.

You just hand that over to the users. They can find which formations work, which weight to give to armor vs. fp, how important to make HQ station-keeping compared to cover finding, and all the other trade offs that will result. You put the AI in a general enough form that it has an internal heuristic "state" which has parameters. Control freak users will then iterate like mad on those parameters, to find combos they like, or just ones they don't know yet and so don't expect.

Use whatever is useful...

[ October 29, 2003, 04:22 PM: Message edited by: JasonC ]


Interesting discussion indeed.

Although I wish JasonC were right, I have to agree with Steve:

investing that much time into the AI isn't worth it.

Of course we all would like to have a CM AI as strong as in chess programs.

But dreaming about one thing and reality are two different things.

First we should remember to see the AI as a training aid and not as a replacement for a human opponent.

If you want the real CM experience, play against a human opponent.

Old but true.

And here lies an enhancement that BTS could definitely make: 1-turn PBEM.

IMO it would be much better if BTS invested its time in making 1-turn PBEM possible than in tweaking the AI.

PBEM play would then become even more attractive and three times faster, and playing the AI automatically less attractive.

But nevertheless, I want to throw in my two cents about the AI, although I think it will contain nothing really new:

The AI now works without ANY memory. With that in mind, it shows what a tremendously good job was done programming it. Unbelievable.

1. Every unit gets its own memory.

Problem: turn size grows with the time the battle lasts.

Solution: store only rudimentary data (i.e. tank/infantry type, threat type & location, movement direction (vector)); delete 'old' data.
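
Something like this, as a sketch only (the structure and the three-minute age limit are just examples, not CM's):

[code]
# Sketch only: a few fields of memory per contact, aged out so turn files stay small.
from dataclasses import dataclass, field

@dataclass
class Contact:
    kind: str        # 'tank' / 'infantry' / 'gun'
    x: float
    y: float
    vx: float        # last observed movement vector
    vy: float
    last_seen: int   # game minute

@dataclass
class UnitMemory:
    contacts: list = field(default_factory=list)
    max_age: int = 3                  # minutes; older sightings get deleted

    def remember(self, c: Contact):
        self.contacts.append(c)

    def prune(self, now: int):
        self.contacts = [c for c in self.contacts if now - c.last_seen <= self.max_age]
[/code]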

2. The AI gets three layers.

Maybe it is so difficult to make changes to the AI now because such independent layers do not exist.

Every single tweak now simply affects everything else.

Lowest layer: the 'TacAI'. It determines how the single unit has to act to survive; depending on weapon class, each one has its own set of rules.

Units receive their orders from the medium layer.

Medium-layer orders have lower priority, except in very special situations where the individual 'TacAI' survival rules can be overridden by the medium layer (i.e. during an attack or advance decided by the HQ from the higher layer).

Under normal conditions the place where the unit is needed, and when, is given by the higher layer, but HOW this is done (the path) is determined by the unit's 'TacAI' (keep out of LOS, stay in woods).

To me this seems to be the strongest point of CM. Units in danger already act mostly as they should.

Medium layer: the HQ layer. It commands the HQs, determines how the squads are spread/placed, and also contains the rules for infantry support weapons, if necessary.

Tank tactics are decided here, using the current information supplied by the 'TacAI' layer about enemy positions and movements, and the needs of the 'StratAI' layer.

I.e. it keeps tanks massed when attacking (one of the real strengths of the AI: LOS calculations and timing calculations so that they appear simultaneously).

The best advance paths towards the key locations for the HQs are given by the 'StratAI', unless they are made obsolete by 'TacAI' experience.

Where the squads are needed is determined here, depending on the 'StratAI' decisions.

This layer seems to offer the most room for improvement now, because it coordinates single units into formations with certain tasks.

Highest layer: the 'StratAI'. It decides which victory locations or key positions need to be taken first.

It even decides whether forces are concentrated on one location (victory flag) only, while ignoring the others. It makes the decisions about the terrain, marking 'allowed' and 'forbidden' areas for the other layers. It predicts where the enemy will start, where he will advance, and where his target locations will be, and chooses its own 'force centers' (infantry only, AT, tanks, ...) accordingly.

It decides the HQ status: between preparing for an attack (moving into attack positions) and the attack itself.

It also contains the cheating for the AI ;)
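
As a sketch, one orders phase flowing through the three layers could look like this (all names invented, nothing from the real code):

[code]
# Invented sketch of one orders phase passing through the three layers.
def orders_phase(strat_ai, hq_ai, tac_ai, units):
    objectives  = strat_ai.pick_objectives()     # highest layer: which flags/areas, which to ignore
    group_moves = hq_ai.assign(objectives)       # medium layer: which HQ group goes where, and when
    orders = {}
    for u in units:
        goal = group_moves.get(u['hq_id'])       # squads inherit their HQ group's goal
        orders[u['id']] = tac_ai.path(u, goal)   # lowest layer: the actual path (LOS, cover)
        if tac_ai.in_mortal_danger(u):           # survival rules may override the layer above
            orders[u['id']] = tac_ai.self_preserve(u)
    return orders
[/code]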

[ October 29, 2003, 08:18 PM: Message edited by: Steiner14 ]


Ah, another easy-to-implement approach to AI by JasonC. Perhaps we should for once accept that in a program as complex as CM there are practically no easy-to-implement changes.

I have proposed a few of those myself in the CMBO days (per-weapon ammo tracking for squads and a secure 2 email per turn PBEM system come to mind), but if Charles says that there are subtle problems in the code that make those improvements quite tricky and time-consuming to implement, I have to believe him. After all, he's the one who knows how the CM source code works, not me. And I'd think that changes to the AI are much more complex than my proposals above.

Perhaps the guys advocating some easy improvements should go back and read about the TCP/IP floating point problems during the development of CMBO 1.10 and the amount of time needed to fix them.

Dschugaschwili


Some things I do or don't do when playing against the AI.

I don't use the initial setup phase on a QB scenario. (Except for making sure vehicles that cannot move in heavy terrain are not in heavy terrain on the map's edge).

I always give the computer AI at least +25% more units. I also set the AI up with the "highest" units it can have. I also give it a +3 bonus. I never take "highest" units, will either be medium or green.

Heh try a game like this vs a computer "assault" and see how well you do. Also minimum of 30 turns, gives the computer time to smash you up pretty good.

It's really the only way to have challenging games vs the computer AI: while the human has more strength in "intelligence", we must give the computer AI more strength in "numbers". It's a fair balance. Then one can ask, how strong is YOUR intelligence? heh We might someday see computer AI FORUMS discussing the qualities of human intelligence! lol

Some examples of computer AI forum talk:

"Man that human intelligence I play with suxors!!"

"They play so unfair, you should see how they "cheat"! (lol)

"I beat the human intelligence so badly, he broke my CD" and erased me from his hard-drive".

"Talk about "cheating" everytime I'm beating him, he just reloads a previously saved game"!

"Heh, when I'm losing, I just crash to desktop" (lol)

kellysheroes(wanna play a game?) chuckie cheesy smile

[ October 30, 2003, 09:02 AM: Message edited by: Kellysheroes ]


Warren Peace,

I'd like to chime in here. I am somewhat surprised at Steve's tone and attitude.
To understand this you have to put things in context. And that is dealing with people, like JasonC, who can't put their money where their mouths are, yet have zero problem mouthing off. His condescending, egotistical presentation (which is very, very well known here on this Forum) makes it even worse.

People would like a more challenging computer opponent. This is clearly the case.
Very true. And I want a pickup truck that gets 100 mpg. Hell, I'd settle for 40 mpg! But chewing out GMC engineers and basically calling them unimaginative and incompetent does nothing to advance this.

The main problem here is that Jason thinks that he is onto something. The fact is that he hasn't a clue what he is talking about. Not even a hint of one. Yet he drones on and on as if he does.

Has Jason ever written a computer program? How about a complicated one like CM's simulated world? Ever written an AI? Charles has done all of this several times. And innovative ones by any standard. So why is it that he is so convinced he knows what he is talking about and Charles doesn't?

Based on my experience, the AI has not been significantly improved in the two and soon to be three CM games.
Basically correct. There were tweaks and new behaviors added to CMBB and CMAK (think multi-turreted tanks work without special coding?), but nothing fundamental. CMBB took 2 years to complete without any AI coding, CMAK 1 year. Charles hit the point of diminishing returns on AI programming within CM's code framework before CMBO was even done. To invest any amount of time on the AI would not amount to much and at the same time would put us at financial risk. Not to mention the abuse from customers who want the game yesterday.

With an engine re-write in the works, it seems like the right time to try and make it better.

Also, I think it is important that we define what we mean by the AI. I am NOT referring to the Tactical AI; it is the strategic and operational AI I am referring to. Somehow the AI needs to move units in a more coordinated manner. In open environments (long LOS) armor needs to lead; in restricted LOS infantry should lead. On the defense, units need to stay in foxholes and not counterattack whenever a flag is taken.

I think spending a bit of time considering improvements in the AI would be well worth it.

The problem here is "a bit of time". Planning, coordinating execution, and adaptation of strategic plans can not be done in "a bit of time". This sort of programming is intensive, difficult, and full of dead ends. Meaning, Charles spends 3 weeks coding something only to find out that there is some sort of logic hole that requires another 3 weeks of coding, only to find another logic hole that requires... so on and so on.

Jason howls that he is not requesting Human like intelligence. But he is. Over and over again. The reason is that he fails to understand that the CONCEPTS themselves are Human and therefore require Human like intelligence to execute. Prime example from above:

Just use the structure built into real armies, in the decision order for movement assignments. It exists to sort out exactly that kind of confusion.
The structure in real armies is an artificial construct requiring, at all times, Human input in order to be maintained. Even in the highly trained US Army, with all sorts of neato gizmos, coherent C&C is often the exception rather than the rule. Yet Jason thinks Charles can sit down and in a couple of afternoons come up with something that smart, thinking, trained individuals have a bloody hard time doing. I don't expect Jason to see this simple truth, but I would hope the others in this thread can.

Steve


"the CONCEPTS themselves are Human and therefore require Human like intelligence to execute."

No, they only require human intelligence to program, or even to instruct a program written with sufficient generality (parameters).

In the specific case you mentioned I was talking about real army hierarchy. You seem to think this means everything a real army actually does to deal with confusion. It doesn't. The concept involved is "hierarchy". Which is a purely formal structure. It means nesting - squads inside platoons inside companies.

Each squad does not make decisions independently and expect the result to add up to a company. Instead, the company makes one simple decision first and sends it as a direction to platoons. Platoons are directed essentially by their leaders going somewhere themselves. Squads orient on those leaders according to a small number of possible schemes.

Can humans adjust all of that with many layers of fine tuning? Sure. But that is not what is required. All that is required is the same hierarchical decision order. That alone will produce the ordering effect the formal structure itself supplies. You can order subroutines inside each other in exactly the same way. It is done all the time, because the same scheme of hierarchical nesting sorts out many of the complexities of code.

Right now the CM AI gives orders to every unit on its side. Without any change in how those orders are given (though those also could be improved etc), they can be given to only the platoon HQs, first. While units under them seek not flags via covered routes, but e.g. cover within 50m of their HQ's waypoint. After the HQs have all plotted.

This is a conceptual change from the programmer. But it does not require more "human like thinking" from the AI. It just treats the HQ's minute-ahead waypoint as the "flag" of subordinate units, while the HQs treat the actual flags as flags, like they do now. Presto, hierarchy.

(If you want both to be possible, put in a knob. Squad infantry orient on: flags, own HQ, spotted enemy... User adjusts).
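
In sketch form (names invented; the only point is the ordering and the goal substitution, and find_path stands in for whatever covered-route finder already exists):

[code]
# Invented sketch: HQs plot first toward real flags; squads then plot toward their own
# HQ's minute-ahead waypoint (or flags, or spotted enemy), depending on a user dial.
ORIENT_ON = 'own_hq'       # dial: 'flags' | 'own_hq' | 'spotted_enemy'

def nearest(points, unit):
    return min(points, key=lambda p: (p[0] - unit['x']) ** 2 + (p[1] - unit['y']) ** 2)

def plot_side(hqs, squads, flags, spotted_enemy, find_path):
    orders = {}
    for hq in hqs:                               # 1. HQs treat the actual flags as flags
        orders[hq['id']] = find_path(hq, goal=nearest(flags, hq))
    for sq in squads:                            # 2. squads plot only after their HQ has
        if ORIENT_ON == 'own_hq':
            goal = orders[sq['hq_id']][-1]       # the HQ's last waypoint is the squad's "flag"
        elif ORIENT_ON == 'spotted_enemy' and spotted_enemy:
            goal = nearest(spotted_enemy, sq)
        else:
            goal = nearest(flags, sq)
        orders[sq['id']] = find_path(sq, goal=goal)
    return orders
[/code]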

Of course I've written programs and complicated ones. What is needed here is not however greater programming virtuosity - I'm sure Charles is superlative in that respect - but more tactical analysis pre-applied to the AI scheme.

I think in the long run the only way to get that to be truly outstanding is to have a lot of time spent iterating on it by a lot of users, supplying said time gratis. So the bottleneck step is to get an AI scheme general enough they could modify its tactical analysis, in effect.

The human thinking involved would come from users. They will notice what the AI is doing wrong, and find the more sensible things it could try. But they need dials in their hands to do this. Building those dials is work, no question. Not impossible work, and not demanding deep blueness or human thinking, but work.


Steiner14

Although I wish JasonC were right, I have to agree with Steve:

As a designer, and not a programmer, I too would LOVE to think Jason is right. But I have been doing this stuff for 10 years and worked with a dozen programmers on as many games. That experience tells me that Jason is wrong. Unfortunately Jason thinks that dreaming about something and actually doing it puts us on an even footing in terms of credibility. I expect that from him, but am disappointed that others can't see the simple facts here.

investing that much time into the AI isn't worth it.

From a purely economic standpoint, AI is the worst investment any game developer can make. It is the most time-consuming and difficult thing to do, yet will always be torn to shreds by gamers no matter how good it is. And as Michael said, the future of gaming is multiplayer. Better to spend our time making much better games than to make marginally better AI that will never be valued in relation to the effort we put into it.

Of course we all would like to have a CM AI as strong as in chess programs.

Nobody wants this more than us. Why? So the whiners would have one less thing to whine about (not that they wouldn't still find SOMETHING to whine about with the AI :( ). We could make the AI cheat fairly easily, which would improve it to some degree, but we are philosophically opposed to doing this. So we will just have to see what we can do with the next engine to improve over the previous one. We think we can do a lot, but it will never offer the kind of challenge a decent Chess AI can provide.

But dreaming about one thing and reality are two different things.
Tell that to Jason :D

10 years ago I designed a Learning AI. In theory it would be "easy" to implement. I even had segments working in a spreadsheet. But I quickly learned that my "practical" designs were daydreams. My AI designs following this were far more practical, but still largely impractical to implement. Ah... but why bore everybody with relevant experience and observations when we have so many more enlightened posts from Jason yet to come!

And here lies an enhancement that BTS could definitely make: 1-turn PBEM.

IMO it would be much better if BTS invested its time in making 1-turn PBEM possible than in tweaking the AI.

A perfect example of competition for limited resources. Everybody has their own pet feature/improvement to suggest, and we certainly can not do them all. Especially when, like the one you suggest, huge sections of code would have to be rewritten. IIRC Charles estimated 4 months to make a 1 turn PBEM system work within the existing CM code base. Simple to suggest, impractical to implement. At least for the existing engine. Should be no problem for CMx2.

But nevertheless, I want to throw in my two cents about the AI, although I think it will contain nothing really new:

The AI now works without ANY memory. With that in mind, it shows what a tremendously good job was done programming it. Unbelievable.

Thanks, and largely correct. Some things are retained, but it is indeed very basic because it is difficult to predict and manage. Jason obviously didn't like this coming from me, so I guess he will ignore your comments as well. After all, informed comments are easily discarded by him.

1. Every unit gets its own memory.

Problem: turn size grows with the time the battle lasts.

Solution: storing only rudimentary data (i.e. tank/infantry type, threat type & location, moving direction (vector)); deleting 'old' data

This is the system that is currently used during a turn. The shortcoming is that little of this information is stored and transfered onto the next turn. This system of basic detials, with very limited scope and parameters, is the only practical methodology to use for the near future.

2. the AI gets three layers

Maybe it is that difficult now to make changes to the AI now, because there do not such independent layers exist.

Every single tweak now, simply affects everything else.

Correct, and it will have to remain that way into the future. It is practically impossible to isolate the various layers from each other. The same is true in the real world. If the mailroom of a large office figures out how to deliver mail at 8am instead of 10am the entire dynamic of the office will change in some ways, perhaps unpredictable ones. Much like how society has been changed by the Internet.

Where the squads are needed is determined here, depending on the 'StratAI' decisions.

This layer seems to offer the most room for improvement now, because it coordinates single units into formations with certain tasks.

Unfortunately, this is also the hardest layer to program. It is also the hardest "layer" in real armies. It is fairly easy to teach a soldier to clean and shoot a rifle. It is fairly easy to train a Major to know where each Company should be deployed. It is very difficult to figure out all that is between these two. Soviet military doctrine is a prime example of this.

In WWII and after, the Soviet soldier was a strong force on the battlefield. Time and time again you see German and other Axis accounts of the positive qualities of even poorly trained Soviet soldiers. Time and time again the higher level Soviet planning (especially at STAVKA level) frustrated or defeated the Axis forces opposite them. But at the in-between layer... static and inflexible doctrine, often carried out with horrible coordination and even determination.

The reason? Because coordinating diverse forces on a battlefield is DAMNED difficult. And for the same reason it is just as bad to program. That is reality.

Dschugaschwili.

Perhaps the guys advocating some easy improvements should go back and read about the TCP/IP floating point problems during the development of CMBO 1.10 and the amount of time needed to fix them.
An excellent point. Problem? Floating point sloppiness in some Pentium processors. Theoretical Solution? A couple of hours/days of smart coding to account for this issue and keep the faster turn processing feature. Real World Solution? Weeks of recoding and the abandonment of the faster turn processing method.

If this were an AI issue Jason would call Charles lazy or unimaginative because he couldn't do the Theoretical Solution. I'd LOVE the chance to sit down and examine Jason's work and evaluate it in the same way. I'm sure I could tear him to pieces no matter what his job is because it is so easy and unaccountable.

Steve


JasonC,

Um, the AI doesn't start in command. It doesn't deploy in command at set up. Combat conditions have little to do with it. "Perfect" is laughable. It is a lucky AI platoon that starts with all squads in command radius. It is a lucky AI weapons team that starts with any HQ within 100m. It is a lucky AI gun that starts with LOS beyond midfield.
Sigh... Jason, there is nothing "lucky" about this. What you, and other amateurs without any experience, can't grasp is that nothing happens by "accident". Everything that happens was coded to work that way, even if the end results are not good. Charles probably spent weeks coding this part of the AI.

Pretending that asking for improvement of this level stuff is asking for perfection or the equal of human thinking is comical.
Pretending you know what you are talking about is what I find comical. You are asking for a skyscraper but think you are asking for a 2-story building. If you are nearly blind it is easy to understand how you can be so off the mark. And that is why I understand your point of view; because I understand that you don't.

But it obviously pays no attention to keeping HQs spread because it will put 4 of them on the same tile. It obviously pays no attention to what use to make of its company HQs because it will put 2 of them on the left board edge one behind the other with a single squad each. Not in the heat of combat. At set up. Shall I go on?
Please do. Everything you said above is 100% bunk. The AI *DOES* pay attention to all of that stuff. There is in fact specific code that attempts to retain unit cohesion. However, such a "simple" concept is very hard to make the AI understand in context with other deployment/employment requirements. Of course you don't understand this, so I am sure you will once again dismiss what is fact because it doesn't fit your uninformed opinions.

The only thing that matters on the subject is what will improve the product. Spin won't.
Exactly. What I am saying is not "spin", but fact. What you are saying is uninformed, naive, and (as is usual with you) condescending horsecrap.

Here is what you need instead. Some top down decision trees and internal planning states (flag variables, routine weights, bias thresholds, yada yada) that have actual tactical content to them. As in, first count companies. Decide side by side or one behind another. If you flip a coin that is fine, but decide. (Formation types are quite finite. And can be in an AI editor, with weights. E.g. Wing attack, turn left, turn right, strong center, broad front, column line wedge yada yada).
This is exactly how the AI is coded. The fact that it doesn't work out like that to your satisfaction is a reflection of the difficulty of the task, not the lack of imagination.

[Typical Jason type rants about how he knows everything with no experience and we know nothing with tons of experience deleted]

Just use the structure built into real armies, in the decision order for movement assignments. It exists to sort out exactly that kind of confusion.
I already addressed this in my previous post. To sum it up... you clearly don't have a clue what you are talking about, what you are asking for, or whether what you are expecting can be done. All the daydreaming you care to sputter out on this Forum (laced with your trademark attitude) won't change reality. Only we can do that. And if you think we haven't thought of 100 times more "easy" stuff to implement than you have, all I can do is shake my head.

Jason, quick and honest question here. I've been dealing with this stuff, in one form or another, every day of my life for the last 10 years as a full time job. I've worked on dozens of games and produced some of the finest ones ever seen on the market (not that I am taking credit for a huge team effort!!). What have you been doing for the last 10 years? How does my experience stack up against yours? And against Charles?

Steve


Jason,

Now we are getting somewhere...

No, they only require human intelligence to program, or even to instruct a program written with sufficient generality (parameters).
Programmer intelligence/skill + loads of time are givens when talking about AI programming. The AI can not do anything that it isn't programmed to do, which is also a given. Or should be...

In the specific case you mentioned I was talking about real army hierarchy. You seem to think this means everything a real army actually does to deal with confusion. It doesn't. The concept involved is "hierarchy". Which is a purely formal structure. It means nesting - squads inside platoons inside companies.

Each squad does not make decisions independently and expect the result to add up to a company. Instead, the company makes one simple decision first and sends it as a direction to platoons. Platoons are directed essentially by their leaders going somewhere themselves. Squads orient on those leaders according to a small number of possible schemes.

I understood this concept because it is so very basic. And that is, largely, what CM's AI already does. The fact that it doesn't work as well as you wish it to (or as Charles wishes it to) does not indicate the absence of such a system. Rather, the implementation of a Human like intelligence to make such a system function is what the issue really is all about.

Can humans adjust all of that with many layers of fine tuning? Sure. But that is not what is required. All that is required is the same hierarchical decision order. That alone will produce the ordering effect the formal structure itself supplies. You can order subroutines inside each other in exactly the same way. It is done all the time, because the same scheme of hierarchical nesting sorts out many of the complexities of code.
Without any AI written to be "smart" about how signals are transmitted within, up, and down the hierarchical system, the system itself does nothing. That is the part of AI programming that requires, ultimately, Human like intelligence. Anything less than that will produce results that will be seen as "dumb AI". Obviously there are degrees of coding to produce degrees of positive outcomes, but ultimately the better you expect the AI to perform, the more Human like you are expecting the AI to be.

Right now the CM AI gives orders to every unit on its side. Without any change in how those orders are given (though those also could be improved etc), they can be given to only the platoon HQs, first. While units under them seek not flags via covered routes, but e.g. cover within 50m of their HQ's waypoint. After the HQs have all plotted.
As I understand it, that is what the AI attempts to do. But there are other factors that come into play that can, and do, screw that all up. And yes, I am talking about deployment too. If you go into the editor and buy troops you can see that the code is inherently capable of keeping units together. But when the AI is expected to make a logical deployment in order to achieve a task, that is where the shortcomings appear. Shortcomings that can not be corrected by the system understanding military hierarchy because... it already does understand that.

This is a conceptual change from the programmer. But it does not require more "human like thinking" from the AI.
You're wrong.

Of course I've written programs and complicated ones. What is needed here is not however greater programming virtuosity - I'm sure Charles is superlative in that respect - but more tactical analysis pre-applied to the AI scheme.
Ah... in one paragraph I learn that you do actually code, yet do not understand what we are up against, AND you somehow can hold Charles in high regard and pee on his efforts without seeing a contradiction. A man of contradictions.

I think in the long run the only way to get that to be truly outstanding is to have a lot of time spent iterating on it by a lot of users, supplying said time gratis. So the bottleneck step is to get an AI scheme general enough they could modify its tactical analysis, in effect.
This can not happen unless a system is coded to allow it to happen. That is in itself a massive undertaking. I am sure you don't agree, but that is irrelevant since you aren't the one that will have to do it. We won't ever attempt it.

The human thinking involved would come from users. They will notice what the AI is doing wrong, and find the more sensible things it could try. But they need dials in their hands to do this. Building those dials is work, no question. Not impossible work, and not demanding deep blueness or human thinking, but work.
You are talking about something that is not possible for us to create. Not within the constraints of commercial realities, nor within the constraints of computer hardware as we know it. At least not for the type of games we make. For other games (Bolo from olden days comes to mind), perhaps. But not for a tactical game of this depth and diversity. It can't be done in the near future and won't be done. Not by us, not by anybody. You can quote me on that :D

Steve


Frankly, I should have thought the future lay in multi-player. Look at WW2 Online and other games (that suck, admittedly) that rely on multi-player.

You know, most of the cardboard wargames DEMANDED two players. I don't see why computer wargames should be any different.

I'd even be so bold as to say abandon the AI concept altogether, save the time you're wasting on development, and continue to groom the Tac AI and perhaps a true multi-multi-player environment (ie team play vs another human team).


"attempts to retain unit cohesion."

Of course - not very successfully. But this was supposedly addressed to a previous point of mine about HQ spread. HQ spread and unit cohesion are two different subjects. Though getting HQ spread correct ought to help with unit cohesion, and especially help avoid overstacking without sacrificing unit cohesion or cover.

Which was my point. The AI does not try to keep HQs spread. You can readily test this to confirm it for yourself. Trying to keep units under command is not the same issue. HQs don't experience command. If, however, you first place the HQs and keep them properly spread - and even in some semblance of a formation, though that is gravy - then the trade-offs between routines that maintain cohesion and other priorities will in practice prove much easier for it to solve.

This is easy to see. If HQs pick routes to the objectives based on cover they will often pick the same route. If they have no "self avoidance" or the importance of that factor is too low, then they will cluster along that route. If all of their units attempt to maintain cohesion afterward, entire companies will try to fit along routes 20-40m wide. When they don't, they will face bad choices - get on top of each other, lose cover, or go out of command.

That they can't solve that problem well is understandable. It was made overly difficult for them by skipping a step humans readily understand and implement - starting with HQs well spread. Humans implicitly do not look at HQs as small 2-6 man units. They (rightly) look at them as 40 by 40 blocks. When an HQ has "exclusive right" to an area around it and thus to the cover in that area, its subordinates can more readily keep station without running into other priorities that tend to disperse them.

Where routines attempt to maintain unit cohesion but run into other priorities, however, is a perfect example of the sort of problem readily addressed by user tuning and iteration, and not by just doing something, leaving it there, and hoping it is good enough. You put weights on the factors. But instead of hard coding some averaging of them or selection from among them into a path setter routine, you allow those weights to be tweaked. Not by you. If the user dials "maintain cohesion" to the ceiling and units run out in the open as a result, he can decide if that is worse or better behavior than staying in cover but losing cohesion (e.g.).
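
Concretely, the sort of thing I mean by dials (made-up names and defaults):

[code]
# Invented sketch: the path scorer reads its weights from a user-editable table
# instead of hard-coded constants, so users can iterate on the trade-offs themselves.
AI_DIALS = {
    'maintain_cohesion': 1.0,   # 0 = ignore the HQ, 2 = hug the HQ even in the open
    'seek_cover':        1.0,
}

def waypoint_score(cover_value, dist_to_hq, dials=AI_DIALS):
    return (dials['seek_cover'] * cover_value
            - dials['maintain_cohesion'] * dist_to_hq / 50.0)
[/code]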

As for the spin, brickbats, credibility feathers, etc, it is meaningless noise, devoid of any actual intellectual content. As for the reason to improve the AI, it ought to have nothing to do with complaints and everything to do with simply wanting to improve the game. If it does, who cares whether you aren't petted for it? If it doesn't, who cares if you are?

You said "please do". I don't know if that was meant to be rhetorical but understanding it charitably I will assume not. Noteworthy AI problems from one testbed tonight -

1. With FOW off the AI knows where enemies are at set up and likes to use prep fire at them because of it. But it does not select aim points on top of enemies, only in the same general area. I had a 4 tube German 75 FO target a DP LMG (foxholed, in scattered trees, on a crest) as the closest known enemy, for a prep barrage. There was nothing else near the DP. The aim point was 35m away, about 20m right and a bit more than that long. As a result, only 3 outlier shells had any effect and the maximum achieved was "pinned". By the end of the 3rd minute it was "shaken" and 30 seconds later there was no effect.

What is a better aim point selection procedure? Test tile center placements. Maximize known enemies in an area 20m to either side and 50m long or short. Among those equal (or close) on that score (still tile centers), pick the one that maximizes covered terrain under the same footprint. Then within the resulting tile, pick the point closest to a fully known enemy if there is one. It might also help to have a "should fire" test routine that effectively asks if there is enough there to justify the barrage being ordered (I suggested such a "point adding" system above).
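
As a sketch (an invented data layout, and an invented "worth it" threshold):

[code]
# Invented sketch of the aim point procedure: pick the tile centre whose 40m x 100m
# footprint covers the most known enemies, break ties on covered terrain under it,
# then snap to the nearest fully identified contact. min_targets is a made-up threshold.
def pick_aim_point(candidate_tiles, known_enemies, min_targets=2):
    def in_footprint(tile, e):
        return abs(e['x'] - tile['x']) <= 20 and abs(e['y'] - tile['y']) <= 50

    def key(tile):
        hits = sum(1 for e in known_enemies if in_footprint(tile, e))
        return (hits, tile.get('covered_fraction', 0.0))

    best = max(candidate_tiles, key=key)
    hits = [e for e in known_enemies if in_footprint(best, e)]
    if len(hits) < min_targets:           # the "should fire" test: not worth a barrage
        return None
    full = [e for e in hits if e.get('fully_identified')] or hits
    aim = min(full, key=lambda e: (e['x'] - best['x']) ** 2 + (e['y'] - best['y']) ** 2)
    return aim['x'], aim['y']
[/code]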

2. In an HQ only (for its force) probe, the AI sent half of them along a single covered route to the nearest flag. One went by another covered route to another flag almost as close. A last HQ waited 3 minutes then picked between these, selecting the second. The clustering along the routes at the same time resulted in 3 HQs in the same building at times.

The lead HQ often reversed when shot at, without getting beyond "alerted" morale, but that was probably just the "cover panic" routine. They also spent too much time running under fire, but that is a movement selection difficulty (would have been fine in CMBO, but "run" is considerably riskier in BB). When one turned back the others kept going, even after orders phases intervened. No interval keeping in other words.

Since fire tends to check the foremost, the result is that troops rapidly pile up along covered routes the instant a block is encountered along that route. Wire readily does the same thing, even in locations easy to spot at set up (the familiar "distant obstacle" path finding problem, particularly acute to "steppers"). They are then at the mercy of HE weapons.

How do humans deal with this problem tactically? Well, they may remember Murphy's adage, "the easy way is always mined". But the main way is by using flexible, more extended formations - including interval keeping aka self-avoidance - and updating their sense of the difficulty of a route. That is, the second platoon HQ does not want to get within 30m of the first. So it does not pile into the same tile of woods. They "read" the stopped HQ ahead of the second as "blocking terrain", much as the AI already clearly reads "open ground".

So if the first HQ is not checked the route is used, but by platoons in sequence maintaining an interval. A single arty strike thus only catches one of them. When something stops the first HQ (mentally screaming "kill sack!" to a human), the following platoon is some ways behind. It avoids the blocked unit ahead of it. So even if the terrain to either side is not as enticing, it nevertheless heads that way - to one side or the other. By doing so it also "deploys", broadening the frontage of the stopped column.

What is needed to get the AI able to do something approaching this, that is not already there? An HQ (only) should read terrain within 40m of another friendly HQ as though it were "poor cover". That plus station keeping (that can be "dialed up" in importance) for the subordinates of HQs, would make a big difference to this common AI blunder.
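
Something like this, as a sketch, with the 40m radius as the only real content:

[code]
# Invented sketch: an HQ-only terrain adjustment so HQs self-avoid at route-picking time.
def effective_cover(tile_cover, pos, friendly_hq_positions, is_hq, radius=40.0):
    if is_hq:
        for p in friendly_hq_positions:
            if ((pos[0] - p[0]) ** 2 + (pos[1] - p[1]) ** 2) ** 0.5 < radius:
                return min(tile_cover, 0.1)   # read ground near another HQ as "poor cover"
    return tile_cover
[/code]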

Incidentally, the same sort of "adjusted terrain reading" approach could be applied to the front HQ, if it "reads" subordinates dropping in morale over the previous minute as "evidence" that it is in poor terrain, regardless of what the percent exposed numbers tell it, while areas where morale state is steady or improving - especially if own-side ammo counters are going down - would be read as useful terrain.
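
A minimal sketch of that kind of morale-driven terrain reading, in Python. The morale scale, the history dictionaries, and the adjustment sizes are invented for illustration; only the idea (morale trend overriding the static exposure number) comes from the paragraph above.

# Sketch: let an HQ adjust its read of its current position from the
# morale trend of its subordinates over the last minute, instead of
# trusting the tile's static %exposed number alone.

MORALE_ORDER = ["broken", "panicked", "shaken", "pinned",
                "cautious", "alerted", "ok"]          # worst .. best (assumed scale)

def morale_value(state):
    return MORALE_ORDER.index(state)

def adjusted_exposure(base_exposure, morale_last_minute, morale_now, ammo_spent):
    """base_exposure: static %exposed of the HQ's tile (0..100).
    morale_*: dicts of unit_id -> morale state for the HQ's subordinates.
    ammo_spent: rounds the platoon fired this minute."""
    exposure = float(base_exposure)
    for uid, now in morale_now.items():
        before = morale_last_minute.get(uid, now)
        drop = morale_value(before) - morale_value(now)
        if drop > 0:
            exposure += 15.0 * drop       # morale falling: read the spot as poor terrain
        elif ammo_spent > 0:
            exposure -= 5.0               # steady and shooting back: read it as useful terrain
    return max(0.0, min(100.0, exposure))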

3. When fired at along a route, some units will orient on the shooter rather than on the flag they were headed for. This is useful behavior. But sometimes they then just walk straight toward said enemy until stopped by fire. I've seen this mean stepping out of 14% exposure woods into 70% exposure open ground when already 65m away from an unsuppressed enemy. Humans don't do this. What do humans know that the AI doesn't, in this case?

They know that good cover within 70m or 100m (depending on infantry type) of an enemy is about as good as it gets, and that the last portion of ground is really taken by fire not by just walking onto it. It is something like "when to sit and shoot" that is off. This varies by unit type. Similar problems are seen with the dreaded mortar charge, and the rush of HMGs for the flags in so many AI counterattacks. Implicitly, the AI is always thinking about where to go. Humans think far more in terms of where they can shoot from.

It seems to me the main way to address this is to have something like a sense of proper range for various unit types. (Ideally, dial-able in an AI editor). It can be fuzzy and it need not be the only factor considered. But a mortar that can see enemy outside minimum range and within 500m - or has an HQ with a command line that can - should not be thinking about moving. An HMG or sniper that can see enemy in the open, or any enemy within 250m, and is in cover itself, should not be thinking about moving. (How do I know if I can see things? Use the target next routine. If it returns a "stick", flag 1, else 0).

With squad infantry it should be more inclined to move, but for LMG and rifle types a 100m threshold, and for SMG types 50m, should have similar effects. (Pioneers may want 25m). Close terrain may make it necessary to get even closer to get LOS - that is fine. This factor would still recommend sitting and shooting as long as anyone can be seen, which e.g. in a woods firefight is often the right idea.

Twenty minutes of testing. If you had a hundred users doing that several times a week for 6 months - and they had any kind of AI dials in their hands - you'd get a much better AI at the end of it. You'd get the use made of the dials for free.

[ October 30, 2003, 09:52 PM: Message edited by: JasonC ]


[AI attempts cohesion] "Of course - not very successfully."
Of course it doesn't. It is the hardest thing to program for any system, not to mention a vastly complex one like CM. And as pointed out above, trained men with tons of internal and external resources to draw from screw this up more often than not. To expect anything even close to this is to expect human-like AI. There is absolutely no way to argue that this is not true. One cannot have human-like coordination without mimicking human-like intelligence. That is the one thing that vexes any AI programmer.

"But this was supposedly addressed to a previous point of mine about HQ spread."
It is not, because one must first address how to keep/adjust spread in the first place. On a billiard table battlefield, with a predictable environment and predictable outcome... piece of cake to do this. But to come up with a system that works equally well for all groupings in all situations... not at all easy to do. Obviously this is something that can be improved upon over what CM is now, but only with a lot of programming time (and ONLY programming time).

"Which was my point. The AI does not try to keep HQs spread. You can readily test this to confirm it for yourself."
You are again misreading what is happening. The AI is trying to keep units spaced and in command. It is just damned difficult to do in the dynamic settings that CM proposes. And yes, even Setup is a dynamic experience for the AI, as it is for the human player.

"Trying to keep units under command is not the same issue. HQs don't experience command. If, however, you first place the HQs and keep them properly spread - and even in some semblance of a formation, though that is gravy - then the trade-off routines to maintain cohesion along with other priorities will in practice prove much easier for it to solve."
Agreed. But you dismiss the difficulty of getting a formation to be properly spread and remain that way. AI is all about steps. One cannot move on to the next step until the previous one is completed. The main problem with AI programming is that so much time is spent on the most fundamental steps that there is not enough remaining time to do the following steps in as much detail. The scary part is that the fundamental steps are actually the easiest to program from a conceptual standpoint, just time-consuming and very plentiful.

It should be obvious looking at CM that this is the case. You cannot have the Strat AI moving units around until the units know how to move. And that is a rather big issue to tackle right there (path finding, balancing environmental factors, etc.). Not to mention how units should behave in dozens of different circumstances. So most of CM's AI programming time went into the TacAI. If that sucked then even multi-player games would suck, which underscores the importance of doing that part best. And this has to be done for each of the different unit types, such as coding support weapons to behave one way and armored vehicles another. Then the medium level AI needs to be programmed to understand that each type of unit is different from the others and that they inherently need to be used in different ways (HMGs tend to find a good spot and fire, Squads move around, Tanks like to engage other tanks, etc.). Then there is the Strat AI, which is its own kettle of fish.

You can see that there is a logical priorities list here. The problem is that the highest set of priorities takes up the most amount of time, leaving little time left for the other systems. And on top of this there are hardware and software issues rearing their ugly heads, not to mention the inherent difficulty of programming coordination, anticipation, reaction, action, etc.

"This is easy to see. If HQs pick routes to the objectives based on cover they will often pick the same route. If they have no "self-avoidance", or the importance of that factor is too low, then they will cluster along that route."
This was a major problem with Charles' first implementation of the Strat AI. Units stayed together, but by the nature of the logic tended to run into each other and cluster. This has NOTHING to do with weights, everything to do with routines. Charles had to largely abandon the code and rewrite everything. The outcome was something that was inherently better because it was inherently less clustered. And that is what we have today. And no amount of weights adjustments will fix that because the routines can only do so much. And since Charles wrote the routines to mimic standard military concepts, it is obvious that the problem does not lie with not understanding the basic concepts. They are so evident that probably most people on this Forum could write them down successfully. Implementation is where it is at.

"Where there are routines that attempt to maintain unit cohesion but run into other priorities, however, is a perfect example of the sort of problem readily addressed by user tuning and iteration, and not by just doing something and leaving it there and hoping it is good enough. You put weights on the factors. But instead of hard coding some averaging of them or selection from among them into a path setter routine, you allow those weights to be tweaked. Not by you. If the user dials "maintain cohesion" to the ceiling and units run out in the open as a result, he can decide if that is worse or better behavior than staying in cover but losing cohesion (e.g.)."
Once again this is an oversimplification and a total divorce from the reality of programming. If you are a programmer, you should understand that weights don't make things happen... routines do. The way it works in the real world is that the playtesters give feedback in human terms ("my TCs button up too easily") and Charles seeks to look at the factors to figure out why and how things might be tweaked without screwing up other things. If the routines can't be successfully tweaked to your level of pleasure by the guy who wrote them, a guy with a lot of military and programming knowledge, how can you seriously expect a hack end user to come up with better values?

"As for the spin, brickbats, credibility feathers, etc, it is meaningless noise, devoid of any actual intellectual content."
Yes, it is meaningless to the core of this discussion. However, as long as you insist on bringing in your own (in the form of being a pompous, deaf, condescending git in order to get attention and make yourself feel important) these things will show up. Notice that when you drop the attitude your ideas are discussed without all the extra baggage. I pride myself on answering posts in the spirit they were posted in. If you don't like how I respond to your posts, examine your attitude and motivations before attempting to shuffle the blame onto me.

"As for the reason to improve the AI, it ought to have nothing to do with complaints and everything to do with simply wanting to improve the game."
True. But if the person/s making the suggestions are clueless AND insulting AND unreasonable... that pretty much destroys any chance of having a productive conversation before it has even started. I am constantly amazed at how I still try and answer the basic nature of the question/suggestion even with all the baggage that surrounds it. I mean, after all I am still answering your posts with substance, am I not?

"1. With FOW off the AI knows where enemies are at set up and likes to use prep fire at them because of it. But it does not select aim points on top of enemies, only in the same general area. I had a 4 tube German 75 FO target a DP LMG (foxholed, in scattered trees, on a crest) as the closest known enemy, for a prep barrage. There was nothing else near the DP. The aim point was 35m away, about 20m right and a bit more than that long. As a result, only 3 outlier shells had any effect and the maximum achieved was "pinned". By the end of the 3rd minute it was "shaken" and 30 seconds later there was no effect.

What is a better aim point selection procedure? Test tile center placements. Maximize known enemies in an area 20m to either side and 50m long or short. Among those equal (or close) on that score (still tile centers), pick the one that maximizes covered terrain under the same footprint. Then within the resulting tile, pick the point closest to a fully known enemy if there is one. It might also help to have a "should fire" test routine that effectively asks if there is enough there to justify the barrage being ordered (I suggested such a "point adding" system above)."

All of this requires a lot of coding. And for every 10 things you have identified, there are probably 100 you have not. That is the maddening thing about AI programming for a game like CM. There are just so many things to identify, so little time to account for them in code.

BTW, the AI does most of what you suggest it should. It just doesn't do it to the degree you want it to.

"How do humans deal with this problem tactically? Well, they may remember Murphy's adage, "the easy way is always mined". But the main way is by using flexible, more extended formations - including interval keeping, aka self-avoidance - and updating their sense of the difficulty of a route. That is, the second platoon HQ does not want to get within 30m of the first. So it does not pile into the same tile of woods. It "reads" the stopped HQ ahead of it as "blocking terrain", much as the AI already clearly reads "open ground"."
All of this MIGHT be true, but the AI can't do squat about any of this unless it is specifically programmed to do it. It is, to some extent, but because of the real world limitations of coding such behavior it is not uncommon for it to screw up. However, sometimes the code works as intended for the given circumstances. Of course, these are dismissed by you as "luck" when in fact it is the opposite (i.e. the bad stuff is clever programming failing to do what it was meant to do, the good stuff is clever programming doing what it was meant to do).

"What is needed to get the AI able to do something approaching this, that is not already there?"
Human-like intelligence transformed into robust routines embedded within a much larger AI system that won't do anything to undermine the behavior. In other words, something that is not practical.

"An HQ (only) should read terrain within 40m of another friendly HQ as though it were "poor cover". That plus station keeping (that can be "dialed up" in importance) for the subordinates of HQs, would make a big difference to this common AI blunder."
Much easier said than done. You are talking about a single example of the much larger "situation awareness" issues of coordination. If Charles could program stuff like this as well as you expect, he could make millions writing AI instead of peanuts making wargames.

"They know that good cover within 70m or 100m (depending on infantry type) of an enemy is about as good as it gets, and that the last portion of ground is really taken by fire not by just walking onto it. It is something like "when to sit and shoot" that is off."
This is probably the one feature of the AI that Charles has spent the most amount of time coding, tweaking, recoding, tweaking, recoding, etc. It is also one of the most difficult things to do in the current system because units do not have any memory in this regard. It also runs afoul of the multitude of different conditions because things like terrain types, weather, type of unit, Morale, Experience, level of threat, cross fire, etc. etc. all come into play.

Like your other examples, this is a piece of cake to observe and point out. The "obvious and simple" suggestions you make might be obvious, but they aren't simple to code. If they were, it would have been done back in CMBO Beta since we identified these same issues with CMBO Alpha. Since it is obvious that you aren't telling us anything we didn't know 4 years ago, this leaves us with two possibilities:

1. Charles is a hack programmer or one that didn't try hard enough to address the problem.

2. What might look simple is actually not, and you guys are lucky that it is as good as it is (i.e. compared to other wargames CM's AI is outstanding).

The limited time is a given so I am not even including that. And there you have it... bottom line choice to make is between these two possibilities. Which one do you feel applies?

Steve


Besides trying to make the AI smarter, you can try to make the human dumber.

For example, make something like Franko's Ironman rules that shuts off the 3-9 level views and zoom for units without binoculars or optics. Whenever I play like that I spend half the time going "where the hell is the enemy? where the hell am I?"

Another thing already mentioned a couple of times above is randomness: make AIs that attack "at all costs" or go for "minimal losses".

Another possibility is the "mindf--- the player" option.

I had one scenario where I made the Germans, with tons of infantry and a couple dozen tanks, attack a small platoon, while lightly armed soldiers and unarmoured vehicles with almost no antitank weapons in a nearby town were supposed to evacuate over a bridge and then blow it up. Most of the playtesters found it fun and managed to get their guys in town over the river. One tester, though, decided to just stick it out, made some moving ambushes, and made a stand in town, where the AI failed to push hard enough.

So you could try to fool or lie to the player about enemy abilities on occasion.


Wow, with even some scenario designers chipping in here, this is becoming a big discussion! It is starting to take on a somewhat nasty tone between Steve and JasonC, however, so I think I am going to pull a Romania and step out of the battle. I do like JasonC's suggestions, but as Steve has plainly said, they are the ones who understand what can and can't be done with the code in its current form, and JasonC doesn't. Not knowing anything about it myself, I think JasonC has underestimated the complexity of the game. For example, take the idea of analysing the map grid for threat zones, good LOS zones, etc. That is fine, but one 20m x 20m tile is not uniform; there might be one good position on it with good LOS plus cover, while another part of it might obscure LOS altogether. So for the AI to really make good choices in this fashion it would have to explore all angles of LOS from every single metre in the setup zone, which is a huge task.

I don't like people's attitude that this should be multiplayer only. If it's going to be that way you might as well make it an MMORPG! I like to play PBEM, but I don't like waiting a month for the result. It is good to have the facility to set up a quick game and finish it in one evening. Yes, I know there is TCP/IP mode, but I don't like having time pressure, and the good thing about the AI is that it doesn't complain about doing the orders turn slowly!

Anyway, I have one final question in this discussion. The author of 21st Red Army Counterattacks has blamed himself for the AI's behaviour, rather than the AI itself. So my question is, which scenario included on the CD would people recommend as a good challenge against the AI? And I'm not talking about manufacturing a challenge by adding +50% or more and +3 experience; I mean a maximum of +25% and +1, which is usually what I give it. Do any of the operations offer a really worthwhile challenge? I particularly like operations due to their epic nature, but I usually find that the AI has capitulated after about 2 battles and then it's not fun anymore.

In conclusion, I hope CMx2 shows a much better AI. Indeed, the whole game should be revolutionary, just as CMBO was.

Haoh


"One can not have Human like coordination without mimicing Human like intelligence"

Um, a strawman, and the same one. Every AI actually in existence fails to mimic human-like intelligence. But if the situation it is dealing with is simple enough, or involves rules that have been elaborately axiomatized enough by prior human effort, any number of them do manage to produce such coordination. Station keeping isn't exactly reserved to human-like intelligence. Birds do it. Swarm models do it. In 2D and with fixed characteristic scales it is easier. Real humans also don't get their men in nice, fully located 10-man sacks that never spill. Nor is the command span huge - it is typically around 5, with only 2 levels of hierarchy.

"one must first address how to keep/adjust spread in the first place."

No. It is easier to keep or adjust spread in the first place if you first make that problem simpler for the base level routines, in practice, by making them solve only the simpler problem of "right around my HQ", instead of the global problem of "all units". Which, if you force HQ separation, will solve the other problem pretty well.

As for how to force the HQ separation, that is much easier than trying to force all separations with one routine. Because many of the other factors that matter for 2 squads do not matter for 2 HQs. HQs are not the primary combat elements. Their primary role is coordination. A routine can force their separation much more strongly than it could force squads to separate, without resulting in tactical problems. There are also few HQs on the map, and fewer still in any given area if the separation works at all. So issues like cover scarcity don't arise with anything like the same urgency for HQs alone.

Coding wise, you already have a path finding routine that favors better cover over worse. You just fool each HQ (only) into thinking the areas right around the other HQs, for the purposes of its own movement only, look exposed. It can then still go through such places if it looks urgent enough to get somewhere it needs to be - as it will cross open ground now. But it will strongly prefer a "covered route" - which equals HQ spread.

Then instead of needing to tell all units "stay in command but stay spread", you've already got the basic spreading just from "stay in command". Because stay in command now forces units to follow HQs that avoid each other. So a much weaker "stay spread" effect can do the job adequately. If one platoon happens to deploy tightly to squeeze through a spot where cover is only 20m wide, that is just fine. Humans do that too. They just don't put a whole company there.

"to come up with a system that works equally well for all groupings in all situations"

If you try to get one routine to work for all units, without any bias to them from their unit type, sure. But there is no reason one needs this. I've given you the rest of such a system, as an adjusted use of the path and terrain behavior you already have.

I said "The AI does not try to keep HQs spread". Fact. I said HQs. It is a fact. You say in response "You are again misreading what is happening. The AI is trying to keep ***units*** spaced ***and*** in command." I am not misreading what is happening. Keeping units spaced and keeping HQs spread are two different things.

Units spaced means wanting some minimum distance between any one unit and another. HQs do that as much as the next unit. But it means they try to stay 10m apart. It treats the HQ as any ordinary unit, as though its size were "4 men". This does nothing to address the actual coordination problem. Then, when units try to stay in command of HQs that can be, and often are, only 10m away from each other, you need to pack 6-8 squads within about one command radius to do so. If you are to make any use of cover as well, the result is a giant pile up.

HQs spread means treat HQs as giant sized blocks, not as individual units. Giant sized blocks that generally do not overlap, or at the most do so over only a small portion of their total area. Aka no HQ within 40m of another HQ. This is not the same as no squad within 10m of another squad. The area involved is 16 times as large. And it can override other considerations to a much higher degree, because it only applies to a small portion of units, and those ones not critical for combat.

If you hard coded "no squad within 10m of another", you'd get guys popping out into the 70% street to avoid overstacking inside 10% cover and getting shot to heck. They have to be close to be in command of one HQ. They have to adapt that closeness to the use of cover. So that one has to be "bendable", as well as a much smaller number. But HQ spread just does not. An HQ does not go out of command because it is too far from another HQ. And if HQs are spread, the other problem becomes dramatically easier to solve in practice, with the same code routine. Because a gazillion other units from the other 8 HQs in the same body of woods aren't all jostling like a rugby scrum for the 14% cover.

I am not "dismissing" the difficulty of getting formations, I am telling you how you can do it, more effectively than you are doing it now. Using only the path routines you have now, just "faking" the HQs (only) into thinking places near other HQs are "over-exposed". Once you have that, you will probably be able to turn up the strength with which subordinates are attracted to their HQ (aka, they treat locations near their HQ as "less exposed"). If you did only that now units would pile up too much. But with the HQs "repelling", you can afford to have more subordinate "attraction" and still get more spread.

"AI is all about steps."

The problems to be solved by the lower level, atomic steps are artificially harder for it if you don't provide some of the structure top down. You are making it harder than it needs to be by working mostly from the bottom end. Improve the upper ones and even less than perfect lower ones will give improved overall performance. Applied to the present context, it is much easier to keep in command without stepping into the open if your HQ "owns" 3600 square meters of real estate.

"The problem is that the highest set of priorities takes up the most amount of time"

The way to get time is to get bodies banging on it for you. You can get time in industrial quantities that way. It requires some pump priming, in the form of programming time spent making weights tweakable. Then you step back, and time avalanches in, addressing especially the higher reaches of the AI. The implementation of those tweaks in the lower levels is just like the input of terrain or of flags or of "command attraction".

"by the nature of the logic tended to run into each other and cluster. This has NOTHING to do with weights, everything to do with routines."

Path finding will find the same solution over and over if it is presented the same problem. Don't present it the same problem. HQ A "sees" a different map than HQs B C and D. One where those are "obstacles", and A itself is not. Thus the view of the map is different for each of them. Then, any unit subordinate to A sees A's planned waypoint as an objective, like a flag. It does not see B as an objective. Every unit therefore sees a different problem. They will find different solutions. The terrain portion of the input will be the same, yes. But that won't be enough to override the differences forced by HQ spread.

"no amount of weights adjustments will fix that because the routines can only do so much"

I just told you how to make them do a heck of a lot more. Now, when you set it up that way, you can also put dials on all of the above. Terrain, command attraction to own HQ, HQ to HQ repulsion, enemies as obstacles to avoid or flags to attack, etc. You can then implement custom intermediate AIs as clusters of these parameters, to test. In addition, you can use them for some features of strategic AI variation. E.g. in this flank envelopment plan, enemies are obstacles and HQ-HQ repulsion is high.

"the problem does not lie with not understanding the basic concepts. They are so evident that probably most people on this Forum could write them down"

So write them down. It'd be a useful exercise. Then compare the existing performance of the AI. When you write them down, you are performing an essential intermediate step. You are reducing a lot of situational judgment - hard for any computer - to an abstract rule - not hard for a computer.

"weights don't make things happen... routines do."

Weights are elements of routines that involve alternatives. For example, when the path finding routine compares path A that crosses this much open ground, with path B which instead spends that distance in scattered trees, it prefers B. Why? Because it has added up exposure numbers or something similar. It sums things. That is how a routine prefers one thing to another. Some measure of choice A is higher than some measure of choice B. Well, you can make it favor A over B by just hard coding "always do A". But you don't have to. You can just have factor X count twice in the A-B choice instead of once (for example).
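
To make "factor X counts twice instead of once" concrete, here is a sketch in Python of a route comparison where the trade-off weights live in a user-editable table rather than inside the routine. The factor names, numbers, and preset are invented for illustration; nothing here is CM's actual internals.

# Sketch: the routine stays fixed; only the dial table changes.
# Each candidate route arrives with raw factor totals already measured
# along the path (how much exposure it crosses, how long it is, etc.).

DEFAULT_DIALS = {
    "exposure":     1.0,   # penalty per point of %exposed crossed
    "distance":     0.2,   # penalty per metre of path length
    "cohesion":     1.0,   # penalty per metre spent out of command
    "hq_repulsion": 1.0,   # penalty per metre spent within 40m of another HQ
}

def route_cost(route, dials):
    return sum(dials[k] * route.get(k, 0.0) for k in dials)

def pick_route(routes, dials=DEFAULT_DIALS):
    return min(routes, key=lambda r: route_cost(r, dials))

# An "AI personality" is then just a named cluster of dial settings,
# e.g. a cautious flanking plan might ship with:
FLANKING_PRESET = dict(DEFAULT_DIALS, exposure=2.0, hq_repulsion=1.5)

if __name__ == "__main__":
    covered = {"exposure": 120, "distance": 400}
    direct  = {"exposure": 300, "distance": 250}
    print(pick_route([covered, direct]) is covered)                    # True: covered route wins
    low_exposure_dial = dict(DEFAULT_DIALS, exposure=0.1)
    print(pick_route([covered, direct], low_exposure_dial) is direct)  # True: one dial flips the choice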

"If the routines can't be sucessfully tweaked to your level of pleasure by the guy who wrote them, a guy with a lot of military and programming knowledge, how can you seriously expect a hack end user to come up with better values?"

Because the hack user has a lot more time. It is the same reason quality assurance bangs on things. It is the same reason there are betas. If only 100 control freak grogs spend only 4 hours a week for only 6 months fiddling with AI editor settings, they've already clocked more time on it than Charles would in 5 ordinary man-years doing nothing but that. Which he isn't going to do. Look at the posting volume in here. Look at the size of mod libraries. Look at the scenario depot. They'll clock the time if you give them the dials. And they will be telling their own TCs to stay buttoned instead of bothering Charles about it.

"Yes, it is meaningless to the core of this discussion."

It is meaningless in the vastness of eternity too. It is meaningless to me. If it isn't meaningless to you, it ought to be and that is your problem.

"examine your attitude and motivations"

My attitude is the AI can't play very well and I regularly kick it to the curb, and it'd be great if it were stronger, so let's do it. Your attitude, I will let you worry about or describe.

"the suggestions are clueless"

Then you can say why. But they aren't. So you just keep saying they are clueless.

"AND insulting"

Show me anything insulting I've said on this thread. Insulting to whom?

"destroys any chance of having a productive conversation"

No, it really doesn't. I've made this a productive discussion despite your best efforts. By simply staying on the substance and ignoring the atmospherics and gallery playing.

"I am still answering your posts with substance"

In about three instances, yes. I have answered on the substance every time, with productive suggestions in every case, and with plenty of additional substance as it occurs to me. You haven't conveyed to me any real understanding of the CM AI and how it works - which is not to say I can't see those things myself readily enough. I don't know if you can. If I had to bet, I'd guess you'd have to ask Charles and that your basic role here is pure spin. I expect my substantive suggestions will be better received by other ears than yours, and that actually understanding them is not really part of your job. Fine by me, I'm a thick skinned creature and I don't really care one way or the other.

"All of this requires a lot of coding"

You have a procedure now that decides to fire prep fire missions and picks the aim points. Proof - your AI fires prep fires at aim points. QED. It just doesn't pick particularly good aim points (I told you in the last post how to pick a better one), and sometimes fires those prep fires at disproportionate targets.

I know that there is no routine saying in effect "don't waste your shells on that unit - it isn't valuable enough". How do I know this? Because I just told you, it fires prep fires with FOW off at a single DP LMG. There isn't any unit in the game less valuable than a DP LMG, except a rowboat. Ergo, it'll fire at anything. It is a standing joke that you can soak off AI prep fire with a rowboat on a mountaintop. Wherever would you put such a routine? Well, I mentioned that you have a routine that decides to fire prep fires and picks the aim points, right? So, right there, add up what is known to be under the aim point.
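
A sketch of where such a check could live, in Python: sum the value of what is known to be under the aim point's footprint and compare it to the value of the ammunition about to be spent. The point values, the shell-value figure, and the function names are all invented for illustration.

# Sketch: a "should fire" gate for prep barrages. A lone DP LMG under
# the footprint should not be worth a 4-tube 75mm battery's shells.

UNIT_VALUE = {   # invented, rough "worth killing" scores
    "lmg": 10, "hmg": 25, "squad": 35, "gun": 45, "tank": 120,
}

def worth_firing(units_under_footprint, tubes, shells_per_tube, value_per_shell=1.5):
    target_value = sum(UNIT_VALUE.get(u, 0) for u in units_under_footprint)
    ammo_value = tubes * shells_per_tube * value_per_shell
    return target_value >= ammo_value

print(worth_firing(["lmg"], tubes=4, shells_per_tube=25))     # False: hold the barrage
print(worth_firing(["squad"] * 4 + ["gun"], 4, 25))           # True: a position worth shelling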

"for every 10 things you have identified, there are probably 100 you have not."

Perhaps, at the level of the enemy avoidance behavior of MG main armament wheeled scout cars or something. But if those sorts of things were the only things the AI did wrong you'd have a lot better AI than you have now. Perfection as an aim is laudable; perfection as unattainable, used as an excuse not to improve anything, is not. If you can give an AI editor a hundred dials, knock yourselves out. But you won't need to in order to get an improved AI. You'd need to spend some programming time on it, to make a general enough structure so the dials tweaked things that actually matter. Then you'd have to wait 3 months or so as grumblers banged on it all. You might have to revisit a bit or two in a patch, even (the horror, the horror). Three months further on you'd have a much better AI on a permanently improving path.

"the AI can't do squat about any of this unless it is specifically programmed to do it."

Locate all friendly HQs. Make an array the size of the tile map. Flag = 1 for all tiles they are in. Flag = 1 for all adjacent tiles. In the map representation used for HQ movement, exposure in any tile with flag 1 equals previous exposure +25. Do all HQ movement first. For any tile where an HQ waypoint ends, set a flag there equal to the HQ number. After all HQ movement is plotted, for every unit subordinated to an HQ, put a "virtual" small flag at that flagged tile for purposes of calculating the moves of those subordinates.

I asked "What is needed to get the AI able to do something approaching this, that is not already there?" The actual answer is the paragraph just above. Your guess was "something that is not practical", "Much easier said than done" and "he could make millions writing AI instead of peanuts making wargames". I'd have thought making wargames was fun, and that anyone might do it because they liked it.

"It is something like "when to sit and shoot" that is off."

"This is probably the one feature of the AI that Charles has spent the most amount of time coding, tweaking"

Probably means you guess? The point is, once again, that it is not obviously better for Charles to spend lots of time tweaking every weight trade off when he might instead tap unlimited tweaking time from others, if he used his much scarcer time to create tweak dials.

"The "obvious and simple" suggestions you make might be obvious, but they aren't simple to code"

Range to nearest known enemy unit (not sound contact) - get it from "target next" (it is displayed, so it is stored). By unit type: weapons teams 500m, LMG and rifle infantry 100m, SMG infantry 50m. If range to nearest is less than that -and- ammo remaining is 10 or more -and- %exposed is 30 or less (by the tile type it is in), put a (fake) small flag at the unit's own location when calculating movement. Notice - if they crawl away the range will open and you'll move. If they go heads down and sight is lost you'll move. If they are right on a large flag you will still go get it. Otherwise you will fire. Put in a dial for it. If it is too strong people will turn it down.
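
In code the rule is only a few lines. The range table is taken from the rule above; the names and the stand-in for the "target next" result are assumptions of mine.

# Sketch of the "sit and shoot" check: a known enemy inside the unit
# type's preferred range, healthy ammo, and decent cover on the current
# tile plant a fake small flag on the unit's own location, so the
# movement planner prefers staying put and firing.

SIT_AND_SHOOT_RANGE_M = {
    "weapons_team": 500,          # mortars, HMGs, snipers
    "lmg_rifle_infantry": 100,
    "smg_infantry": 50,
}

def should_sit_and_shoot(unit_type, nearest_known_enemy_m, ammo, tile_percent_exposed):
    """nearest_known_enemy_m is None when "target next" returns no full
    contact (sound contacts do not count)."""
    limit = SIT_AND_SHOOT_RANGE_M.get(unit_type)
    if limit is None or nearest_known_enemy_m is None:
        return False
    return (nearest_known_enemy_m <= limit
            and ammo >= 10
            and tile_percent_exposed <= 30)

# An HMG in 14% woods with a squad spotted at 220m sits and fires...
print(should_sit_and_shoot("weapons_team", 220, ammo=60, tile_percent_exposed=14))   # True
# ...and starts thinking about moving once the contact crawls out past 500m.
print(should_sit_and_shoot("weapons_team", 520, ammo=60, tile_percent_exposed=14))   # False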

"you aren't telling us anything we didn't know 4 years ago"

I rather doubt that. But regardless, it isn't 4 years ago now, and plenty of the urgent things there were to write have since been written. The AI can be put on a permanent improvement track with a few ideas and some dial-making programming work. It can't be improved in the slightest by any amount of excuses, apologetics, spin, or abuse of those pointing out its present flaws.

My third possibility is Charles had plenty of other important things to do so he wrote a path-finder core routine and much of the rest is ad hoc. He may see merit in the open dials AI editor idea, or in any subset of the above substantive AI proposals with or without said editor. Or not. If we get a better product as a result, great, that is a win all around.


"for the ai to really make good choices in this fashion it would have to explore all angles of LOS from every single metre in the setup zone"

Nope. It is enough if it makes that analysis from the center of each tile. Yes, there may be a tiny number of locations that are excellent away from the center of a tile but poor at that center. So what? A good AI will notice the many cases where the center of the tile and most of the other locations in that tile have about the same sight picture.

If you want to tweak it further, you can examine only the tiles that have good cover but read as poor LOS, again. This time with four positions 4m in from the corners. That will deal with things like certain types of buildings, or exact treelines. While requiring only 4 additional evaluations, not one per square meter, and those only for a small fraction of the tiles (since most will be dismissed for lack of cover - infantry - or as impassable - vehicle - or will have LOS that evaluates well from tile center).

Similarly, it does not need to draw LOS to all locations. Marginal LOS lines aren't practical ones anyway (targets are exposed long enough, or near misses hit something on the way, etc). And CM is designed to allow sight into most forms of cover, to half a tile width at each end, typically. Tile to tile estimates are good enough and vastly simpler. It is not being used for the program's actual LOS evaluation, only for planning. Additional "precision" wouldn't have any practical consequence 99 times out of 100 anyway.
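
A sketch of that two-pass evaluation, in Python. has_los() stands in for the engine's line-of-sight test; the tile size, the 4m inset, and the thresholds are assumptions of mine.

# Sketch: score candidate setup tiles by LOS from the tile center only,
# then give a second pass - four points 4m in from the corners - to the
# tiles that have good cover but scored poorly from the center.

TILE = 20.0   # metres per tile (assumed)

def center(tile):
    c, r = tile
    return (c * TILE + TILE / 2, r * TILE + TILE / 2)

def corner_points(tile, inset=4.0):
    c, r = tile
    x0, y0 = c * TILE, r * TILE
    return [(x0 + inset, y0 + inset), (x0 + TILE - inset, y0 + inset),
            (x0 + inset, y0 + TILE - inset), (x0 + TILE - inset, y0 + TILE - inset)]

def los_score(point, target_tiles, has_los):
    return sum(1 for t in target_tiles if has_los(point, center(t)))

def evaluate_setup_tiles(candidate_tiles, target_tiles, cover, has_los,
                         good_cover=0.5, poor_los=3):
    """cover maps tile -> 0..1 cover quality. Only tiles with good cover
    and a poor center score pay for the 4 extra LOS evaluations."""
    scores = {}
    for tile in candidate_tiles:
        score = los_score(center(tile), target_tiles, has_los)
        if cover.get(tile, 0.0) >= good_cover and score < poor_los:
            score = max(score, max(los_score(p, target_tiles, has_los)
                                   for p in corner_points(tile)))
        scores[tile] = score
    return scores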


Steve, I don't understand why you can't acknowledge that JasonC is doing nothing other than trying to help make the AI better.

Originally posted by Steve:

If the routines can't be successfully tweaked to your level of pleasure by the guy who wrote them, a guy with a lot of military and programming knowledge, how can you seriously expect a hack end user to come up with better values?

Ahh, it's great to see Steve and JasonC trading CIVILIZED posts. Thank you two for keeping antagonistic personalities out of it. It's obvious that the desire of everyone engaged here is to create a BETTER AI.

So, Steve, will CMx2 use an AI which has a memory of previous turns? It seems, based on your posts, that that would be the single greatest improvement to the AI's behavior.

Thank you,

Ken

