
W1ll1am

Posts posted by W1ll1am

  1. 1 hour ago, Centurian52 said:

    I don't think Battlefront has that kind of AI. The kind of AI that you feed training data to is called a neural network (because the data structures involved have a passing resemblance to a very basic understanding of how biological neurons work). A well designed neural network, fed a sufficient amount of training data, can do things like accurately identify hand-written letters, or guess which pictures are and are not pictures of bees. Very large neural networks, which are fed massive amounts of training data, are called deep-learning networks. Some well known deep-learning networks have gotten very good at specific tasks such as, in the case of chatGPT, convincingly mimicking human language.

    A neural network would probably not be the most efficient way to create a good wargaming AI. A neural net can't think multiple steps into the future. It can only make the decision with the lowest cost at this particular moment ("cost" in this context refers to the mathematical punishment/reward system that was used to train the AI). That works just fine for things like chatGPT, since all it takes to convincingly mimic human language is to respond appropriately to the most recent prompt. No memory of past prompts nor anticipation of future prompts is required. But a good tactical AI needs to do more than just anticipate the immediate consequences of a decision. It needs to be able to think several steps into the future. It needs to be able to plan. Neural networks (as they exist today) can't plan*.

    A better approach might be to do something like the General Staff: Black Powder AI. It analyzes the battlefield (using pre-programmed methods (if you watch the video the narrator mentions a spanning-tree algorithm used to calculate frontages), not training data), breaks the situation down into a series of logical statements, and then deduces which courses of action it should take. In the video I linked the General Staff AI was able to identify an exposed flank and assign a unit to conduct a flank attack. For something like Combat Mission the AI would, for example, need to have a concept of fire-superiority. It would need to recognize whether or not it had fire-superiority, and know not to attempt to advance without fire-superiority (that would stop a lot of AI lemming charges).

    https://www.youtube.com/watch?v=Y0K8DnS414o

    *Which is not to say that neural nets don't have some really exciting possible applications. They can be trained to be extremely good at recognizing objects, so they are ideal for tasks such as spotting and identifying targets. And they will do that way faster than any human ever could. But they will have an error rate, and they will be stumped by any object that wasn't in their training data, so we'll still want a human in the loop to approve/disapprove targeting decisions for at least the next few years until all the kinks are worked out. You could also send such systems into areas that are known to contain no friendly or neutral targets, allowing them to engage targets without waiting for human approval.

    Another option, besides deep learning and conventional terrain analysis (as used in General Staff), is reinforcement learning. It has a narrower application, but it is easier to fit to a game like Combat Mission.
    Here is an example, shown at the ConnectionsUS 2020 online conference: 'Course of Action Generation with ML and a COTS Wargame', using the Flashpoint Campaigns game.
    https://drive.google.com/file/d/1C9nfWwwlsTMkNN-au6j5UrTfDMr9v4KF/view?usp=drive_link
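
    To make the reinforcement-learning idea a bit more concrete, here is a minimal tabular Q-learning sketch. It is not taken from the linked presentation or from Flashpoint Campaigns; the toy 'advance now vs. hold and advance later' environment and all numbers in it are invented purely to show the shape of the training loop.

    import random
    from collections import defaultdict

    ACTIONS = ["advance", "hold"]

    def step(state, action):
        """Toy transition: advancing early is punished, advancing late is rewarded."""
        if action == "advance":
            reward = 1.0 if state >= 2 else -1.0
        else:
            reward = 0.0
        next_state = min(state + 1, 3)
        return next_state, reward, next_state == 3

    q = defaultdict(float)            # Q[(state, action)]
    alpha, gamma, eps = 0.1, 0.9, 0.2

    for episode in range(2000):
        state, done = 0, False
        while not done:
            if random.random() < eps:                       # explore
                action = random.choice(ACTIONS)
            else:                                           # exploit
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt

    print({k: round(v, 2) for k, v in q.items()})           # learned action values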

  2. I've enjoyed reading the (freely available) Official History of the Canadian Army in the Second World War, Vol III The Victory Campaign: The Operations in Northwest Europe, 1944-45. Get it at http://www.cmp-cpm.forces.gc.ca/dhh-dhp/his/oh-ho/detail-eng.asp?BfBookLang=1&BfId=29.

    I'd also recommend Brian A. Reid's 'No Holding Back', covering Operation Totalize. It goes into more detail on operations, equipment, the decision-making process, and what went right and wrong (in hindsight).

    Coding to recognise that a building is better cover than a wall, and allowing troops to move a certain small distance to get into better cover if under heavy fire, is one thing. A pretty simple thing. Is incoming too high for this cover? If it is, is there better cover within 10 meters? If so, get there now. If not, stay put. You don't even need to know where the shooting is coming from.

    I can think of a number of edge cases complicating such a 'simply move to better cover' decision (having done AI for tactical FPS games):

    - the 'better cover' may be occupied by hostiles

    - the 'better cover' may be in the line-of-fire of even more dangerous threats

    - traveling the path to this cover position may get the squad killed

    - the path to this cover position might cross friendly lines of fire (not an issue in CMSF though) or friendly lanes of movement

    - the 'better cover' may be overloaded already with friendlies or designated for use by other troops on the move

    All these cases can be checked for, but these checks typically involve a good amount of additional code, a large set of test cases, and the most expensive computation in the game: line-of-fire checks.
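
    Purely to illustrate, here is a rough sketch of what such a check could look like once these edge cases are folded in. This is not Combat Mission's actual logic; every helper predicate is a hypothetical placeholder standing in for the occupancy, reservation and (expensive) LOS/LOF queries a real engine would have to supply.

    from dataclasses import dataclass

    @dataclass
    class Position:
        x: float
        y: float
        cover_rating: float                    # higher is better

    def incoming_fire_too_heavy(squad) -> bool:
        return True                            # placeholder: compare suppression to a threshold

    def occupied_by_hostiles(pos) -> bool:
        return False                           # placeholder: occupancy query

    def exposed_to_worse_threats(pos, known_threats) -> bool:
        return False                           # placeholder: LOS/LOF checks against known threats (expensive)

    def path_is_survivable(squad, pos, known_threats) -> bool:
        return True                            # placeholder: LOF checks along the path (expensive)

    def reserved_or_full(pos, friendlies) -> bool:
        return False                           # placeholder: reservation / capacity check

    def pick_better_cover(squad, current, candidates, known_threats, friendlies):
        if not incoming_fire_too_heavy(squad):
            return None                        # incoming is bearable: stay put
        for pos in sorted(candidates, key=lambda p: -p.cover_rating):
            if pos.cover_rating <= current.cover_rating:
                break                          # no better cover left to consider
            if occupied_by_hostiles(pos):
                continue
            if exposed_to_worse_threats(pos, known_threats):
                continue
            if reserved_or_full(pos, friendlies):
                continue
            if not path_is_survivable(squad, pos, known_threats):
                continue
            return pos                         # first candidate that survives every check
        return None                            # no acceptable cover in reach: stay put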

    Don't misunderstand me: I'm looking forward to playing with and against AI capable of everything you describe. Sadly, game AI doesn't automatically become easy just because something looks obvious.

  4. Hi,

    With CMSF 1.10, CMSF fails to start a mission with the message "Your graphics hardware is out of memory". It didn't do this with 1.08. It continues to do so even after dropping the resolution to 1024x768, fastest 3D models, fastest textures, AA/MS off, and setting the graphics driver to single-GPU performance. The same problem occurs with 1.08 save games and the new Marines campaign.

    Setup:

    Core 2 Duo E6600, 3 GB RAM, NVIDIA 8800 GTS 640 MB driving dual 1600x1200 monitors, Windows XP 32-bit SP2, graphics drivers 177.83.

    I tried the suggested driver settings "Threaded optimizations off, Extension limit off, Triple buffering on" without improvement.

    I don't experience problems playing Armed Assault or Enemy Territory: Quake Wars (which might be an OpenGL game as well).

    Any advice? Anything I can do to hunt down the problem?

  5. Originally posted by Battlefront.com:

    As I've said, there is a LONG history of hardcore wargamers rejecting wargames because of their looks instead of their merits. Chris Crawford's "East Front", Grigsby's "Kampfgruppe", Kroger's "Steel Panthers", Zabalaoui's "Close Combat", and of course our "Combat Mission" are all examples that come to mind.

    Just curious (and off-topic): Which game by Norm Kroger did you intend to refer to? Steel Panthers is by Grigsby, as you're well aware.
  6. iLikeThisGame,

    Another way to tackle LOS tables is to store, for each spot and for several directions (or pizza slices) from that spot, the distance to which LOS extends in the best case.

    With this, you establish whether there is a likely LOS/LOF for a pair of units u1 and u2 as follows:

    - los(spot(u1), dir(u2, u1)) >= dist(u1, u2) && los(spot(u2), dir(u1, u2)) >= dist(u1, u2)

    If this expression returns false, you're done. Otherwise, you perform a LOS check (and take into account all dynamics such as vehicles, smoke, vegetation, posture, etc.). Total: 50x50x2 table lookups (worst-case).
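
    For illustration, a minimal sketch of that pre-check (my own stab at it, with 8 direction sectors and a hypothetical los_table indexed by spot and sector; the real table layout and spot numbering would be whatever the engine uses):

    import math

    NUM_SECTORS = 8                                      # 'pizza slices' per spot

    def sector(from_pos, to_pos):
        """Map the direction from one position to another onto a sector index."""
        angle = math.atan2(to_pos[1] - from_pos[1], to_pos[0] - from_pos[0])
        return int(((angle + math.pi) / (2 * math.pi)) * NUM_SECTORS) % NUM_SECTORS

    def maybe_los(los_table, spot1, pos1, spot2, pos2):
        """Cheap table-based pre-check; True means 'possibly', never 'certainly'."""
        dist = math.dist(pos1, pos2)
        return (los_table[spot1][sector(pos1, pos2)] >= dist and
                los_table[spot2][sector(pos2, pos1)] >= dist)

    # Only when maybe_los(...) returns True do you pay for the full LOS check,
    # taking vehicles, smoke, vegetation, posture, etc. into account.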

    The downside of such an approach (and the downside of LOS tables in general) is that it is expensive to repair these tables when obstacles (buildings, etc.) are removed.

    William

    ref: http://www.cgf-ai.com/slides_gdc2005.html, slide 38

  7. Steve,

    Originally posted by Battlefront.com:

    It would take a LOT more than 4 times the computing resources (do not forget about RAM... it's a huge limiting factor) because the hardware intensive features of a wargame (LOS/LOF and Pathfinding) have exponential needs to increase fidelity. I'm a math moron, but here's a way to think about it:

    Picture a grid of 20x20m squares. You want to travel 100m as the crow flies to another spot on the grid. The Pathing system, no matter how efficient it is, has to check all sorts of possible combinations of how to get from A to B. It could be, for example, that the ONLY way to get to where you want to go is to go backwards, then around something else. Your unit may have to go through 20 squares, to travel the equivalent distance of 5 (as the crow flies). Now reduce the size of the grid to 10x10m squares. Are there twice as many different ways to get from A to B, or now (because of the more refined grid) far more than that? The answer is far more than twice as many possibilities. A math guy will have to tell you what the actual number is, because I am a math moron :D

    I'm not a math guy but a game AI developer who picked up the game to check out your AI and improve my understanding of modern combined-arms tactics. Thus far, I've really enjoyed your approach to 'hands-off' tactical AI, since I hate micro-managing units. (And I love zooming in on the Javelin team and seeing them take down a bunker.)

    FYI, in theory the cost of pathfinding increases by a factor between 2 and 4.6 for a factor-of-2 increase in resolution, and not more than that. There is no exponential increase in costs.

    This is best understood as follows (assuming some kind of A* algorithm is used):

    - the terrain surface (number of action spots) explored by the A* algorithm for a certain path depends primarily on the cost function and heuristic used, and covers the same area regardless of the resolution. In the worst case, this is the full map, which contains 4 times as many spots if the resolution is doubled. In the best case, the path is a straight line and contains 2 times as many spots.

    - the 'open list' in A* (best interpreted as the outline of the terrain surface being explored by the algorithm) will be about twice as large, for a search that runs 2 to 4 times as long. Manipulation of the open list is O(log(size of open list)). When open-list manipulation is your bottleneck, the worst-case cost increase is (4*N * log(2*N)) / (N * log(N)), which is about 4.6 for N around 100 and approaches 4 for larger N (see the quick check below).
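
    A quick numeric check of that ratio, just evaluating the expression above for a few values of N:

    import math

    # (4*N * log(2*N)) / (N * log(N)) simplifies to 4 * log(2*N) / log(N)
    for n in (100, 1_000, 10_000):
        print(n, round(4 * math.log(2 * n) / math.log(n), 2))
    # prints: 100 4.6, 1000 4.4, 10000 4.3 -- trending toward 4 for very large N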

    For a change from 8m action spots to 1m action spots, the increase in cost can be computed along the same lines (somewhere between 8 and 90). However, in those situations (basically "FPS situations"), you're better off doing two pathfinding calls: a first one at (say) 8m resolution, and a second one at 1m resolution but constrained to the 8m spots (and neighboring spots) found in the first search. That's probably on the order of 2 to 5 times as expensive as a single 8m-resolution search, and with very good worst-case behavior.
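
    As a sketch of that two-pass idea (just an illustration on a plain square grid with unit step costs, not how CMSF does or should do it): the first A* runs on coarse cells, the second runs at full resolution but is only allowed inside the corridor of coarse cells found by the first.

    import heapq

    def astar(start, goal, neighbors):
        """Plain A* on integer (x, y) cells with a Manhattan-distance heuristic."""
        def h(cell):
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
        open_list = [(h(start), 0, start)]
        came_from, best_g = {}, {start: 0}
        while open_list:
            _, g, cell = heapq.heappop(open_list)
            if cell == goal:                              # reconstruct the path
                path = [cell]
                while cell in came_from:
                    cell = came_from[cell]
                    path.append(cell)
                return path[::-1]
            for nxt in neighbors(cell):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt], came_from[nxt] = ng, cell
                    heapq.heappush(open_list, (ng + h(nxt), ng, nxt))
        return None                                       # no path

    def two_pass_path(start, goal, blocked, cell_size=8):
        """Coarse pass first, then a fine pass constrained to the coarse corridor."""
        def coarse(cell):
            return (cell[0] // cell_size, cell[1] // cell_size)

        def coarse_neighbors(cc):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                n = (cc[0] + dx, cc[1] + dy)
                # treat a coarse cell as passable if its centre is free (crude, but it's a sketch)
                centre = (n[0] * cell_size + cell_size // 2, n[1] * cell_size + cell_size // 2)
                if centre not in blocked:
                    yield n

        corridor = astar(coarse(start), coarse(goal), coarse_neighbors)
        if corridor is None:
            return None
        allowed = {(cc[0] + dx, cc[1] + dy)               # corridor cells plus coarse neighbours
                   for cc in corridor for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

        def fine_neighbors(cell):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                n = (cell[0] + dx, cell[1] + dy)
                if n not in blocked and coarse(n) in allowed:
                    yield n

        return astar(start, goal, fine_neighbors)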

    Originally posted by Battlefront.com:

    Oh, but it gets worse :)

    Currently each Team can only occupy one square. It can stretch out when moving between two, but as far as pathfinding goes there was only one calculation done for all those individual guys within the Team. Meaning, 5 guys may occupy two or more Action Spots at a given time while moving, but they will eventually get to the same spot from the same spot. If the grid is reduced this means the "units" have to be reduced to match the grid size. Otherwise you'll have 5-9 guys all in one square of whatever the new grid size is (let's say 1x1m). Well, since you can't have 9 guys all standing in one square meter you MUST break the unit up into, I dunno, 9 individual units. So you reduced the terrain resolution by a factor of 8, but potentially increased the number of units by a factor of 9. And guess what? Each one will now have to calculate its own LOS and paths.

    Now, take the two concepts above and combine them. You get an explosion of LOS/LOF and Pathing calculations for each individual unit, then you increase the number of units by a huge number. Er... hopefully the results of that are obvious for you all to see. Major increases in computing resources are needed for even a modest increase in fidelity.

    For pathfinding, it should be sufficient to find a path for the unit's leader and have the other members move into position based on steering behaviors. This costs less, and it won't suffer from the sub-optimization where one of the team's soldiers breaks cohesion because he himself can find a marginally cheaper path on the other side of a building.
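
    A minimal sketch of that leader/follower split (the formation offsets, the tick size and the trivial seek step are invented for illustration; a real implementation would add obstacle avoidance, separation, cover snapping, etc.):

    import math

    def seek(position, target, max_step):
        """Move position toward target by at most max_step (a trivial steering step)."""
        dx, dy = target[0] - position[0], target[1] - position[1]
        dist = math.hypot(dx, dy)
        if dist <= max_step:
            return target
        return (position[0] + dx / dist * max_step,
                position[1] + dy / dist * max_step)

    def update_team(leader_pos, leader_heading, members, offsets, max_step=1.0):
        """Only the leader follows the computed path; followers steer toward formation slots."""
        cos_h, sin_h = math.cos(leader_heading), math.sin(leader_heading)
        new_positions = []
        for pos, (ox, oy) in zip(members, offsets):
            slot = (leader_pos[0] + ox * cos_h - oy * sin_h,   # offset rotated into leader's heading
                    leader_pos[1] + ox * sin_h + oy * cos_h)
            new_positions.append(seek(pos, slot, max_step))
        return new_positions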

    For grid reduction, LOS is probably a much bigger problem (in the games I worked on, LOS/LOF consumed much more CPU than pathfinding). In addition to the problems you mention, an increase in resolution is likely to produce a decrease in robustness when picking cover and attack positions. If the attack or cover position is picked based on the 1m action spot of the (known) opposing units, a slight displacement of those units is more likely to invalidate the cover or attack position than when using 8m action spots.

    Again, a hybrid approach with action spots at two levels of detail (8m action spots consisting of 64 1m spots) may give you the best of both worlds. Only when LOF/LOS checks between 8m action spots are inconclusive would you need to fall back on more detailed LOF/LOS checks. In the (FPS) games I worked on, we employed a mix of action spot sizes (large sizes where terrain geometry was trivial, small sizes where it was complex), pre-computed part of the LOS/LOF checks in a compressed table, and only performed ray casts when the table's answer was inconclusive or would not be representative (because dynamic obstacles were present or absent between the unit and its opponents). Because of these savings, we were able to do pathfinding with LOF checks per position ("tactical pathfinding").
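
    A minimal sketch of that two-level fallback, with a hypothetical pre-computed coarse table (one CLEAR/BLOCKED/MAYBE entry per pair of 8m spots) and a placeholder for the expensive fine-grained ray cast:

    CLEAR, BLOCKED, MAYBE = 0, 1, 2          # possible pre-computed coarse answers

    def ray_cast_1m(fine_a, fine_b):
        return True                           # placeholder: expensive detailed LOS/LOF check

    def has_los(coarse_table, coarse_a, coarse_b, fine_a, fine_b, dynamic_obstacle_between):
        coarse = coarse_table.get((coarse_a, coarse_b), MAYBE)
        if coarse == BLOCKED:
            return False                      # coarse answer is conclusive: no LOS
        if coarse == CLEAR and not dynamic_obstacle_between:
            return True                       # coarse answer is conclusive and representative
        return ray_cast_1m(fine_a, fine_b)    # otherwise fall back on the expensive check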

    For reference, only a few FPS games today come close to the vehicle mix and unit counts exhibited in CMSF (think of the Battlefield series and Bohemia Interactive's Armed Assault [recommended, it's the Operation Flashpoint successor]). Both of these games seem to use >1m tiles or exhibit sloppy cover/attack position picking.

    Kind regards, thanks for the Javelins,

    looking forward to more of your team's games (and patches).

    William
