Bruce70

Members
  • Posts: 394
  • Joined
  • Last visited: Never
Everything posted by Bruce70

  1. "Sorry about the airborne guys, unfortunately there is always something missed, but I have been taking note and listed all of these suggestions for a possible SC2."

     If SC2 is at a different scale I don't have a problem with that. We have heard nothing about SC2, and I don't even think Hubert has had time to think too much about it. I believe Husky was saying that airborne has been ruled out for SC1 and perhaps (though Hubert wasn't specific) for this scale of game. I don't expect historical accuracy in this style of game, but airborne units at this scale are IMO historically absurd. If Hubert does decide to add them, that's his business, but please make it an option - preferably with the default being NO paratroops. Incidentally, I think the default for FOW should be ON; it's not much of a game with it off.
  2. "It is totally beyond me how a neural network (as in a computer-science neural network today) should play CM. How will you turn terrain, expected enemy forces, objectives, desires (like security) into suitable input? You end up programming input composition programs for each of these aspects (and many more) that are each more complicated than a complete programmed opponent. How will you teach it about different weapons in the first place, without actually doing the cross-product of all imaginable forces and training each one?"

     The main quality that a NN has that a rule-based system doesn't is its ability to generalise. While some of you grogs probably know the exact situation in which every unit has been and should be used, I personally just have a quick look at the stats and then generalise based on the units I do know. So first of all, a NN need not be trained on every unit. Having said that, it would not be a big deal to do that.

     "My experience with NNs is very limited, but I closely observed how a friend of mine turned video input (of a machine part) into a quality-assurance optical analysis program based on a NN. That thing had a fraction of the things to consider, and it was no problem coming up with good/bad sample input (pictures of known good or known bad pieces), but still he spent something like two years on it. It was pretty cool, though, a real chin-dropper."

     The example you give of visual processing is a very difficult one, arguably as difficult as each aspect of CM. Perhaps you could ask your friend why he didn't just write rules for the system, since a rule-based system can handle more complicated problems (like CM) better than a NN. [sarcasm, sorry] It is also a significantly different type of problem, so it isn't a fair comparison.

     "Anyone has any concrete ideas?"

     OK, here are some concrete ideas. The task is broken up into smaller, hierarchically organised pieces. At the top level the AI would receive as input a very low-res version of the map (maybe a 5x5 grid) so that it can get a general feel for the map, like "the left flank is elevated" or "the right flank is heavily wooded". It would also have some information about its own forces, nothing specific, just something like 2 infantry companies, 4 MBTs, etc. It would also make an estimate of the opposition's forces. This level of AI would probably come in two flavours, defence and attack. The defence version would be responsible for setting the basics of a defensive position. This would be a matter of assigning units (company level, probably) to the 5x5 grid squares. The attack AI would give the opposition force estimate to the defence AI and ask it to work out a probable set-up. The attack AI would then use this information to choose a general attack strategy (e.g. attack with one company on the right, pin from the hill on the left with the other company). This would take the form of giving orders to each unit, like start at (1,1), move to (1,3) and suppress (2,4). This is something that you (as a human) assess at the beginning of a game, and then you stick to that decision until something happens to make you reassess. So this level of AI can be quite slow, since it is called upon rarely. This is the advantage of having a hierarchical system. You could say that you reassess every 5 turns, or maybe when you have spotted the enemy, or whatever.

     So how do you train a NN to do that? Well, basically you don't. You train the NN to evaluate each possibility based on the information available. You then get a valuation for each promising possibility and make a random choice weighted by those valuations. If you win the game then you train the network to increase the value of that strategy under those conditions; if you lose, you do the opposite. This is a simplified explanation of reinforcement learning.

     A couple of levels down would be the platoon-level AI, which would have received orders from the higher AI. It would receive a higher-res map, but probably of only a portion of the battlefield, so perhaps still just 5x5. It would have as input known or expected positions of enemies, the current positions of the units it is in charge of, and its current orders. Based on this it would perhaps choose a formation (2 up, 1 back?) and plot a course for itself, taking into consideration cover, the final objective, etc. Each section/squad would have a still higher-res but more restricted map and would loosely try to maintain its position in the formation while choosing a route that gives it good cover at its higher resolution.

     OK, hopefully you get the general idea, but this was just off the top of my head, so I could probably go back, pick it apart and come up with something better. The key points are the hierarchical structure, the use of a NN for its generalisation, and reinforcement learning for training the NN.

     Just as a possible addition, someone mentioned something about crushing the hopes and dreams of a human opponent. Well, I haven't thought too much about these emotions, but someone (Vico, I think it was) designed a very interesting NN that had an anxiety level. Basically this worked by increasing the anxiety of the network every time it got negative feedback and reducing it after positive feedback. The anxiety level affected the training rate of the network. So a network that continually got -ve feedback (e.g. losing the game or losing men) would try increasingly radical solutions to its nightmare. Eventually one of these might actually result in +ve feedback, and with the now very high training rate this will receive a lot of reinforcement while at the same time lowering the anxiety, so the AI will blindly follow this "solution" for some time even if it is being -vely reinforced. Sound like going berserk to anyone? Of course, we humans have a system like this (which is what Vico based it on) for a reason; when we don't go berserk it actually works very well. Anyway, human emotions are understood well enough that it isn't impossible to work these into a NN. Oh well, another day, another long post.
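     To make the valuation / weighted-random-choice / reinforcement / anxiety ideas above a bit more concrete, here is a minimal Python sketch. Everything in it (feature sizes, strategy names, the linear "network", the update rules) is an illustrative assumption of mine, not anything from CM, SC, or Vico's actual design.

     ```python
     # Toy sketch: an evaluator scores candidate strategies, one is chosen at
     # random in proportion to its score, and the game outcome reinforces or
     # discourages that choice. "Anxiety" doubles as a learning rate that rises
     # after losses and falls after wins. All numbers are illustrative only.
     import numpy as np

     rng = np.random.default_rng(0)

     N_FEATURES = 8      # e.g. coarse terrain summary + force estimates
     N_STRATEGIES = 4    # e.g. attack left, attack right, pin and flank, defend

     # One weight vector per strategy; the "network" here is just linear.
     weights = rng.normal(scale=0.1, size=(N_STRATEGIES, N_FEATURES))
     anxiety = 0.1       # also used as the learning rate

     def evaluate(situation):
         """Score each candidate strategy for the given situation vector."""
         return weights @ situation

     def choose_strategy(situation):
         """Weighted random choice based on the valuations (softmax)."""
         scores = evaluate(situation)
         probs = np.exp(scores - scores.max())
         probs /= probs.sum()
         return rng.choice(N_STRATEGIES, p=probs)

     def reinforce(situation, strategy, won):
         """Push the chosen strategy's value up after a win, down after a loss,
         and move the anxiety/learning rate in the opposite direction."""
         global anxiety
         direction = 1.0 if won else -1.0
         weights[strategy] += anxiety * direction * situation
         anxiety = max(0.01, min(1.0, anxiety * (0.9 if won else 1.3)))

     # Toy usage: a random "situation", a choice, then feedback from a fake game.
     situation = rng.normal(size=N_FEATURES)
     s = choose_strategy(situation)
     reinforce(situation, s, won=bool(rng.integers(0, 2)))
     print("chose strategy", s, "anxiety now", round(anxiety, 3))
     ```

     In a hierarchical system as described above, each level (top-level, platoon, squad) would have its own small evaluator like this, fed only the low-resolution view relevant to it.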
  3. I think it will probably come down to research. The degree to which your opponent invests in AT tech will no doubt influence your tank tactics.
  4. I have used it before and do like it (although I'm not that fond of the grass texture) so I guess I'll install that first, should still have it on disk somewhere, thanks.
  5. Does Madmatt's pack work with CMMOS?
  6. "I can't declare war on my neighbors." Go to the war map, click on a flag and then click "declare war".
  7. I only recently reinstalled CMBO, and although I'm not what you guys would call a "mod slut" I do like to apply some basic mods. Well, I hadn't checked out the list at CM HQ for a while and was astounded to see the number of mods and mod managers now available; I just didn't know where to start. So, which is the best mod manager? Is Madmatt's mod pack still the best place to start? And what other mods (particularly terrain, grass and sound) are really worth getting? You probably get this question a thousand times a month so I could just read the forum, but I feel lazy today.
  8. "Maybe it would be possible to create a very good AI also for CM, but all the knowledge about neural networks would be necessary, and if we take the enormous number of variables into account, the calculation of a turn would take at least 1 day even with the fastest CPUs today."

     I don't know what you are basing this on, but it would take an enormous NN to chew up a day's worth of CPU time. Training a single backprop NN for CM would take a VERY long time, but actually running it wouldn't be that bad. Also, as I said, backprop is not the only NN algorithm, and you wouldn't use a single NN any more than you would tell a battalion commander to give orders to every squad under his command. You break each part of the task up into very small pieces; no part of the AI should be dealing with an "enormous number of variables". [ July 25, 2002, 07:52 PM: Message edited by: Bruce70 ]
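     To back up the point that running a trained network is cheap even when training it is slow, here is a throwaway timing sketch; the layer sizes are arbitrary assumptions and the network stands in for whatever small evaluators an AI might actually use.

     ```python
     # Rough illustration: a forward pass through a modest fully-connected net
     # is just a few small matrix products, so inference is measured in
     # microseconds, not days. Layer sizes here are arbitrary assumptions.
     import time
     import numpy as np

     rng = np.random.default_rng(1)
     sizes = [64, 128, 64, 8]   # toy network, not tuned for anything real
     layers = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

     def forward(x):
         """One inference pass: matrix product + tanh per layer."""
         for w in layers:
             x = np.tanh(x @ w)
         return x

     x = rng.normal(size=sizes[0])
     start = time.perf_counter()
     for _ in range(1000):
         forward(x)
     elapsed = time.perf_counter() - start
     print(f"1000 forward passes in {elapsed * 1000:.1f} ms")
     ```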
  9. "Paratroopers are no doubt the most handsome, intelligent, erudite, suave, sophisticated, highly trained and well equipped (personally and professionally) troops available." Ahhh, so that's the real reason you're no longer a para, Husky - just didn't fit the ummm... bill, shall we say?
  10. > Will there also be a function to zoom the map? No. Damn, that is the one thing I was really hoping for. I'm not kidding either.
  11. It took me so long to write the last post that I missed a couple of new posts.

      "bayesian techniques"

      These are not easy to develop and even harder to debug. They are an improvement over just plain, straight rules, but I personally put them in the rule-based category. I may be asking other people to have an open mind, but that doesn't mean I have to.

      "that stupid 'artificial intelligence' term"

      "Machine learning" better describes my area of expertise, but English-speaking people are more familiar with AI.

      "Anyway, the question here is, when will we see a programmed opponent which can do a decent terrain and opponent force analysis and can constrain its actions with a set of 'don't try that at home' rules which still leave enough room to come out with something the machine can execute (instead of throwing up with 'no solution')."

      Yes, rules are important; military officers follow rules (SOPs) as well as use their own initiative. IMO the answer to your question is yes, and this would best be done by a hybrid of ML techniques and a rule base (a sketch of what I mean follows below).

      "and there is lots of opportunity to screw up"

      Yes there is, and we shouldn't be too hard on the AI when it does; that is human. The goal is not to develop an unbeatable AI, in my opinion.

      "But the human player learns and adapts within the game, and that is very hard to do in a program."

      Not really; the hard part is, as you stated earlier, analysing the terrain etc. And the hardest part of that is deciding what information is relevant. Often we can't even describe why we ourselves have made a certain decision, but giving irrelevant info to the AI is almost always counter-productive. Another rant, oh well.
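      As promised, a small sketch of the hybrid idea: hard "don't try that at home" rules prune the candidate actions, and a learned evaluator ranks whatever survives. The features, rules and weights below are all illustrative assumptions of mine, not anything taken from CM.

      ```python
      # Hybrid rule-base + learned evaluator, in miniature. The rules act as
      # SOP-style hard constraints; the weights stand in for a trained model.
      import numpy as np

      # Each candidate action described by a few hand-picked features:
      # [expected casualties, cover along route, distance to objective, ammo use]
      candidates = {
          "frontal assault":     np.array([0.9, 0.1, 0.2, 0.8]),
          "flank through woods": np.array([0.3, 0.8, 0.5, 0.4]),
          "sit and wait":        np.array([0.0, 0.9, 1.0, 0.0]),
      }

      def passes_rules(name, features):
          """Hard constraints: reject anything plainly reckless."""
          expected_casualties, cover, _, _ = features
          if expected_casualties > 0.7:   # never accept near-certain slaughter
              return False
          if cover < 0.2 and name != "sit and wait":
              return False
          return True

      # Stand-in for a trained evaluator; in practice these would be learned.
      learned_weights = np.array([-1.0, 0.5, -0.8, -0.2])

      def score(features):
          return float(learned_weights @ features)

      legal = {n: f for n, f in candidates.items() if passes_rules(n, f)}
      best = max(legal, key=lambda n: score(legal[n]))
      print("chosen action:", best)
      ```

      The rules never leave the machine with "no solution" as long as at least one sensible default (here, "sit and wait") always passes them; the learned part does the ranking inside that safe envelope.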
  12. OK, first let me clear a couple of things up. I am not dissatisfied with the CM AI; I think they have done a great job and I have not once complained about it. What I have a problem with is the industry complacency in general: "You can't make a decent AI, so let's not spend much time on it. A human opponent is always better anyway." I suspect a large part of the problem is that it is difficult to market AI; it's much easier to market pretty graphics. Now to answer some specific responses.

      "Can you name a game that really had the AI you talk of?"

      No, and that is exactly my point. However, I have been quite impressed by the AI in AA (I have only played the demo though).

      "I've been a gamer for a long time and have yet to see any AI give me the challenge that a competent human opponent can give me."

      Neither have I; that is my point again.

      "If getting this AI that can 'generalize and improvise' is as easy as you say, why haven't AI game programmers done so?"

      I didn't say it was easy. A game developer would have to invest quite a bit in R&D; however, after the initial effort was made, the resultant AI would be much easier to tweak or even alter completely for the next game. I'm not sure why they haven't done so. Perhaps it's the initial R&D investment, perhaps it's a shortage of good AI developers, but I think a large part of it is customer complacency, which is what I am complaining about.

      "If you are really in the know, perhaps you can let Charles know how such an AI can be easily programmed."

      I am sure Charles keeps up to date with current AI research. But as I said, it's not easy, and I'm sure there are valid reasons for the CM AI being the way it is.

      "When I beat the computer I just beat a machine, a bunch of mathematical formulas. It's not the same as crushing the hopes and dreams of a human player (game-wise speaking)."

      You would be surprised how human-like a good AI can be; one of the keys to making a good AI is giving it "hopes and dreams". As for the "bunch of mathematical formulas", do you really think your brain is anything more than a biological machine?

      "always predictable"

      AI need not be predictable. Rule-based AI often is, but rule-based AI is not the way to go.

      "I'm also more than convinced that a lot of 'bad' AI behaviour in the attack role could at least partially be overcome by more options for the Scenario Designer (Avenues of Attack, triggers and the like). This would be available for a predictable price, while investing in improved 'AI' is a very risky and expensive business with only minor results."

      Investing in AI may be risky from a game developer's POV, but the results need not be minor.

      "So BTS will be left with no option but to improve control for Scenario Designers."

      I do not believe that is the best option, although a lot of people who depend on scenario/level design for an income would probably disagree with me.

      "Pathfinding, like AI development, has its own development. Programmers can, depending on the project, spend a lot of time developing a new algorithm; some time tweaking an existing algorithm; or simply transplant one that has already been developed somewhere else. They can require a wealth of mathematical knowledge and insight and can be VERY time-consuming."

      Pathfinding is not currently treated as an AI issue, but I believe (and my PhD relies on it) that an AI approach to pathfinding would be significantly less CPU intensive than a number-crunching approach.

      "I'm not trying to cut anybody down, but hopefully to increase the appreciation that all of you already have for all the hard work these guys do to bring you good games to play and how gifted they are."

      I agree wholeheartedly.

      "You say you're an AI researcher. Does that mean you do code? If so, how far is AI coding these days? To what extent are neural networks used and is there anything else out there these days?"

      Yes, I code, but that is not the major part of any research. My research is in the machine learning field, and NNs are used extensively, though usually in combination with other techniques like reinforcement learning. NNs have a bad name because of the backpropagation algorithm; a large part of the industry is only familiar with this technique, which is very slow and CPU intensive. I would never suggest using a multi-layer backprop network for a computer game, especially a real-time one. Reinforcement learning is probably most applicable to computer gaming; do a search for "temporal difference learning". Tesauro wrote a very good backgammon AI using TD-learning. A word of caution though: writing AI code isn't as simple as choosing an algorithm. If it were that simple then every game would have perfect pathfinding.

      "Yes, but look at how long it took, and how much computing power was used, to create a rule-based AI that could beat a human in chess."

      Yes, this is the problem. AI programmers look at the best chess AI, they note that it is rule-based, then they note how long it took to develop. From this they conclude that rule-based AI is best and that they can't possibly hope to develop something that competent for a game as complex as CM (for example). Just in case you have forgotten the beginning of this post, I am not having a go at the AI in CM, and I applaud the CM AI team for the job they have done. I do not want them to become complacent, however, and I ask all game developers to keep an open mind concerning AI alternatives. [Edited just to say that I am sorry for hijacking this thread and I will try and find a way off this soap box ASAP] [ July 24, 2002, 10:40 PM: Message edited by: Bruce70 ]
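      For anyone searching for "temporal difference learning", here is the idea in miniature. This is the technique behind Tesauro's TD-Gammon, shown here on a trivial random-walk "game" of my own choosing (the walk, step count and learning rate are illustrative assumptions, not anything from backgammon or CM).

      ```python
      # Bare-bones TD(0) value learning: after each move, nudge the current
      # state's value toward the next state's value. Over many episodes the
      # values converge to the true win probabilities.
      import numpy as np

      rng = np.random.default_rng(3)

      N_STATES = 7                        # states 0..6; 0 is a loss, 6 is a win
      values = np.full(N_STATES, 0.5)
      values[0], values[-1] = 0.0, 1.0    # terminal values are fixed
      ALPHA = 0.1                         # learning rate

      for episode in range(5000):
          state = N_STATES // 2           # start in the middle
          while state not in (0, N_STATES - 1):
              next_state = state + rng.choice([-1, 1])
              # TD(0) update toward the next state's value
              values[state] += ALPHA * (values[next_state] - values[state])
              state = next_state

      print(np.round(values, 2))          # approaches [0, 1/6, 2/6, ..., 5/6, 1]
      ```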
  13. OK, I appreciate all the difficulties that the average developer has with AI, and I personally believe that the AI for CMBO is better than average. But I am sick of reading comments like this: "Anyway, if you want a real challenge, you need a human opponent. No matter how great the AI, you really can't beat an unpredictable, human opponent." (not having a go at you in particular, Commissar) As an AI researcher I have to say that this simply isn't true. To a certain extent it is true of typical rule-based AI, but it certainly isn't true of AI in general. Rule-based AI has been terrific for relatively simple games like chess, and so game developers have assumed, incorrectly, that rule-based must be the way to go in general. However, you simply can't cover all the possible states of a complex game like CM; you need an AI that can generalise and improvise. AIs do exist that can do this, and they do not require a lot of development time or even too much CPU time (for a turn-based game like CM). The sooner we stop saying "you can't expect too much from the AI", the sooner we will start seeing some decent AI. Sorry for the rant.
  14. "Las Vegas makes its living from people who think that chance plays fair, evenly, across a random series of attempts."

      Actually, Las Vegas makes its living from the fact that chance does play fair - given enough trials. But you are right, the few strange results that have been experienced are nothing to worry about unless they can be consistently reproduced. BTW, I have never experienced 2 or more advances in one turn; has anybody else (apart from those already mentioned)?
  15. I also agree, no paratroops at this scale. I wouldn't mind seeing SC2 at a slightly smaller scale though, just for variety, not because I think the scale of SC is too large. Actually I did originally think the scale was too large but it really grows on you. [ July 23, 2002, 09:51 PM: Message edited by: Bruce70 ]
  16. I played a few games last night. Lowest difficulty, capture the Low Countries in one turn, then the next turn disband enough units to give me 2000 MPPs to put into research. By spending all these points immediately I have 8 points invested for 14 turns. Note that there are only 14 turns remaining in the demo, not 17 as was stated earlier. Using a variety of investment strategies I was not able to produce any unusual results. I guess you only pay attention to the research when something unusual happens. The 14-turn limit changes the table I produced earlier to the following expectations with 4 points invested:

      advances / % chance
      0  /  4%
      1  / 15%
      2  / 25%
      3  / 25%
      4  / 17%
      >4 / 14%

      So there is about a 50% chance you will get 3 or more advances, which makes Bill's experiences seem a little more acceptable but still unlucky enough to make him an excellent PBEM partner. [ July 23, 2002, 07:53 PM: Message edited by: Bruce70 ]
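      For anyone who wants to reproduce the table, it is consistent with a simple binomial model in which each invested research point gives a 5% chance of an advance per turn. That per-point 5% is my assumption about the mechanic, not something confirmed anywhere above.

      ```python
      # Quick check of the table: 4 points over 14 turns under a binomial model
      # with a 5% advance chance per point per turn (an assumed mechanic).
      from math import comb

      def advance_distribution(points, turns, per_point=0.05):
          """P(k advances) for k = 0..turns."""
          p = points * per_point
          return [comb(turns, k) * p**k * (1 - p)**(turns - k)
                  for k in range(turns + 1)]

      dist = advance_distribution(points=4, turns=14)
      for k in range(5):
          print(f"{k} advances: {dist[k]:.0%}")
      print(f">4 advances: {sum(dist[5:]):.0%}")
      # prints about 4%, 15%, 25%, 25%, 17% and 13% -- close to the table above
      ```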
  17. The extra cost only reflects the extra unit strength gained (if applicable), e.g. at level 1 a sub has a max strength of 11 points compared to 10 for a level 0 sub, and 358 is 10% more than 325. You usually get combat bonuses in addition to the max strength bonus, so the research is worth it even without industrial tech. But I agree with you, industrial tech is nice.
  18. Just to give something concrete to go on, here is what you should expect for 4 research points over 17 turns:

      0 advances - 2% of the time
      1 advance - 10%
      2 - 19%
      3 - 24%
      4 - 21%
      >4 - 24%

      And just to add something else to the mix: I played on the lowest level as the Germans (and disbanded most of my corps) so that I could experiment with research, and on two occasions invested 5 points in industrial tech and 5 in tanks. On both occasions I received 3 tech levels of industrial tech but 0 of tanks. The chance of seeing 0 advances over 30-odd turns (2 games) with 5 points invested is 0.02%. So maybe research is being calculated differently for different techs? While anything is possible, it does seem that something weird is going on here.
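      A quick sanity check of that 0.02% figure, again under the assumed mechanic of a 5% advance chance per point per turn:

      ```python
      # With 5 points invested, the chance of no advance in a given turn is
      # 1 - 5*0.05 = 0.75 (assumed mechanic), so over roughly 30 turns:
      p_no_advance = 0.75 ** 30
      print(f"{p_no_advance:.4%}")   # about 0.018%, i.e. roughly 0.02%
      ```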
  19. OK, that seems reasonable. I guess it just seems slower when you are waiting for them. Excellent, I didn't know that. Thanks.
  20. I just downloaded the TacOps3 demo and plan on buying TacOps4 when it is released. I have only played a couple of games, but it seems that while infantry can be suppressed they never break or surrender and seem to fight to the last man. Have I misinterpreted what I have seen, and has infantry modelling been changed for TacOps4? Also, the movement rate of (unsuppressed) infantry seems a bit slow, but I didn't actually try to measure it so it might have just been an impression. For the sake of my future planning, what is the movement rate for infantry? Or is this in the unit info? [ July 21, 2002, 08:55 PM: Message edited by: Bruce70 ]
  21. If it could find its way in at some stage I would really appreciate it. Thank you for your prompt replies.
  22. Is it true that you can't play a multiplayer game against an AI opponent, even with an umpire?
  23. BTW, a 4-player LAN session could be fun with 3 players vs the AI but with the umpire adding a little spice to the AI moves. And I realise the primary motivation for this software isn't fun, but it doesn't hurt, does it?