
Erratic Framerate Issue


Hister


I think this discussion is showing how difficult it is to pin down a performance issue.  It's never as easy as "the game code is the problem" or "the CPU is the problem", etc.  It's always some combination of things, including stuff that isn't normally looked at on the motherboard.  Some thoughts on the conversation since I last checked in:

It is absolutely impossible to compare the performance of one game against the performance of CM when trying to troubleshoot framerate issues in CM.  The reason is that every single game hits different parts of the hardware differently.  On top of that, few use OpenGL and that means those games might not even use the same hardware AT ALL or to the same extent that CM does.

The lesson here is that expectations should be pegged to what other CM players experience, not what the guy down the street gets with a first person shooter game.  It's a comparison that doesn't help at all.

For sure CM doesn't run as fast or as smoothly as the AAA games out there.  Those games probably spend more money on optimizing for hardware than we spend in a year on everything we do.  Hell, probably 2 years! 

Those AAA games also have the luxury of cutting out something if they aren't happy with the performance.  For example, those AAA games would never put their equivalent of BP1 Op Linnet II in a release and would probably put in some restrictions so that nobody could make something like that.  We don't put on restrictions, so the "worst" of CM has no equal to the "worst" in another game because their worst is far, far less stressful on the system than ours is.

C3K pointed out something very important to keep in mind... Combat Mission pushes around way, way, way, WAY more data than the AAA games you see out there.  We have hundreds, if not thousands, of unit objects in good sized scenarios which all demand their own graphics, data, AI, etc. resources.  CM also uses detailed physics designed to simulate real life, not quick/dirty approaches that are there for effect and not simulation.  This not only affects load times, but it also means any sort of CPU or I/O bottlenecks are going to be pretty noticeable with CM vs. a AAA game.

So on and so forth.

For Combat Mission, the average scenario should run between 25 and 30fps most of the time for most people.  Big bastard scenarios are the ones that are likely to crash the FPS for one person and not do so for another person.  Bottlenecks have a sort of compounding effect that gets disproportionally worse as the strain goes up.

Just some more thoughts :)

Steve


7 hours ago, Hister said:

BP1 The Copse scenario:

 

The 1600x1050 monitor gave me 24 frames in that same camera position with all the same ingame settings. 

The 1280x768 monitor gave me 29 frames in that same camera position with all the same ingame settings. 

Nearly a 20% speed increase by going from 1600 to 1280.  How do either of these compare with your FPS on your widescreen?

7 hours ago, Hister said:

 

BP1 Op Linnet II a scenario:

The 1600x1050 monitor gave me 7 frames in that same camera position with all the same ingame settings. 

The 1280x768 monitor gave me 7 frames in that same camera position with all the same ingame settings. 

This indicates that the scenario is overwhelming some part, or parts, of your system.  You mentioned your CPU, but that's likely not it.  As you've pointed out, your CPU is above specs (and you now understand more about cores and why they don't matter).  So it's probably bus speed or a sub-component on your motherboard that was made by the "lowest bidder" instead of to the "highest standards".  This can be like driving a Ferrari with a plugged air filter.  It doesn't matter what the specs of that Ferrari say, or how much it costs; if the air filter is plugged I could probably accelerate better in my Honda.  Computers are like so many other things -> only as fast as their weakest component.  And even then that's highly task specific in terms of impact.

I remember back in the old days of hardware wars between the various PC makers.  Reviewers would take PhotoShop and run the same tasks on each machine to see how each did.  One task would be relatively the same for each, another task would be dramatically different.  If any of you remember the days when CPUs needed dedicated "math coprocessors" to do heavy floating point math, the systems without them generally were fine for most daily activities but were horrid for things like PhotoShop.

7 hours ago, Hister said:

I re-read all the forum topics regarding the performance issues people are experiencing and discovered a dissonance between what the developers deem acceptable game performance and what certain users see as such (not those over 70 who can't spot a difference between 15 and 60 frames on their screens, lol). Knowing that 20 to 30 frames is what you should expect from the game makes one more at ease. When I get older I might also not spot a difference in screen fluidity, and by then I guess I'll be fully satisfied with the way CM games perform. ;)

For sure CM's fluidity and overall performance is not great compared to the AAA games. As I explained in my previous post, there are very good and understandable reasons for that.  Setting expectations for CM within what CM can do is really the best way to go.  If most people say "I get 25fps for X scenario" then you're not likely to get 40fps and wanting it to be so won't make it happen.

Steve


33 minutes ago, Schrullenhaft said:

As far as I'm aware the 'minimum' and 'recommended' specifications haven't really been changed/updated since CMSF in 2007. The engine has been updated significantly since then with regards to graphics. This is especially true of the 'version 2' engine that actually reduced the 3D model complexity and replaced it with bump-mapping to provide the missing model detail - with the idea of improving video performance. However there may be more calculations for the CPU to perform with the newer engines (more complex LOS/LOF, etc.).

Ha, my snobbish answer would be: update the requirements to match the real demands of the game (if they really need updating, that is). ;)

35 minutes ago, Schrullenhaft said:

I would generally assume that an AMD FX 6300 should have been a decent performer for CM games. Admittedly newer Intel CPUs will have an advantage over the AMD FX series since they execute more 'instructions per clock' (IPC) resulting in better single-core performance (comparing at the same clock speed). To my knowledge the only code that is 'multi-core/thread' in CM is the scenario loading process. It was one of the few places in the code that could easily benefit from more cores/threads without having to significantly change the engine. Interestingly AMD GPUs suffer a huge hit in the scenario loading process for some reason. I can only assume that there is some weakness in the AMD OpenGL video drivers that is getting hit hard by the loading process (perhaps this is one of those processes that has to run on the CPU instead of the GPU for AMD as Steve mentioned earlier). Running a Nvidia GPU on the same system could potentially result in load times that are 2 - 3 times faster. Of course the issue in this thread isn't scenario loading times.

OK, so the AMD processors really aren't the best choice when it comes to CM games. Did you consider adding a warning note next to the CM products that users with AMD CPUs are more prone to encountering less smooth gameplay, or turning it into a positive and saying something like "the game is optimized for Intel CPUs"? It might hurt your sales, I know, but fair is fair.

 

41 minutes ago, Schrullenhaft said:

As others have pointed out, you're running a huge scenario for comparison; something that tends to slow down even the fastest machines. This may be something of a CPU limitation/bottleneck, but I have no idea what is possibly being 'calculated' during screen movement that could result in lower FPS. In general CM video performance is slowed down by the number of units on screen, the number of buildings, the number of trees and the complexity of the map (the number and magnitude of elevations). With a larger horizontal resolution ('2560') than the average '1920' your 'window' is much larger for what needs to be rendered on screen. The CM graphics engine tends to cut down on textures (their number and detail) after a certain range in order to maintain 'performance'. I don't know what the algorithm is for these calculations, but they may be a bit dated in their assumptions of GPU performance and memory capacity. However it is quite probable that those 'assumptions' may not be easily changed with the current engine. There are also fewer LODs for each model than other major games may have ('art' is expensive), which could result in somewhat smoother framerates if there were more LOD level models/textures.

Thank you for taking the time to explain all of this. I already knew what the gist of the game is, so I'm not trying to compare it to other AAA titles - it's a special gem with all the quirks that come with it. Since you mentioned my ultrawide monitor: I also provided a test with two smaller monitors in one of my previous posts and there was no difference in that huge scenario - frames remained at 7 or 8 no matter the level of detail I chose on either monitor, so exactly the same as on my ultrawide one.

 

47 minutes ago, Schrullenhaft said:

I have an FX 8320 which also runs at 3.5GHz (it just has 8 cores instead of 6 like the FX 6300 has) and a GTX 660Ti that I could test (on a cheap Asrock motherboard with an AMD 880 chipset). I'll have to install CMBN and see what I get, but I suspect the performance will be pretty much similar to what you are seeing already.

Well, if you do take the time to fire up the game (the scenario in question is from Battle Pack 1) and check the framerates for that particular scenario, that would be very interesting to see. Thank you!


5 minutes ago, Hister said:

Ha, my snobbish answer would be: update the requirements to match the real demands of the game (if they really need updating, that is). ;)

OK, so the AMD processors really aren't the best choice when it comes to CM games. Did you consider adding a warning note next to the CM products that users with AMD CPUs are more prone to encountering less smooth gameplay, or turning it into a positive and saying something like "the game is optimized for Intel CPUs"? It might hurt your sales, I know, but fair is fair.

 

Thank you for taking the time to explain all of this. I already knew what the gist of the game is, so I'm not trying to compare it to other AAA titles - it's a special gem with all the quirks that come with it. Since you mentioned my ultrawide monitor: I also provided a test with two smaller monitors in one of my previous posts and there was no difference in that huge scenario - frames remained at 7 or 8 no matter the level of detail I chose on either monitor, so exactly the same as on my ultrawide one.

 

Well, if you do take the time to fire up the game (the scenario in question is from Battle Pack 1) and check the framerates for that particular scenario, that would be very interesting to see. Thank you!

You keep making a big mistake and making it over and over again no matter how many different people try to explain things to you... this is not a simple A + B = C situation.  The longer you hold onto that sort of bad logic the harder it will be to help you.  Or want to help you.

An example is with the AMD disclaimer you think we should put on our website.  Schrully didn't say ANYTHING OF THE SORT.  What he said is that in a lab the Intel chips have a theoretical advantage over an AMD chip.  That difference likely has absolutely nothing to do with the slowdowns you are seeing.  So no need for a disclaimer for something that isn't relevant and it is distracting for you to suggest that it is.

Screen resolution is probably the single biggest cause of problems for you.  Your own tests pretty much prove it.  There's still something else on your system that is causing poor performance for big scenarios, but your CPU is unlikely to be the cause of it.  As I and others have said, it's probably something, or some combination of things, which is bottlenecking what CM can push onto your screen.  For some scenarios the bottleneck isn't a problem, for bigger ones it is.

And finally, you have to accept that CM will not regularly run as fast as a AAA 3D game that was made within the last few years.  The two are designed for different purposes and so it follows that their behavior is different too.  If you want to get to someplace for dinner quickly with a bit of a thrill, take a sports car.  If you want to move your apartment contents, take a delivery van.  Expecting a delivery van to go 200km/h is unreasonable, just as it is unreasonable to expect a 2 seat sports car to be a good choice for moving your apartment :D

Steve


1 hour ago, Battlefront.com said:

I think this discussion is showing how difficult it is to pin down a performance issue.  It's never as easy as "the game code is the problem" or "the CPU is the problem", etc.  It's always some combination of things, including stuff that isn't normally looked at on the motherboard.

I didn't think for a second it would be easy. :) 

1 hour ago, Battlefront.com said:

It is absolutely impossible to compare the performance of one game against the performance of CM when trying to troubleshoot framerate issues in CM.  The reason is that every single game hits different parts of the hardware differently.  On top of that, few use OpenGL and that means those games might not even use the same hardware AT ALL or to the same extent that CM does.

Yes, that's why I said that, while I have no particular problems with the 90+ other games I currently have installed on my PC, probably none of them run on an OpenGL chassis.

 

1 hour ago, Battlefront.com said:

The lesson here is that expectations should be pegged to what other CM players experience, not what the guy down the street gets with a first person shooter game.  It's a comparison that doesn't help at all.

Yes, I am and I was fully aware of that.

 

1 hour ago, Battlefront.com said:

For sure CM doesn't run as fast or as smoothly as the AAA games out there.  Those games probably spend more money on optimizing for hardware than we spend in a year on everything we do.  Hell, probably 2 years! 

I know that your CM game family is a little miracle in itself. :)

 

1 hour ago, Battlefront.com said:

Those AAA games also have the luxury of cutting out something if they aren't happy with the performance.  For example, those AAA games would never put their equivalent of BP1 Op Linnet II in a release and would probably put in some restrictions so that nobody could make something like that.  We don't put on restrictions, so the "worst" of CM has no equal to the "worst" in another game because their worst is far, far less stressful on the system than ours is.

Here I would raise the question of why include such a stock mission, if it runs OK only on rigs that have all the elements well aligned with the "CM stars" (and again we don't know exactly what works well, besides that Intel CPU users are probably going to see better performance than AMD ones due to Intel's higher IPC, since the game in 3D mode uses one core and thus faster IPC helps in most scenarios), without an accompanying warning that due to the hefty size and complexity of the scenario not all players will be able to get a playable experience (experience may vary - a simple note warning players what they are getting into). :D

I also want to say that I don't own many AAA titles; I have all sorts of indie and niche games.  I'm not a spoiled brat, but I admit I am a sucker for visuals (though only when the gameplay is decent or better; otherwise visuals don't mean much to me if the game sucks).

1 hour ago, Battlefront.com said:

C3K pointed out something very important to keep in mind... Combat Mission pushes around way, way, way, WAY more data than the AAA games you see out there.  We have hundreds, if not thousands, of unit objects in good sized scenarios which all demand their own graphics, data, AI, etc. resources.  CM also uses detailed physics designed to simulate real life, not quick/dirty approaches that are there for effect and not simulation.  This not only affects load times, but it also means any sort of CPU or I/O bottlenecks are going to be pretty noticeable with CM vs. a AAA game.

Yes I know, your game is very special and that's why I haven't forgotten about it and moved on to something else! 

 

1 hour ago, Battlefront.com said:

For Combat Mission, the average scenario should run between 25 and 30fps most of the time for most people.  Big bastard scenarios are the ones that are likely to crash the FPS for one person and not do so for another person.  Bottlenecks have a sort of compounding effect that gets disproportionally worse as the strain goes up.

You should really make this clear for current and new users then. "Our games run at 25 to 30 frames on average. Retro sells." ;) Joking aside, I hear that sporting a Freesync or G-Sync monitor helps alleviate such a low framerate. Knowing this loud and clear is better than being miserable, asking yourself why your rig can't churn out more frames and wondering whether something is conflicting with the game, the hardware, the drivers, or all of these things. Because a certain user's rig surpasses the recommended specs, he or she might think they could run the game all maxed out at 60 frames on a 1920x1080 monitor, as is the standard with most games using a 3D environment... Maybe write a note about what limitations users might encounter?

 

1 hour ago, Battlefront.com said:

Nearly a 20% speed increase by going from 1600 to 1280.  How do either of these compare with your FPS on your widescreen?

As I already stated, I had 22 frames on my ultrawide with the exact same settings. Two frames less...

 

1 hour ago, Battlefront.com said:

This indicates that the scenario is overwhelming some part, or parts, of your system.  You mentioned your CPU, but that's likely not it.  As you've pointed out, your CPU is above specs (and you now understand more about cores and why they don't matter).  So it's probably bus speed or a sub-component on your motherboard that was made by the "lowest bidder" instead of to the "highest standards".  This can be like driving a Ferrari with a plugged air filter.  It doesn't matter what the specs of that Ferrari say, or how much it costs; if the air filter is plugged I could probably accelerate better in my Honda.  Computers are like so many other things -> only as fast as their weakest component.  And even then that's highly task specific in terms of impact.

We are getting somewhere then. I don't remember my Asus motherboard being the lowest/cheapest possible pick. I remember going through many reviews and picking this one as having a good quality-to-price ratio. But it might have some setting borked, or in the end it might not and it's just crap in its vanilla state. What would you suggest I do to check that everything is set right in the first place? What steps would one need to take to find out if there is anything off with the motherboard? I know it's a very broad question since every model differs, but there must be some basic steps that IT guys do, right?

 

1 hour ago, Battlefront.com said:

For sure CM's fluidity and overall performance is not great compared to the AAA games. As I explained in my previous post, there are very good and understandable reasons for that.  Setting expectations for CM within what CM can do is really the best way to go.  If most people say "I get 25fps for X scenario" then you're not likely to get 40fps and wanting it to be so won't make it happen.

Yeah, that is the only right way to go - accept it for what it is. A hardware/driver-nitpicky gem that provides hundreds upon hundreds of hours of entertainment at a low framerate. ;)

 

59 minutes ago, Battlefront.com said:

You keep making a big mistake and making it over and over again no matter how many different people try to explain things to you... this is not a simple A + B = C situation.  The longer you hold onto that sort of bad logic the harder it will be to help you.  Or want to help you.

An example is with the AMD disclaimer you think we should put on our website.  Schrully didn't say ANYTHING OF THE SORT.  What he said is that in a lab the Intel chips have a theoretical advantage over an AMD chip.  That difference likely has absolutely nothing to do with the slowdowns you are seeing.  So no need for a disclaimer for something that isn't relevant and it is distracting for you to suggest that it is.

Steve, you are right - I said that in light of the issues I'm experiencing, but if something is up with my motherboard then that point is moot. I answered Schrullenhaft before I saw your posts, since his was the last post on the page and I started replying directly to him without seeing what you wrote. That said, Schrull said this: "Admittedly newer Intel CPUs will have an advantage over the AMD FX series since they execute more 'instructions per clock' (IPC) resulting in better single-core performance (comparing at the same clock speed)." Since CM, when not loading, uses only one core, that means the Intel CPUs are better suited to the game. No need to get all "attacky" on me, for I am not being a dic* towards you, him or the game in general.

 

59 minutes ago, Battlefront.com said:

Screen resolution is probably the single biggest cause of problems for you.  Your own tests pretty much prove it.  There's still something else on your system that is causing poor performance for big scenarios, but your CPU is unlikely to be the cause of it.  As I and others have said, it's probably something, or some combination of things, which is bottlenecking what CM can push onto your screen.  For some scenarios the bottleneck isn't a problem, for bigger ones it is.

What is the native monitor resolution the game can be played at with all the settings maxed out while still getting a minimum of 25 frames on the suggested system specs?  Do the suggested system specs take into consideration the standard monitors used for gaming now, which are 1920x1080? Do the suggested system specs take into account stock scenarios like BP1 Op Linnet? They should, by my understanding, but yours might be different.

In my case, as you know, a considerably smaller monitor native resolution doesn't make a difference for the huge Linnet scenario, so we are now pointing the finger at the biggest hardware suspect - currently the motherboard. I fully acknowledge that other, smaller scenarios hit the 25-30 frames on the old-school tiny square (1280x768) monitor, which I am not going to plug into my system just for when I play CM games, because my desktop icons get all messed up for one, and also because I have no space on my desk for two monitors, etc. :)

If Schrullenhaft experiences the same 7 to 8 frames on his machine in that scenario, then my motherboard isn't at fault, and that will spare me the time of fiddling with it in search of a fubared setting or a physical malfunction of sorts.

 

59 minutes ago, Battlefront.com said:

And finally, you have to accept that CM will not regularly run as fast as a AAA 3D game that was made within the last few years.  The two are designed for different purposes and so it follows that their behavior is different too.  If you want to get to someplace for dinner quickly with a bit of a thrill, take a sports car.  If you want to move your apartment contents, take a delivery van.  Expecting a delivery van to go 200km/h is unreasonable, just as it is unreasonable to expect a 2 seat sports car to be a good choice for moving your apartment :D

Yes and yes. Accepted and no need to be repeated again with me. 

 

Edited by Hister
Spelling errors, better clarifications, etc.

3 hours ago, Schrullenhaft said:

As far as I'm aware the 'minimum' and 'recommended' specifications haven't really been changed/updated since CMSF in 2007. The engine has been updated significantly since then with regards to graphics. This is especially true of the 'version 2' engine that actually reduced the 3D model complexity and replaced it with bump-mapping to provide the missing model detail - with the idea of improving video performance. However there may be more calculations for the CPU to perform with the newer engines (more complex LOS/LOF, etc.).

I would generally assume that an AMD FX 6300 should have been a decent performer for CM games. Admittedly newer Intel CPUs will have an advantage over the AMD FX series since they execute more 'instructions per clock' (IPC) resulting in better single-core performance (comparing at the same clock speed). To my knowledge the only code that is 'multi-core/thread' in CM is the scenario loading process. It was one of the few places in the code that could easily benefit from more cores/threads without having to significantly change the engine. Interestingly AMD GPUs suffer a huge hit in the scenario loading process for some reason. I can only assume that there is some weakness in the AMD OpenGL video drivers that is getting hit hard by the loading process (perhaps this is one of those processes that has to run on the CPU instead of the GPU for AMD as Steve mentioned earlier). Running a Nvidia GPU on the same system could potentially result in load times that are 2 - 3 times faster. Of course the issue in this thread isn't scenario loading times.

As others have pointed out, you're running a huge scenario for comparison; something that tends to slow down even the fastest machines. This may be something of a CPU limitation/bottleneck, but I have no idea what is possibly being 'calculated' during screen movement that could result in lower FPS. In general CM video performance is slowed down by the number of units on screen, the number of buildings, the number of trees and the complexity of the map (the number and magnitude of elevations). With a larger horizontal resolution ('2560') than the average '1920' your 'window' is much larger for what needs to be rendered on screen. The CM graphics engine tends to cut down on textures (their number and detail) after a certain range in order to maintain 'performance'. I don't know what the algorithm is for these calculations, but they may be a bit dated in their assumptions of GPU performance and memory capacity. However it is quite probable that those 'assumptions' may not be easily changed with the current engine. There are also fewer LODs for each model than other major games may have ('art' is expensive), which could result in somewhat smoother framerates if there were more LOD level models/textures.

I have an FX 8320 which also runs at 3.5GHz (it just has 8 cores instead of 6 like the FX 6300 has) and a GTX 660Ti that I could test (on a cheap Asrock motherboard with an AMD 880 chipset). I'll have to install CMBN and see what I get, but I suspect the performance will be pretty much similar to what you are seeing already.

I'm quoting Schrullenhaft because my post will seem much more accurate and helpful if I include anything he's written.  ;)

Consider him the Oracle.

 


An update:

So I tried fiddling with the CPU and motherboard just to see if something might work in regards to better performance. 

- Reinstalled the latest BIOS.

- Reinstalled the latest motherboard chipset drivers.

- I unparked all cores (I know the game uses only one core, but did this in case the core the game utilizes wasn't working well under the previous settings - I don't run Windows 10, so I had to do this manually).

- I allotted the suggested 12 GB of virtual memory on my SSD C: partition where Windows is installed (some reports suggested this step helps with a higher framerate).

- I swapped between different power options in the BIOS (went from 'optimal', where power is dynamic, to 'normal', where it's always the same and not adapting).

- Turned off turbo boost (some reports suggested it was sometimes guilty of causing stutter in games).

- Checked the RAM frequency used and upped it from 1300 to 1600 (although I am not sure if this is better, because if I remember correctly I read somewhere that AMD's CPUs need a lower RAM frequency to function better).

 

So far no change when it comes to Combat Mission - framerates are the same as before.  
 

Edited by Hister
Added the RAM frequency change I did.

On 10/24/2017 at 4:48 PM, Battlefront.com said:

 

On 10/24/2017 at 4:10 PM, c3k said:

I've got mine driving a 1920x1080 screen. Yours is pushing a widescreen 2560x1080. So, simple maths tell us that your card has a 33% greater load on it...for every frame...than mine does.

I was waiting for Hister to start up a new thread before addressing this, but since you brought it up and I moved the old content to a new thread, time to dig into this one :D

I'd say this is the #1 likely cause of problems for Hister.  It's simple math... the more polygons on the screen, the more strain that is put on the hardware.  Having a huge scenario seen from a high altitude with a massive screen size setting and good quality settings is simply not going to work out very well.  So the first thing I'd advise Hister to do is reduce the screen resolution to something more reasonable and see how that affects overall performance.
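
To put rough numbers on that "simple math", here is a quick sketch. It only compares raw pixel counts, which is a simplification (it ignores the CPU-side cost discussed later in this thread); the resolutions are the ones mentioned in the discussion:

```python
# Rough illustration only: assumes GPU load scales with raw pixel count per frame.
resolutions = {
    "1920x1080 (c3k)":       1920 * 1080,
    "2560x1080 (ultrawide)": 2560 * 1080,
    "1600x1050":             1600 * 1050,
    "1280x768":              1280 * 768,
}
baseline = resolutions["1920x1080 (c3k)"]
for name, pixels in resolutions.items():
    print(f"{name}: {pixels:,} pixels ({pixels / baseline - 1:+.1%} vs 1920x1080)")
# 2560x1080 works out to about 33% more pixels per frame than 1920x1080.
```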

{...}

The solution is to figure out why the hardware is getting overwhelmed.  As stated, the most likely suspect is the massive screen resolution.  Turn that down and you will, at the very least, increase your top speed and decrease how slow things get.  You might still porpoise, but it will be less extreme and therefore less noticeable.  If it's still unacceptable to you, then figure out what else can be changed to reduce the strain on the card.  Smaller screen resolution, lower the quality settings in some way, turn off fancy card features, etc.  There's no one simple answer because there are far too many individual, PC-specific variables at play.

 

Previously I forgot to test the game, as per Steve's suggestion, with a lower-than-native in-game screen resolution on my ultrawide 1080 monitor. I chose the lowest possible in-game resolution of 1024x768, exited the game and restarted it, with the in-game options set to the usual Balanced/Balanced as in my other tests. The framerate, as I expected, was low at this small resolution too -> only 24 to 25, so by lowering the in-game resolution to 1024x768 I gained only 2 to 3 frames, which indicates my screen resolution is not what is bottlenecking the game and that Steve's first most-likely suspect is now likely excluded.

Screenshot attached - the game can't really be played at this resolution anyway because it gets stretched too much, but this can't be observed in the screenshot because it gets unstretched when it is taken. Playing at this resolution would be possible if the game did not stretch and instead stayed in its native-size window with black bars applied on either side to fill the native monitor size, the way the starting menu is handled. In my case, though, it wouldn't help at all, since again screen resolution is not what is making my game throttle.

BP1 Copse scenario on 1024x768 resolution balanced balanced.jpg

 

Edit: Doing the same test with an in-game 1152x864 resolution gave me the same framerate (24/25). With an in-game setting of 1280x960 I was, interestingly enough, given 27 to 28 frames, which is counterintuitive given the bigger resolution being used.

Edited by Hister

I ran the same scenarios as Hister using my system with the following specs:

AMD FX 8320 3.5GHz 8-core (4 modules totaling 8 integer, 4 floating point, up to 4.0GHz turbo mode)
8GB of DDR3 1600 (CAS 9)
MSI GeForce GTX 660 Ti  - 388.00 driver
Asrock 880GM-LE FX motherboard (AMD 880G chipset)
Samsung 840 EVO 250GB SSD
Windows 7 Home 64-bit SP1 (latest patches)

Running at a resolution of 1920 x 1200.

Using the default settings in CMBN 4.0 (Balanced/Balanced, Vsync OFF and ON, AA OFF) and in the Nvidia Control Panel I typically got about 6 FPS (measured with the latest version of FRAPS) in "Op. Linnet II a USabn UKgrnd" on the German entry side of the map (all the way to the edge) and scrolling right or left looking at the Americans in Richelle. In "The Copse" scenario it measured around 28 FPS behind the allied armored units at the start (scrolled around the map a bit).

Messing around with Vsync (both on and off), anti-aliasing, anisotropic filtering, Process Lasso (affinity, etc.), power saving settings in Windows control panel, etc. didn't seem to have a significant performance effect on the low FPS of 'Op. Linnet II...'. I overclocked the FX 8320 to 4.0GHz (simply using the multipliers in the BIOS and turning off several power saving features there too, such as APM, AMD Turbo Core Technology, CPU Thermal Throttle, etc.). With 'Op. Linnet II...' the FPS increased to only 7 FPS. Turning off the icons (Alt-I) did bump up the FPS by 1 additional frame (the option reduced the number of objects to be drawn in this view) to 8 FPS.

There are some Hotfixes from Microsoft that supposedly address some issues with the Bulldozer/Piledriver architecture and Windows 7 involving CPU scheduling and power policies (KB2645594 and KB246060) that do NOT come through Windows Update (you have to request them from Microsoft). I have NOT applied these patches to see if they would make a difference since they CANNOT have their changes removed (supposedly), even if you uninstall them. A number of users on various forums have stated that the changes made little difference to their particular game's performance.

I decided to compare this to an Intel system that was somewhat similar:

Intel Core i5 4690K 3.5GHz 4-core  (possibly running at 3.7 to 3.9GHz in turbo mode)
16GB of DDR3-2133 (CAS 9)
eVGA GeForce GTX 670 - 388.00 driver
Asrock Z97 Killer motherboard (Z97 chipset)
Crucial MX100 512GB SSD
Windows 7 Home 64-bit SP1 (latest patches)

Running at a resolution of 1920 x 1200.

Again using the same settings used on the FX system with CMBN and the Nvidia Control Panel I got 10 FPS in 'Op. Linnet II...' while scrolling on the far side looking at the American forces in the town. In 'The Copse' scenario the FPS went to 40 FPS behind the allied vehicles at their start positions. The biggest difference between the GTX 660 Ti and the GeForce GTX 670 is the greater memory bandwidth of the 670 since it has a 256-bit bus compared to the 660 Ti's 192-bit memory bus. So POSSIBLY the greater GPU memory bandwidth in conjunction with the Intel i5's higher IPC (Instructions Per Cycle) efficiency and the increased system memory bandwidth (faster system RAM) resulted in the higher frame rate on the Intel system, but only by so much.

I ran a trace of the OpenGL calls used by CMBN while running 'Op. Linnet II a USabn UKgrnd' on the FX system. This recorded all of the OpenGL calls being used in each frame. The trace SEVERELY slowed down the system during the capture (a lot of data to be written to the trace file). Examining the trace file suggests that CMBN is SEVERELY CPU BOUND in certain graphical views. This is especially true with views of a large amount of units and terrain like that in 'Op. Linnet II...'.

What appears to be happening is that some views in large scenarios of CM involve A LOT of CPU time in issuing instructions to the video card/'frame buffer'. The CPU is spending so much time handling part of the graphics workload (which IS normal) and sending instructions to the video card on what to draw that the video card does not have a full (new) frame of data to post to the frame buffer at a rate of 60 or 30 FPS (Vsync). At 30 FPS each frame would have to be generated between the CPU and the video card within 33.3ms. Instead this is taking around 100ms on the Intel system and about 142ms on the FX system (resulting in the 10 and 7 FPS respectively). Some frames in the trace file had hundreds of thousands of instructions, some reaching near 700,000 instructions (each one is not necessarily communicated between the CPU and video card, only a fraction of them are), whereas sections where the FPS was higher might only have less than 3000 instructions being executed. The low frame rate is a direct consequence of how busy the CPU is and this can be seen with both Intel and AMD CPUs.
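
To make the frame-time arithmetic concrete, here is a small sketch of the conversion being described; the millisecond figures are the ones quoted from the trace above, the rest is plain arithmetic:

```python
def fps_from_frame_time(ms_per_frame: float) -> float:
    """Convert the combined CPU+GPU time spent per frame (ms) into frames per second."""
    return 1000.0 / ms_per_frame

# Frame-time figures quoted in the trace discussion above.
for label, ms in [("60 FPS budget", 1000 / 60),
                  ("30 FPS budget", 1000 / 30),
                  ("Intel i5 system", 100.0),
                  ("FX 8320 system", 142.0)]:
    print(f"{label:>16}: {ms:6.1f} ms/frame -> {fps_from_frame_time(ms):4.1f} FPS")
# 100 ms/frame comes out to 10 FPS and 142 ms/frame to about 7 FPS,
# matching the measured results on the two systems.
```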

So the accusation comes up: is the CM graphics engine un-optimized? To a certain extent, it is. There are limitations on what can be done in the environment and with the OpenGL 2.x calls that are available. CM could be optimized a bit further than it is currently, but this involves a HUGE amount of time experimenting and testing. Working against this optimization effort is CM's 'free' camera movement, the huge variety, number and size of maps available and the large variety and number of units. These features make it hard to come up with optimizations that work consistently without causing other problems. Such efforts at optimization are manpower and time that Battlefront simply does not have, as Steve has stated earlier. Charles could be working on this for years in an attempt to get better frame rates. While this would be a 'worthy goal', it is unrealistic from a business standpoint - there is no guarantee that the amount of time spent on optimizing would result in a significantly better performing graphics engine. Other, larger developers typically have TEAMS of people working on such optimizations (which, importantly, does allow them to accomplish certain optimization tasks within certain time frames too). When CMSF was started sometime in 2004, OpenGL 2.0 was the latest specification available (with the 2.1 specification coming out before CMSF was released). Utilizing newer versions of OpenGL to potentially optimize CM's graphics engine still involves a lot of work, since the newer calls available don't necessarily involve built-in optimizations over the 2.0 calls. In fact a number of OpenGL calls have been deprecated in OpenGL 3.x and later, and this could result in wholesale redesigning of the graphics engine. On top of this is the issue that newer versions of OpenGL may not be supported by a number of current users' video cards (and laptops and whole Mac models on the Apple side).

As for the difference between the GTX 550 Ti and the GTX 660 Ti that Hister is experiencing, I'm not sure what may be going on. The GTX 550 Ti is based on the 'Fermi' architecture, while the GTX 660 Ti utilizes the 'Kepler' architecture. Kepler was optimized for the way games operate compared to the Fermi architecture which had slightly better performance in the 'compute' domain (using the GPU for physics calculations or other floating point, parallelized tasks). The GTX 660 Ti should have been a significant boost in video performance over the GTX 550 Ti, though this performance difference may not be too visible in CM due to the CPU bound nature of some views. It's possible that older drivers may have treated the Fermi architecture differently or simply that older drivers may have operated differently (there are trade-offs that drivers may make in image quality for performance - and sometimes this is 'baked into' the driver and isn't touched by the usual user-accessible controls). I have a GTX 570 I could potentially test, but I would probably need to know more details about the older setup to possibly reproduce the situation and see the differences first-hand.

Edited by Schrullenhaft

Oh my, thank you @Schrullenhaft for doing such an extensive test! 

This chapter can be closed now. I can finally "rest in peace" when it comes to CM games' performance. ;) The results are very telling. I'll put all the settings that I changed in the BIOS and Windows back to what they were, since no tinkering with them will make any change to how my computer performs with this game. Also, this makes me certain my next rig won't be that much of an improvement over the current one, which spares me the disappointment I would otherwise probably have over the future results.

CM is way ahead of its time - which consumer-oriented CPU out there can process 700,000 instructions per frame on a single core and still have room to breathe?

I think the recommended hardware specs should be updated, at least when it comes to the CPUs.

13 hours ago, Schrullenhaft said:

As for the difference between the GTX 550 Ti and the GTX 660 Ti that Hister is experiencing, I'm not sure what may be going on. The GTX 550 Ti is based on the 'Fermi' architecture, while the GTX 660 Ti utilizes the 'Kepler' architecture. Kepler was optimized for the way games operate compared to the Fermi architecture which had slightly better performance in the 'compute' domain (using the GPU for physics calculations or other floating point, parallelized tasks). The GTX 660 Ti should have been a significant boost in video performance over the GTX 550 Ti, though this performance difference may not be too visible in CM due to the CPU bound nature of some views. It's possible that older drivers may have treated the Fermi architecture differently or simply that older drivers may have operated differently (there are trade-offs that drivers may make in image quality for performance - and sometimes this is 'baked into' the driver and isn't touched by the usual user-accessible controls). I have a GTX 570 I could potentially test, but I would probably need to know more details about the older setup to possibly reproduce the situation and see the differences first-hand.

When I upgraded the rig I kept the 550 Ti from the previous rig, and only later did George MC send me his 660 Ti. The similar performance of the GPUs on the same rig is probably because the CPU is the bottleneck here.  I could sport a GTX 1080 and would get no different results if I didn't also swap the CPU. I have my eye on the 8th-generation Intel Core i5 8600K for when it becomes available again and the mining-driven GPU and RAM prices disappear (will they ever?). If the current price for the processor holds, I hear it is the best value for money when it comes to gaming.

Schrullenhaft, there's no need to do any other tests for me; I very much appreciate what you have done. If you want, I can do some testing for you in return if there is anything you'd like to learn from my hardware.


  • 2 weeks later...

Not surprisingly, Schrullenhaft's post is as detailed as it is interesting and informative!  Great stuff!

Generally speaking, CM2 has always had difficulty handling huge scenarios loaded with units when a camera view is up high.  As Schrully correctly diagnosed, the problem can be more CPU than video card related.  He mentioned a few reasons why, but I'd like to explain the technical side a bit more.

One of the major problems with doing a game like CM is the amount of data that has to be moved around.  In FPS games, especially those of the mid 2000s, you'll note that there are very few "actors" (independent elements with their own data, AI, animations, etc.) and only a subset are ever in one place at one time.  Even a fairly large FPS game of the day maxed out at about 64 total actors.  That's like fighting a battle in CM with one platoon vs another platoon, and nothing else.  Which means that even a fairly modest sized CM game has upwards of 10-20 or more times as many actors as a FPS game.  That right there is a big deal because that means 10-20 times more TacAI calculations, 10-20 times more LOS checks, 10-20 times more pathfinding calcs, 10-20 times more interactions with terrain, etc.  But it gets worse :D

Each actor in CM is generally more complex in terms of data and capabilities than the average FPS actor.  At a minimum, a CM actor is no less complex.  Which means that the 10-20 times greater demand from headcount alone goes up further when factoring in complexity.

Next, we have the terrain.  CM's terrain is pretty complex stuff.  Not just in terms of what is where and how much of it there is, but also in how much nuance it has.  For the most part FPS terrain has no nuance of the "X unit moving through Y terrain has Z effect" sort.  Instead the fairly simple actors are lumped into a few categories (infantry, tracked vehicle, wheeled vehicle) and the terrain is coded to simply pass/fail (allow/block) movement in that terrain for each of the categories.  CM simply isn't coded that way, for obvious reasons.

OK, so both units and terrain in CM are more complex than a maxed out FPS game.  You then raise the camera up into the air and guess what?  Now you have all that stress and strain on the CPU plus CM's need to theoretically show you the results instead of just calculating them in the background when the units are off screen.  The more complicated and graphically intensive a map is (trees are the big killers), the more work the CPU has to do to keep the GPU fed with data.

On top of this, CM lacks the sorts of shortcuts and design flexibility that FPS games have.  In a FPS game, if the framerate takes a hit from something, there's a lot of flexibility for the developer to limit its impact.  We can't limit the camera's perspective, for example.  We can't limit what terrain options a map has.  We can control the maximum map size, but we can't control how dense the terrain and units are.  We can't control the variety of units on each side (without facing a certain and brutal customer rebellion!).  Etc, etc.

This is on top of all the other factors we've talked about, such as these FPS games spending millions on programming in optimizations, card companies catering their drivers to the big AAA games (Bohemia probably has a hotline direct to a hugely senior manager at every hardware company, for example), hardware is not optimized for the stuff we're trying to do, OpenGL has some support issues with certain cards and drivers, etc.

The thing to take away here is that if Bohemia tried to make a CM2 type game 10 years ago it would probably have about the same performance as what we made.  Better in some places here and there, probably a little better on average, but overall it would not run like a FPS game in terms of consistent framerate.  Throwing money and bodies at these sorts of problems hits a point of diminishing returns very quickly.

Steve


1 hour ago, Heirloom_Tomato said:

If I understood the information from @Schrullenhaft correctly, would having the game be able to take advantage of multiple cores on a cpu give a boost to frame rates?

Charles has looked into this several times and the answer he came up with is "no".  The primary problem is that multi-core processors are better designed to chug through a list of tasks, and are not as good when being asked to constantly switch back and forth between different tasks in a very unpredictable way.

In theory Charles could do something like have all TacAI calculations done on a 2nd processor while the first one chugs away on the rest of the stuff.  Maybe he could have something else there as well, but IIRC he said pathfinding is not one of them.  Something about the need to pass data back and forth being too complex and constant to efficiently "outsource it" to another processor.  I think something like LOS checks could work, but this would require gutting a huge part of the game engine and that's absolutely not feasible.

The average computer user thinks "I have 4 cores, which means if a game were multi-core then I'd have 4 times the speed".  No, it doesn't work that way.  Not even close.  In fact, most games are either "bound" to one CPU core exclusively or, like CM Engine 3, offload very limited tasks to a second (or more) core.  The chosen activities, like with CM, are ones which form a queue of predictable tasks with a predictable conclusion and little chance of running into data management issues.  In our case, helping load maps and save games is the primary use of more than one core.  There's not much beyond that.
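
For readers curious what "a queue of predictable tasks" versus "constant back-and-forth" looks like in code, here is a generic, hypothetical Python sketch; it is not BFC's code, just an illustration of the general pattern being described:

```python
from concurrent.futures import ThreadPoolExecutor

def load_asset(path: str) -> bytes:
    # Independent, predictable work: each file can be read without waiting on the others.
    with open(path, "rb") as f:
        return f.read()

def load_scenario(asset_paths: list[str]) -> list[bytes]:
    # Loading maps cleanly onto a worker pool: a fixed list of jobs, no shared
    # mutable state, and a single join point at the end.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(load_asset, asset_paths))

def simulation_step(units: list[dict]) -> None:
    # The per-frame loop is the opposite case: each unit's TacAI, LOS and pathfinding
    # results can depend on what every other unit just did, so splitting this across
    # cores means constant synchronization rather than independent chunks of work.
    for unit in units:
        unit["state"] = "updated"  # stand-in for tightly coupled game logic
```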

If you are curious about why this is, here's a pretty good thread:

https://stackoverflow.com/questions/571766/why-dont-large-programs-such-as-games-use-loads-of-different-threads

Still, if CM2 were written over from scratch today there would be more use of other processors.  Not a ton more, but more than what we added into Engine 3.

Steve


Great stuff.

Some information. One of my rigs has an i7-6700k cpu. Pretty good. 4Ghz (?), 4 core/8 threads. (I forget if I have it overclocked atm.) It's paired up with an AMD R9 390 gpu driving an Asus MG279Q display. 2560x1440, Freesync capable. I've just upgraded to the latest Radeon drivers, 17.11.1 and played with the game settings. Using FRAPs, this rig drives CM (various games, various battles) at 13 fps.

That sounds horrid.

In actual use, if that "13" weren't sitting in the corner, I'd swear it was going near to 60 fps. Why? The fps is amazingly consistent. (Remember, movies are shown at 24 fps.) This is the power of Freesync. I can set a target framerate (in the case of CM, the target is 60. Aim for the stars. ;) ), and the hardware tries to maintain it. (My monitor Freesync limit is 90 fps, btw.) I'm using all settings in-game maxed out and have a bunch of AMD tweaks to improve visual quality.

All that data? Well, I've got the games installed on a spinner. That affects load times. In-game, I'd be reasonably certain that the game uses RAM. (This one has 64GB installed.) If I created a RAM-Disk and loaded CM onto it (or onto an NVME drive), I'd probably see load times cut down. I don't think there'd be any other changes.

My point? Even a decent machine gets hit hard by CM...but it can still maintain an amazingly smooth play experience. Consistent frame rate is more important than actual frame rate.


Yeah, CM's framerate performance only increases with an SSD-type drive if CM doesn't have enough DRAM allocated to it.  Which, for a huge scenario and relatively low availability of DRAM, could happen.  Normally, however, most rigs made in the past few years have enough DRAM wired in that this shouldn't be an issue PROVIDED the player doesn't have memory sucking stuff running in the background.  A browser with lots of tabs open can consume 1GB pretty easily, for example.  So this is a good time to remind people that shutting down unnecessary apps while playing CM is generally a good idea.  Unless, of course, you have C3K's huge chunk of silicon memory welded onto the motherboard ;)

Which reminds me to remind everybody that clock speed, not cores, is what one should think about in terms of speed.  An Intel Core 2 Duo running at 3 GHz is going to do somewhat better in stressful circumstances than an Intel Core 2 Quad running at 2.83 GHz, and an Intel Core 2 Duo running at 2.66 GHz isn't going to run that much worse under ordinary circumstances.
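
A crude way to see the clock-speed point: for a mostly single-threaded engine, only the speed of the one busy core matters. This sketch assumes equal IPC across the chips, which glosses over the Intel-vs-AMD IPC difference mentioned earlier, and the amount of work is a made-up figure:

```python
# Hypothetical single-threaded workload: elapsed time depends only on ONE core's clock.
WORK_CYCLES = 3.0e9  # made-up amount of work for the main game thread

cpus = {
    "Core 2 Duo  @ 3.00 GHz (2 cores)": 3.00e9,
    "Core 2 Quad @ 2.83 GHz (4 cores)": 2.83e9,
    "Core 2 Duo  @ 2.66 GHz (2 cores)": 2.66e9,
}
for name, clock_hz in cpus.items():
    # The extra cores sit idle for this thread, so only the per-core clock matters.
    print(f"{name}: {WORK_CYCLES / clock_hz:.2f} s for the same chunk of work")
```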

Also, laptops can be a bit of a problem as often their caches aren't as beefy or fast as desktop models.  The cache is quite important.

Steve


Thank you Steve for taking the time to make such a detailed addendum.

Yeah, CM is ahead of its time; no consumer hardware can run its biggest stock scenarios at 60 frames or more, which is the 3D gaming norm nowadays. I'm satisfied I've come forward with my issues; I'll definitely have an easier time deciding what hardware to buy - Ryzen is really out of the picture, Intel only for CM games (and all the others which mostly rely on one core per CPU).

13 frames on a Freesync monitor producing the same result as 60 frames on a classic non-synced monitor!? I am personally reserved about believing it until I see it myself, but then again I have no experience with these monitors so you might be fully right, c3k (of course everyone's eyes see things differently; as I said, for me going below 25 is a horrible experience, while some can play at 15 no problemo).

Steve, what I noticed, and is always present, is something I already mentioned but would like to put forward again because it wasn't directly tackled by you in my initial posts. When I start a scenario I have much higher frames, and if I move the camera just straight forward from the starting position those frames remain high. Then I pan the camera around 360 degrees, return to the same starting camera position, and my frames drop by 20 to 70%. That is consistent in the scenarios I played. Do you maybe know why this happens? To a tech simpleton like me it seems like a "memory leak" issue. Is it WAD or is something off here?

  


14 minutes ago, Hister said:

 

13 frames on a Freesync monitor producing the same result as 60 frames on a classic non-synced monitor!? I am personally reserved about believing it until I see it myself, but then again I have no experience with these monitors so you might be fully right, c3k (of course everyone's eyes see things differently; as I said, for me going below 25 is a horrible experience, while some can play at 15 no problemo).

  

LOL... the 13 fps is very smooth. FWIW, I am sensitive to screen refresh. I can see 60 hz...and don't like it. (My peripheral vision detects 60 hz or lower. Straight on it's much more subtle, but still visible. Yes, my career and personal well-being has depended upon visual acuity. Luckily, not so much any more. ;) )

Freesync/Gsync are very cool technologies which smooth out the video experience. I recommend testing it out at a store, if you can. Or at a friend's. Whichever is cheaper. :)

Edited by c3k

Remember that framerate sample numbers are averages, not a true representation of what your eye is being subjected to.  25 fps average might contain micro bursts of much higher and much lower rates.  Some people are more sensitive to these very small changes than others.  Same thing about refresh rates, like C3K pointed out. 

A non computer example of this is fluorescent lighting.  Most people most of the time can work under pure fluorescent lighting, but some can not.  They get headaches or worse even after a short time.  However, a larger group of people have more subtle negative effects, such as feeling fatigued.  If you ask a group of 100 people what their experiences are like, most will say "not a problem" but some will say it is horrible for them.  Nobody is right or wrong, it's just specific bodies reacting differently to the same conditions.  It is very similar with video gaming.

The solution for gaming, as it is for fluorescent lighting, is to work hard to keep the "flicker rate" down as much as possible.  CM prioritizes smoothness over top speed.
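
A tiny illustration of the averages point, with made-up numbers: two runs can report the same average FPS while feeling completely different, because the worst frames are what the eye notices:

```python
# Two hypothetical runs with identical average FPS but very different consistency.
steady = [40.0] * 60          # frame times in ms: perfectly even pacing (~25 FPS)
spiky  = [20.0, 60.0] * 30    # alternating fast/slow frames, same total time

for name, frame_times in [("steady", steady), ("spiky", spiky)]:
    avg_fps   = 1000 * len(frame_times) / sum(frame_times)
    worst_fps = 1000 / max(frame_times)   # FPS during the single slowest frame
    print(f"{name}: average {avg_fps:.1f} FPS, slowest frame {worst_fps:.1f} FPS")
# Both average 25 FPS, but the spiky run keeps dipping to ~17 FPS.
```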

1 hour ago, Hister said:

Thank you Steve for taking the time to make such a detailed addendum.

No problem!  This is a complicated subject and the more we can learn from it the better.  We are always learning just as you guys are.

1 hour ago, Hister said:

13 frames on a Freesync monitor producing the same result as 60 frames on a classic non-synced monitor!? I am personally reserved about believing it until I see it myself...

 

I found this very informative article on the subject:

https://us.battle.net/forums/en/overwatch/topic/20749876977

1 hour ago, Hister said:

Steve, what I noticed, and is always present, is something I already mentioned but would like to put forward again because it wasn't directly tackled by you in my initial posts. When I start a scenario I have much higher frames, and if I move the camera just straight forward from the starting position those frames remain high. Then I pan the camera around 360 degrees, return to the same starting camera position, and my frames drop by 20 to 70%. That is consistent in the scenarios I played. Do you maybe know why this happens? To a tech simpleton like me it seems like a "memory leak" issue. Is it WAD or is something off here?

If there was a memory leak the game would eventually crash, so it's safe to say there is none.  And I can say that working with Charles for 20 years now, memory leaks are not something his code is prone to producing.  I'm sure there have been some over the years, but I for one can't think of a single example.

What is likely happening is that your camera movement is forcing CM to presume you might be able to see more units and therefore divert resources to them.  Moving the camera back to the ground and away from units tends to get the framerate going higher again (in my experience anyway).

Steve


1 hour ago, Battlefront.com said:

What is likely happening is that your camera movement is forcing CM to presume you might be able to see more units and therefore divert resources to them.  Moving the camera back to the ground and away from units tends to get the framerate going higher again (in my experience anyway).

Steve

I'm not changing the elevation of the camera. I stick to the ground all the time, pan the camera around my starting unit, and when I return to the starting viewpoint position my frames are much lower than they were at the start. If I stay at ground level at the start of the mission and just go straight forward and then return to the starting position with the reverse movement, nothing changes; frames remain the same. As soon as I pan to the left or right or change elevation, frames drop. It's a funky thing.

 

1 hour ago, Battlefront.com said:

Nobody is right or wrong, it's just specific bodies reacting differently to the same conditions.  It is very similar with video gaming.

Yep. That's why I have issues with sub-25 frames - it irritates me while playing even for a short time span - and others don't.

