
Schrullenhaft

Members
  • Posts: 9,199
  • Joined
  • Last visited
  • Days Won: 3

Reputation Activity

  1. Like
    Schrullenhaft got a reaction from Captain Black in Update problem   
    Here's the link to the store for the Windows version of the CMBS 4.0 Upgrade.
  2. Upvote
    Schrullenhaft got a reaction from MOS:96B2P in Irratic Framerate Issue   
    I ran the same scenarios as Hister using my system with the following specs:
    AMD FX 8320 3.5GHz 8-core (4 modules totaling 8 integer, 4 floating point, up to 4.0GHz turbo mode)
    8GB of DDR3 1600 (CAS 9)
    MSI GeForce GTX 660 Ti  - 388.00 driver
    Asrock 880GM-LE FX motherboard (AMD 880G chipset)
    Samsung 840 EVO 250GB SSD
    Windows 7 Home 64-bit SP1 (latest patches)
    Running at a resolution of 1920 x 1200.
    Using the default settings in CMBN 4.0 (Balanced/Balanced, Vsync OFF and ON, AA OFF) and in the Nvidia Control Panel, I typically got about 6 FPS (measured with the latest version of FRAPS) in "Op. Linnet II a USabn UKgrnd" on the German entry side of the map (all the way to the edge), scrolling right or left while looking at the Americans in Richelle. In "The Copse" scenario it measured around 28 FPS behind the Allied armored units at the start (scrolled around the map a bit).
    Messing around with Vsync (both on and off), anti-aliasing, anisotropic filtering, Process Lasso (affinity, etc.) and power-saving settings in the Windows control panel didn't have a significant effect on the low FPS in 'Op. Linnet II...'. I overclocked the FX 8320 to 4.0GHz (simply using the multipliers in the BIOS and also turning off several power-saving features there, such as APM, AMD Turbo Core Technology and CPU Thermal Throttle). With 'Op. Linnet II...' the FPS increased to only 7. Turning off the icons (Alt-I) bumped the FPS up by 1 additional frame to 8 FPS, since that option reduces the number of objects drawn in this view.
    There are some Hotfixes from Microsoft that supposedly address issues with the Bulldozer/Piledriver architecture and Windows 7 involving CPU scheduling and power policies (KB2645594 and KB2646060). They do NOT come through Windows Update (you have to request them from Microsoft). I have NOT applied these patches to see whether they would make a difference, since their changes supposedly CANNOT be removed, even if you uninstall them. A number of users on various forums have stated that the changes made little difference to their particular game's performance.
    I decided to compare this to an Intel system that was somewhat similar:
    Intel Core i5 4690K 3.5GHz 4-core  (possibly running at 3.7 to 3.9GHz in turbo mode)
    16GB of DDR3-2133 (CAS 9)
    eVGA GeForce GTX 670 - 388.00 driver
    Asrock Z97 Killer motherboard (Z97 chipset)
    Crucial MX100 512GB SSD
    Windows 7 Home 64-bit SP1 (latest patches)
    Running at a resolution of 1920 x 1200.
    Again, using the same settings as on the FX system in CMBN and the Nvidia Control Panel, I got 10 FPS in 'Op. Linnet II...' while scrolling on the far side looking at the American forces in the town. In 'The Copse' scenario the FPS went up to 40 FPS behind the Allied vehicles at their start positions. The biggest difference between the GTX 660 Ti and the GeForce GTX 670 is the greater memory bandwidth of the 670, since it has a 256-bit memory bus compared to the 660 Ti's 192-bit bus. So POSSIBLY the greater GPU memory bandwidth, in conjunction with the Intel i5's higher IPC (Instructions Per Cycle) efficiency and the increased system memory bandwidth (faster system RAM), resulted in the higher frame rate on the Intel system - but only by so much.
    I ran a trace of the OpenGL calls used by CMBN while running 'Op. Linnet II a USabn UKgrnd' on the FX system. This recorded all of the OpenGL calls being used in each frame. The trace SEVERELY slowed down the system during the capture (a lot of data has to be written to the trace file). Examining the trace file suggests that CMBN is SEVERELY CPU BOUND in certain graphical views. This is especially true with views of a large number of units and terrain like that in 'Op. Linnet II...'.
    What appears to be happening is that some views in large CM scenarios involve A LOT of CPU time issuing instructions to the video card/'frame buffer'. The CPU is spending so much time handling part of the graphics workload (which IS normal) and sending instructions to the video card on what to draw that the video card does not have a full (new) frame of data to post to the frame buffer at a rate of 60 or 30 FPS (Vsync). At 30 FPS each frame would have to be generated between the CPU and the video card within 33.3ms. Instead this is taking around 100ms on the Intel system and about 142ms on the FX system (resulting in the 10 and 7 FPS respectively). Some frames in the trace file had hundreds of thousands of calls, some reaching nearly 700,000 (not every call is necessarily communicated between the CPU and the video card, only a fraction of them are), whereas sections where the FPS was higher might have fewer than 3,000 calls being executed. The low frame rate is a direct consequence of how busy the CPU is, and this can be seen with both Intel and AMD CPUs.
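    A quick back-of-the-envelope check of those frame-time numbers (a minimal sketch; the 100ms and 142ms figures are the measurements quoted above):

```c
#include <stdio.h>

/* Frame-time vs. FPS arithmetic from the numbers above: a 30 FPS Vsync
 * target allows 1000/30 = 33.3 ms per frame, while the measured ~100 ms
 * (Intel) and ~142 ms (FX) per frame work out to ~10 and ~7 FPS. */
int main(void) {
    const double target_fps = 30.0;
    const double measured_frame_ms[] = { 100.0, 142.0 };   /* Intel i5, AMD FX */

    printf("Frame budget at %.0f FPS: %.1f ms\n", target_fps, 1000.0 / target_fps);
    for (int i = 0; i < 2; i++)
        printf("%.0f ms per frame -> %.1f FPS\n",
               measured_frame_ms[i], 1000.0 / measured_frame_ms[i]);
    return 0;
}
```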
    So the accusation comes up: is the CM graphics engine un-optimized? To a certain extent, it is. There are limitations on what can be done in the environment and with the OpenGL 2.x calls that are available. CM could be optimized a bit further than it is currently, but this involves a HUGE amount of time experimenting and testing. Working against this optimization effort is CM's 'free' camera movement, the huge variety, number and size of maps available and the large variety and number of units. These features make it hard to come up with optimizations that work consistently without causing other problems. Such optimization efforts take manpower and time that Battlefront simply does not have, as Steve has stated earlier. Charles could be working on this for years in an attempt to get better frame rates. While this would be a 'worthy goal', it is unrealistic from a business standpoint - there is no guarantee that the amount of time spent on optimizing would result in a significantly better performing graphics engine. Other, larger developers typically have TEAMS of people working on such optimizations (which, importantly, also allows them to accomplish certain optimization tasks within certain time frames). When CMSF was started sometime in 2004, OpenGL 2.0 was the latest specification available (with the 2.1 specification coming out before CMSF was released). Utilizing newer versions of OpenGL to potentially optimize CM's graphics engine still involves a lot of work, since the newer calls available don't necessarily provide built-in optimizations over the 2.0 calls. In fact a number of OpenGL calls have been deprecated in OpenGL 3.x and later, and this could result in wholesale redesigning of the graphics engine. On top of this is the issue that newer versions of OpenGL may not be supported by a number of current users' video cards (and laptops and whole Mac models on the Apple side).
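    For a sense of what 'deprecated' means here (a minimal sketch, NOT CM's actual rendering code, and assuming an extension loader such as GLEW for the buffer functions): OpenGL 2.x-era immediate-mode drawing was removed from the 3.x+ core profile, and the buffer-based replacement changes how geometry gets submitted to the card:

```c
#include <GL/glew.h>   /* assumed extension loader for glGenBuffers & co. */

/* OpenGL 2.x immediate mode: every vertex is a separate driver call from
 * the CPU. This entire style was removed from the OpenGL 3.x+ core profile. */
static void draw_triangle_immediate(void) {
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}

/* The replacement path: geometry lives in a buffer object on the video card
 * and is drawn with a single call, but moving to it means reworking how the
 * engine submits its data (buffers, attribute setup, shaders, etc.). */
static GLuint vbo;

static void setup_triangle_buffer(void) {
    const GLfloat verts[] = { -1.0f, -1.0f, 0.0f,
                               1.0f, -1.0f, 0.0f,
                               0.0f,  1.0f, 0.0f };
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
}

static void draw_triangle_buffered(void) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);   /* 2.x-style setup; core 3.x uses glVertexAttribPointer + shaders */
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```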
    As for the difference between the GTX 550 Ti and the GTX 660 Ti that Hister is experiencing, I'm not sure what may be going on. The GTX 550 Ti is based on the 'Fermi' architecture, while the GTX 660 Ti utilizes the 'Kepler' architecture. Kepler was optimized for the way games operate compared to the Fermi architecture which had slightly better performance in the 'compute' domain (using the GPU for physics calculations or other floating point, parallelized tasks). The GTX 660 Ti should have been a significant boost in video performance over the GTX 550 Ti, though this performance difference may not be too visible in CM due to the CPU bound nature of some views. It's possible that older drivers may have treated the Fermi architecture differently or simply that older drivers may have operated differently (there are trade-offs that drivers may make in image quality for performance - and sometimes this is 'baked into' the driver and isn't touched by the usual user-accessible controls). I have a GTX 570 I could potentially test, but I would probably need to know more details about the older setup to possibly reproduce the situation and see the differences first-hand.
  3. Upvote
    Schrullenhaft got a reaction from sburke in CMBN persistent crash Scottish Corridor Campaign, Hansel and Gretel   
    Sorry to take so long to look at this. I loaded up the saved campaign on my 'full 4.0 installation' of CMBN and was able to play it for 28 minutes without issue (I saved the game at that point). I played the game in a fairly passive manner, not issuing many or particularly detailed commands.
    I played this on a PC with Windows 7 64-bit, an i5 2500K with 16GB of RAM and a GeForce GTX 770 video card (NOT with the latest drivers). Everything played fine and I even played it twice; the first time for about 6 minutes before quitting the game... I had forgotten exactly when you were experiencing the crashing, so I had to load it up again and play a bit longer to make sure I was duplicating the issue.
    This install should have come from a '4.0 full' installer, but I can double-check on another installation that I'm fairly certain is also a full install.
  4. Upvote
    Schrullenhaft got a reaction from Hardradi in CMBN persistent crash Scottish Corridor Campaign, Hansel and Gretel   
  5. Upvote
    Schrullenhaft got a reaction from sburke in Could not initialize OpenGL graphics.   
    Sounds like you may need to install/reinstall your video drivers; OpenGL drivers typically come with them. On occasion (and somewhat rarely) I've seen a Windows Update version of some video drivers lack the OpenGL files that would normally be part of the driver installation, though I don't really see that anymore. What Windows version are you running and what video card/chip do you have?
  6. Like
    Schrullenhaft got a reaction from Chrizwit3 in Game crashes during command phase.   
    When the games crash during the command phase, are you issuing any particular type of order? Are you issuing orders, looking around the map, etc., or are you leaving the game idle (a possible power-saving issue) when it crashes?
    Is this CMBN, another game, or is it happening with several/all of the CM games?
    Do you have anything running in the background (utilities, browsers, etc.)? Do you know what is loading at startup? You may want to check by right-clicking on the Start Menu and selecting 'Task Manager' from the popup menu, then going to the 'Startup' tab, which lists the programs/applets/utilities that launch at startup. If you would like to stop an item from launching at startup, right-click its listing and select 'Disable' from the popup menu; it should NOT load the next time you launch Windows.
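    If you prefer checking outside the Task Manager, the classic per-machine 'Run' registry key is one common place where startup programs register themselves; a minimal sketch that only lists that one key (it does not cover the Startup folder, per-user Run keys, services or scheduled tasks):

```c
#include <windows.h>
#include <stdio.h>

#pragma comment(lib, "advapi32.lib")

/* Lists the values under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run,
 * one common location where programs register themselves to run at startup. */
int main(void) {
    HKEY key;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run",
                      0, KEY_READ, &key) != ERROR_SUCCESS) {
        fprintf(stderr, "Could not open the Run key.\n");
        return 1;
    }

    char name[256];
    BYTE data[1024];
    for (DWORD i = 0; ; i++) {
        DWORD nameLen = sizeof(name), dataLen = sizeof(data), type;
        if (RegEnumValueA(key, i, name, &nameLen, NULL, &type,
                          data, &dataLen) != ERROR_SUCCESS)
            break;  /* ERROR_NO_MORE_ITEMS ends the enumeration */
        if (type == REG_SZ || type == REG_EXPAND_SZ)
            printf("%s = %s\n", name, (const char *)data);
        else
            printf("%s = (non-string value)\n", name);
    }

    RegCloseKey(key);
    return 0;
}
```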
  7. Like
    Schrullenhaft got a reaction from Chrizwit3 in Game crashes during command phase.   
    The 'Wondershare Studio' is some sort of collage/scrapbook software that appears to get installed alongside other programs. If you're not using it, then I suggest disabling it (though it likely isn't the cause of the issue here). We've seen problems with the 'Nahimic' audio software, which often gets installed with some MSI laptops/motherboards and other systems; I can't recall off-hand whether it prevented CM from running or caused problems while the game was running. I don't recall any problems with the Realtek Audio Manager (usually a 'red speaker' icon in your lower-right tray). If it isn't being used, it can also be disabled without affecting your audio.
    The 'chrome' entry, I assume, is simply the Chrome browser being launched at startup. Removing it may speed up boot times, unless you almost always head to the internet on each boot. The 'Delayed Launcher' may be some Intel software involved with 'system restores'; you could disable that and it supposedly should make booting a bit faster.
    I personally haven't really seen too many problems with Windows Defender. I do use other anti-virus security programs, so they typically run the security on the computer while Windows Defender acts more as a reminder service about updates or security being off/disabled. As a primary/sole security system you could add an 'exclusion' for the CM executables/folders in Windows Defender. If you're running a separate anti-virus/security program, then you could do something similar for it. However exceptions/exclusions really are geared for when the program will not run at all. It is somewhat rare for the security software to interfere once the program is running (at least in the manner that is being seen here).
    Are you overclocking your video card or CPU at all ? If so, you may want to down-clock them or return them to the default clock speed and see if that makes any difference. 'Factory overclocks' typically should be fine (since they're the defaults as programmed by the manufacturer).
    You may want to run some memory diagnostics on your computer (typically requiring a reboot into the diagnostic's OS), such as the free edition of Memtest86. There are some tests with the memory diagnostics that can cause almost any memory to not pass (certain 'hammer' tests), but most tests should be applicable and if there are errors you may need to replace your RAM or possibly change settings/voltages on it in your BIOS/UEFI setup.
  8. Like
    Schrullenhaft got a reaction from A Canadian Cat in can not play these game any more?   
    I only know stuff that Google tells me...  I'm on the glideslope of mental oblivion myself.
  9. Upvote
    Schrullenhaft got a reaction from sburke in can not play these game any more?   
  10. Like
    Schrullenhaft got a reaction from George MC in can not play these game any more?   
    I was able to get the entire CMSF series installed and activated in Windows 10 Pro v. 1709 ('CE'). Avast needed to be disabled in order to install and activate the game/modules. Once the game was installed, activated and patched, exceptions were added in Avast for the game directory (the individual game executable files didn't seem to need an explicit exception) and the game needed to be run with 'Run as administrator' privileges. 'Runservice.exe' also did NOT need an Avast exception. Nothing was changed in regard to Windows Defender (whatever version installs with Windows 10 Pro). CMSF was installed to its default directory within the 'Program Files (x86)' directory. Avast did flag the 1.21 patch before it was disabled (something it and a lot of other security programs do with that executable).
    No other CM games were installed on this computer. The only security software installed is the free version of Avast and Malwarebytes (and whatever Windows Defender consists of). This computer also didn't have a lot of other software installed. Just browsers, Libre Office, Acrobat, Flash, possibly Java, etc. The 'LicCtrl Service' was started and running with no errors (a key to the problem being seen here). The 'runservice.exe' file is 16,384 bytes in size and has no special attributes set. The 'msvcr71.dll' (a Microsoft C Runtime library DLL) is 348,160 bytes in size, located in the 'C:\Windows' directory and has no special attributes set.
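    If anyone wants to compare their own copies against those byte counts, here's a quick size check (a minimal sketch; pass it whatever paths apply on your machine, since the runservice.exe location can vary by install):

```c
#include <stdio.h>

/* Prints the size in bytes of each file given on the command line, so the
 * figures above (runservice.exe = 16,384 bytes, C:\Windows\msvcr71.dll =
 * 348,160 bytes) can be compared on another machine. */
int main(int argc, char *argv[]) {
    for (int i = 1; i < argc; i++) {
        FILE *f = fopen(argv[i], "rb");
        if (!f) {
            printf("%s: not found or not readable\n", argv[i]);
            continue;
        }
        fseek(f, 0, SEEK_END);
        printf("%s: %ld bytes\n", argv[i], ftell(f));
        fclose(f);
    }
    return 0;
}
```

    Running it against C:\Windows\msvcr71.dll should report 348,160 bytes on a working install.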
  11. Upvote
    Schrullenhaft got a reaction from A Canadian Cat in can not play these game any more?   
  12. Like
    Schrullenhaft got a reaction from Sgt.Squarehead in can not play these game any more?   
  13. Upvote
    Schrullenhaft got a reaction from MOS:96B2P in can not play these game any more?   
  14. Upvote
    Schrullenhaft got a reaction from c3k in can not play these game any more?   
  15. Upvote
    Schrullenhaft got a reaction from A Canadian Cat in PC Specs   
    I believe your 'build' should work fine for CM purposes. Admittedly it is always 'expectations' that color how good something may be. For CM, single-core CPU performance is the key to smooth (or 'near smooth') gameplay. While AMD has gotten a bit closer to Intel in terms of IPC (Instructions Per Clock/Cycle), it is still a bit behind. This is something you'll see when 'single core' benchmarks are run. AMD somewhat makes up for this by being a bit more affordable for the horsepower you are getting; an equivalent Intel system may cost a bit more. If you are willing to spend more (and possibly wait for availability... the latest Intel 8th generation CPUs tend to sell out), an i5 8400 might be a somewhat better performer for CM. The 8th generation Intel Core CPUs had a major change in the number of cores, with the mainstream CPUs moving from 4 cores to 6. This is of no benefit to CM, but it helps Intel combat AMD's latest CPUs. One other difference is that the Ryzen 5 1500X is capable of being overclocked, while the i5 8400 does not officially support overclocking (locked multipliers, etc.). So you could get a little more performance out of the Ryzen, though it is possible that the i5 8400 may still outperform the overclocked Ryzen.
    Motherboard-wise an Intel board (Z370 chipset) will be a bit more expensive than the AMD B350 series (around US$30 - $50 on average). I believe you could use the same RAM, though you'll always want to check any memory QVLs (Qualified Vendor List) for a motherboard to make sure you're getting something compatible.
    The video card should be fine. Video cards are part of the graphics performance equation for CM, but not nearly as much as CPUs. More horsepower in the video card can allow for better texture filtering and anti-aliasing. Perhaps future versions of CM can benefit a bit more from high performance GPUs than the series does currently.
  16. Like
    Schrullenhaft got a reaction from Hister in PC Specs   
  17. Upvote
    Schrullenhaft got a reaction from Long_Fang in after 1.32 update the game won't start?   
    I just installed the entire CMSF series on an AMD Ryzen system running Windows 10 v. 1709 and the Avast (free) anti-virus. Avast interfered at times (especially with the patches), but after installing and activating each module the game ran fine. This includes the 1.32 patch. In my case there was no need to make any changes to the DEP settings. That may not always be true for everyone and it could vary depending on the CPU and motherboard you have.
    Where was the 1.32 patch downloaded from ? I suggest selecting the 'Battlefront server' for the download (which will actually come from Battlefront's Sharefile account).
    For now, reinstall the 1.31 patch and make sure that ALL of the appropriate boxes are checked during the patch install (i.e. - checkmarks for each module that you have and the base game). See if CMSF runs now. Hopefully it does. Disable any anti-virus you have (temporarily) and run the recently downloaded 1.32 patch by right-clicking it and selecting 'Run as administrator' from the popup menu. Again, make sure that all of the appropriate boxes are checked in the patch installation. With that finished attempt to run CMSF and see if it gives you the same error (with your anti-virus/security software disabled). You will want to launch the game again with the right-click and 'Run as administrator' selection again. Hopefully this works (and the game has been patched to 1.32 as seen at the bottom middle of the main menu screen). If it does, go ahead and exit the game and then re-enable your anti-virus/security software and add an exception for the Battlefront directory that is typically in your 'Program Files (x86)' directory (assuming you're running the 64-bit version of Windows).
  18. Upvote
    Schrullenhaft got a reaction from A Canadian Cat in License Error   
    You will want to open up a ticket with the Helpdesk (www.battlefront.com > 'Support' in the menu bar > 'Helpdesk' > click on the blue '+new ticket' button in the upper right). I believe that they will need to send you a file to delete your current activation(s) and allow you to reactivate. Make sure to let the Helpdesk know that you are running CMFI + Gustav Line without the 3.0 and 4.0 Upgrades since there are different files for each version.
    My guess is that you made some sort of change (hardware or OS or even on a rare occasion a hardware driver) that has tripped up the copy-protection system to assume that it may be running on a different computer.
  19. Upvote
    Schrullenhaft got a reaction from A Canadian Cat in CMII - 4.0 Engine Upgrade   
    Wburn - You can purchase the 'Combat Mission 4 Upgrade Big Bundle (Windows)' to upgrade the games you own; the download-only version is US$25.00. With this purchase you should have access to the 'full installer' for each of the 4.0 versions of the games. So you could back up any in-progress games, downloaded scenarios, mods, etc. and then completely uninstall/delete the current games that you have and reinstall them with these 'full installers'. They have all of the current patches and content built in. Anything that you have activated now should remain activated and you will just have to use the new license key (or possibly the '3.0' license key in some situations) to upgrade to 4.0.
    When downloading, you will want the 'full installer', which is obviously larger. There are also 'upgrade' installers that apply just the 4.0 Upgrade alone and require that everything already be patched up to the current version prior to the 4.0 Upgrade.
  20. Upvote
    Schrullenhaft got a reaction from A Canadian Cat in CMSF CTD   
    Good troubleshooting. The current (original) version of CMSF uses the eLicense copy-protection system. Since CMBN, Battlefront has used a copy-protection system that they call the 'Online Activation System' or something to that effect. It is a different copy-protection system that does NOT have an unlicense function; there is no external 'service' and everything is wrapped up in an encryption system (which occasionally trips up some anti-virus/security programs with false positives). I don't know whether CMBN and the newer games would run into the same issue with the banking security software that CMSF did.
    I can only guess that the banking security software may have been going out and possibly shutting down programs/services that it doesn't recognize (most likely the 'runservice.exe') in order to prevent key-logging or some other eavesdropping program from taking your banking credentials.
    As IanL mentioned, there are no retail copies of Battlefront games. CMSF was the last game that Battlefront allowed another retail distributor to carry and since then the games have only been available on the website.
  21. Upvote
    Schrullenhaft got a reaction from borg in CMSF CTD   
  22. Like
    Schrullenhaft got a reaction from Bulletpoint in Irratic Framerate Issue   
  23. Like
    Schrullenhaft got a reaction from A Canadian Cat in Irratic Framerate Issue   
  24. Like
    Schrullenhaft got a reaction from Badger73 in Irratic Framerate Issue   
  25. Upvote
    Schrullenhaft got a reaction from BletchleyGeek in Irratic Framerate Issue   