Schrullenhaft

Members
  • Posts: 9,199
  • Joined
  • Last visited
  • Days Won: 3
Reputation Activity

  1. Like
    Schrullenhaft got a reaction from Sgt.Squarehead in İnitialize_national_tile_surface error   
    This issue was CAUSED by AMD. They had a bug in their video drivers that prevented the games from working (an issue with the way Visual Basic or similar libraries handled video calls, I believe). Slightly older versions of the driver didn't have the bug, but newer video cards can't use those older drivers (there is a minimum driver version for each new video card). This happens every so often with games. The big AAA titles usually get a quick fix (within a month or two), but smaller game developers may not see a fix for a long time.
    I think I actually tried one of the 17.10 AMD driver releases and found that SC WW1/Breakthrough worked properly.
  2. Upvote
    Schrullenhaft got a reaction from MOS:96B2P in Irratic Framerate Issue   
    I ran the same scenarios as Hister using my system with the following specs:
    AMD FX 8320 3.5GHz 8-core (4 modules totaling 8 integer cores and 4 floating-point units, up to 4.0GHz in turbo mode)
    8GB of DDR3 1600 (CAS 9)
    MSI GeForce GTX 660 Ti  - 388.00 driver
    Asrock 880GM-LE FX motherboard (AMD 880G chipset)
    Samsung 840 EVO 250GB SSD
    Windows 7 Home 64-bit SP1 (latest patches)
    Running at a resolution of 1920 x 1200.
    Using the default settings in CMBN 4.0 (Balanced/Balanced, Vsync OFF and ON, AA OFF) and in the Nvidia Control Panel, I typically got about 6 FPS (measured with the latest version of FRAPS) in "Op. Linnet II a USabn UKgrnd" on the German entry side of the map (all the way to the edge), scrolling right or left while looking at the Americans in Richelle. In "The Copse" scenario it measured around 28 FPS behind the allied armored units at the start (scrolling around the map a bit).
    Messing around with Vsync (both on and off), anti-aliasing, anisotropic filtering, Process Lasso (affinity, etc.) and the power-saving settings in the Windows control panel didn't have a significant effect on the low FPS in 'Op. Linnet II...'. I overclocked the FX 8320 to 4.0GHz (simply using the multipliers in the BIOS and also turning off several power-saving features there, such as APM, AMD Turbo Core Technology, CPU Thermal Throttle, etc.). With 'Op. Linnet II...' the FPS increased to only 7 FPS. Turning off the unit icons (Alt-I) bumped the FPS up by one more frame to 8 FPS, since that option reduces the number of objects drawn in this view.
    There are some Hotfixes from Microsoft that supposedly address issues with the Bulldozer/Piledriver architecture and Windows 7 involving CPU scheduling and power policies (KB2645594 and KB2646060); they do NOT come through Windows Update (you have to request them from Microsoft). I have NOT applied these patches to see if they would make a difference, since their changes (supposedly) CANNOT be removed, even if you uninstall them. A number of users on various forums have stated that the changes made little difference to their particular game's performance.
    I decided to compare this to an Intel system that was somewhat similar:
    Intel Core i5 4690K 3.5GHz 4-core  (possibly running at 3.7 to 3.9GHz in turbo mode)
    16GB of DDR3-2133 (CAS 9)
    eVGA GeForce GTX 670 - 388.00 driver
    Asrock Z97 Killer motherboard (Z97 chipset)
    Crucial MX100 512GB SSD
    Windows 7 Home 64-bit SP1 (latest patches)
    Running at a resolution of 1920 x 1200.
    Again using the same settings used on the FX system with CMBN and the Nvidia Control Panel, I got 10 FPS in 'Op. Linnet II...' while scrolling on the far side looking at the American forces in the town. In 'The Copse' scenario the FPS went to 40 FPS behind the allied vehicles at their start positions. The biggest difference between the GTX 660 Ti and the GeForce GTX 670 is the greater memory bandwidth of the 670, since it has a 256-bit memory bus compared to the 660 Ti's 192-bit bus. So POSSIBLY the greater GPU memory bandwidth, in conjunction with the Intel i5's higher IPC (Instructions Per Cycle) efficiency and the faster system RAM, resulted in the higher frame rate on the Intel system, but only by so much.
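    For reference, the bus-width difference translates into bandwidth roughly like this. A back-of-the-envelope sketch in C; the ~6008 MT/s effective GDDR5 rate is the reference spec for both cards and is an assumption for any factory-overclocked board:

        /* GPU memory bandwidth from bus width and effective transfer rate. */
        #include <stdio.h>

        static double bandwidth_gbs(int bus_bits, double mtps)
        {
            /* bytes/s = (bus width in bytes) * (transfers per second) */
            return (bus_bits / 8.0) * mtps * 1e6 / 1e9;
        }

        int main(void)
        {
            printf("GTX 660 Ti: %.1f GB/s\n", bandwidth_gbs(192, 6008.0)); /* ~144.2 */
            printf("GTX 670:    %.1f GB/s\n", bandwidth_gbs(256, 6008.0)); /* ~192.3 */
            return 0;
        }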
    I ran a trace of the OpenGL calls used by CMBN while running 'Op. Linnet II a USabn UKgrnd' on the FX system. This recorded all of the OpenGL calls being used in each frame. The trace SEVERELY slowed down the system during the capture (a lot of data has to be written to the trace file). Examining the trace file suggests that CMBN is SEVERELY CPU BOUND in certain graphical views. This is especially true for views with a large number of units and terrain, like those in 'Op. Linnet II...'.
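    The post doesn't name the tracing tool, but the core idea is just tallying calls between buffer swaps. A minimal sketch of that kind of per-frame counter (TRACE_GL and end_frame are hypothetical names, and the loop below stands in for a real render loop):

        #include <stdio.h>

        static unsigned long calls_this_frame;

        /* Wrap each GL call site: TRACE_GL(glDrawArrays(GL_TRIANGLES, 0, n)); */
        #define TRACE_GL(call) (calls_this_frame++, (call))

        /* Log and reset the tally at SwapBuffers time. */
        static void end_frame(void)
        {
            printf("GL calls this frame: %lu\n", calls_this_frame);
            calls_this_frame = 0;
        }

        int main(void)   /* tiny self-contained demo of the bookkeeping */
        {
            for (int f = 0; f < 3; f++) {
                for (int i = 0; i < 700; i++)
                    TRACE_GL((void)0);   /* stand-in for a real GL call */
                end_frame();
            }
            return 0;
        }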
    What appears to be happening is that some views in large CM scenarios involve A LOT of CPU time in issuing instructions to the video card/'frame buffer'. The CPU spends so much time handling its part of the graphics workload (which IS normal) and sending instructions to the video card on what to draw that the video card does not have a full (new) frame of data to post to the frame buffer at a rate of 60 or 30 FPS (Vsync). At 30 FPS each frame would have to be generated between the CPU and the video card within 33.3ms. Instead this is taking around 100ms on the Intel system and about 142ms on the FX system (resulting in the 10 and 7 FPS respectively). Some frames in the trace file had hundreds of thousands of calls, some reaching nearly 700,000 (not every one is necessarily communicated between the CPU and video card; only a fraction of them are), whereas sections where the FPS was higher might have fewer than 3,000 calls being executed. The low frame rate is a direct consequence of how busy the CPU is, and this can be seen with both Intel and AMD CPUs.
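    The frame-time arithmetic above is just the reciprocal relationship between FPS and how long the CPU and GPU take to produce each frame:

        #include <stdio.h>

        int main(void)
        {
            double budget_30fps_ms = 1000.0 / 30.0;   /* 33.3 ms per frame */
            double intel_ms = 100.0, fx_ms = 142.0;   /* observed frame times */

            printf("30 FPS budget: %.1f ms/frame\n", budget_30fps_ms);
            printf("Intel system:  %.0f ms -> %.0f FPS\n", intel_ms, 1000.0 / intel_ms); /* 10 */
            printf("FX system:     %.0f ms -> %.0f FPS\n", fx_ms, 1000.0 / fx_ms);       /* 7 */
            return 0;
        }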
    So the accusation comes up: is the CM graphics engine un-optimized? To a certain extent, it is. There are limitations on what can be done in this environment and with the OpenGL 2.x calls that are available. CM could be optimized a bit further than it currently is, but this involves a HUGE amount of time experimenting and testing. Working against this optimization effort are CM's 'free' camera movement, the huge variety, number and size of maps available, and the large variety and number of units. These features make it hard to come up with optimizations that work consistently without causing other problems. Such optimization efforts require manpower and time that Battlefront simply does not have, as Steve has stated earlier. Charles could work on this for years in an attempt to get better frame rates. While that would be a 'worthy goal', it is unrealistic from a business standpoint - there is no guarantee that the amount of time spent on optimizing would result in a significantly better performing graphics engine. Other, larger developers typically have TEAMS of people working on such optimizations (which, importantly, also allows them to accomplish certain optimization tasks within certain time frames). When CMSF was started sometime in 2004, OpenGL 2.0 was the latest specification available (with the 2.1 specification coming out before CMSF was released). Utilizing newer versions of OpenGL to potentially optimize CM's graphics engine still involves a lot of work, since the newer calls available don't necessarily provide built-in optimizations over the 2.0 calls. In fact a number of OpenGL calls have been deprecated in OpenGL 3.x and later, and this could result in wholesale redesigning of the graphics engine. On top of this is the issue that newer versions of OpenGL may not be supported by a number of current users' video cards (and laptops, and whole Mac models on the Apple side).
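    To make the call-count point concrete, here is one kind of reduction that was available even within OpenGL 1.5/2.0 (this is an illustration, not a claim about how CM actually submits its geometry): drawing the same quads through a vertex buffer object replaces thousands of per-vertex calls per frame with a handful of calls. These are fragments that assume a current GL context; on Windows the buffer-object entry points need an extension loader such as GLEW, and error handling is omitted:

        #include <GL/glew.h>   /* assumes glewInit() has run with a current context */

        /* Immediate mode: ~4 calls per quad, re-issued every frame. */
        void draw_quads_immediate(const float *v, int n_quads)
        {
            glBegin(GL_QUADS);
            for (int i = 0; i < n_quads * 4; i++)
                glVertex3fv(&v[i * 3]);
            glEnd();
        }

        /* VBO: upload the vertices once... */
        GLuint make_quad_vbo(const float *v, int n_quads)
        {
            GLuint vbo;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, n_quads * 4 * 3 * sizeof(float), v, GL_STATIC_DRAW);
            return vbo;
        }

        /* ...then one draw call per frame, however many quads there are. */
        void draw_quads_vbo(GLuint vbo, int n_quads)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, (void *)0);
            glDrawArrays(GL_QUADS, 0, n_quads * 4);
            glDisableClientState(GL_VERTEX_ARRAY);
        }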
    As for the difference between the GTX 550 Ti and the GTX 660 Ti that Hister is experiencing, I'm not sure what may be going on. The GTX 550 Ti is based on the 'Fermi' architecture, while the GTX 660 Ti utilizes the 'Kepler' architecture. Kepler was optimized for the way games operate, whereas Fermi had slightly better performance in the 'compute' domain (using the GPU for physics calculations or other floating-point, parallelized tasks). The GTX 660 Ti should have been a significant boost in video performance over the GTX 550 Ti, though this difference may not be very visible in CM due to the CPU-bound nature of some views. It's possible that older drivers treated the Fermi architecture differently, or simply that older drivers operated differently (there are trade-offs that drivers may make in image quality for performance - and sometimes this is 'baked into' the driver and isn't touched by the usual user-accessible controls). I have a GTX 570 I could potentially test, but I would probably need to know more details about the older setup to reproduce the situation and see the differences first-hand.
  3. Like
    Schrullenhaft got a reaction from Bulletpoint in Irratic Framerate Issue   
  4. Like
    Schrullenhaft got a reaction from A Canadian Cat in Irratic Framerate Issue   
  5. Like
    Schrullenhaft got a reaction from Badger73 in Irratic Framerate Issue   
  6. Upvote
    Schrullenhaft got a reaction from Tarfman in Windows 10   
    I believe simply Alt-Tab'ing out of the game and then re-maximizing it may get it to run at a normal speed.
     
    In the past I had thought that there was an issue with 'process affinity' and multi-core CPUs. However it appears that my guess may not have been accurate, since part of the process of setting the affinity involved Alt-Tab'ing out of the game. If you are curious to see whether setting the affinity makes any difference for you, Alt-Tab out of the game and press Ctrl-Alt-Del to 'Start Task Manager' (or however it is now termed in Windows 10). In here go to the 'Processes' tab and find the executable file name of the CM game you're running (such as 'Barbarossa to Berlin.exe', etc.). Right-click on this file name in the list and select 'Set Affinity' from the popup menu. With multi-core CPUs you will typically have all of the boxes checked ('<All Processors>', 'CPU0', 'CPU1', etc.). UNCHECK ALL of the boxes and then check just ONE of the CPU boxes ('CPU1', etc.). Click 'OK', close the Task Manager and re-maximize your CM game. Again, it appears that simply Alt-Tab'ing out of the game and then re-maximizing it solves the issue, but you can experiment to see whether the affinity makes any further difference for you or not.
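    The same Task Manager steps can be scripted. A minimal Win32 sketch (a hypothetical 'pinone' helper that takes the game's PID, visible in Task Manager, and pins the already running process to a single core):

        #include <windows.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            if (argc != 2) {
                fprintf(stderr, "usage: pinone <pid>\n");
                return 1;
            }
            DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);
            HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
            if (!h) {
                fprintf(stderr, "OpenProcess failed (%lu)\n", GetLastError());
                return 1;
            }
            /* Mask 0x1 = CPU 0 only; equivalent to leaving one box checked. */
            if (!SetProcessAffinityMask(h, 0x1))
                fprintf(stderr, "SetProcessAffinityMask failed (%lu)\n", GetLastError());
            CloseHandle(h);
            return 0;
        }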
  7. Upvote
    Schrullenhaft got a reaction from Peter Panzer in Windows 10 - CM Compatibility?   
    I'm not expecting any compatibility problems with Windows 10 in and of itself. The video drivers MIGHT be another issue, though. Sometimes the video chip driver developers drop the ball a bit with new versions of Windows (concentrating on DirectX 12 rendering, etc.) and we MIGHT see some video (or sound) issues with the CM series under Windows 10. We probably won't know about this until Windows 10 is actually released, since there is the possibility of last-minute driver inclusions.
     
    The CMx2 series utilizes the OpenGL graphics API. DirectX 12 will not change anything when it comes to CM, since CM doesn't utilize that graphics API. Improvements in speed will only come from the video driver developers, and I'm not expecting any improvements in OpenGL 1.x/2.x rendering speed with Windows 10.
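    If you want to check what OpenGL support a given Windows 10 driver actually exposes, a few lines against any context-creation library will print the renderer and version strings. A sketch using GLFW (an assumption; any equivalent windowing library works):

        #include <stdio.h>
        #include <GLFW/glfw3.h>

        int main(void)
        {
            if (!glfwInit())
                return 1;
            glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   /* hidden window, just for a context */
            GLFWwindow *w = glfwCreateWindow(64, 64, "glcheck", NULL, NULL);
            if (!w) {
                glfwTerminate();
                return 1;
            }
            glfwMakeContextCurrent(w);
            printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));
            printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));
            glfwTerminate();
            return 0;
        }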
     
    Something to be a bit more wary of however is the copy-protection system. From my previous post about Windows 10 compatibility in the CMRT forum:
     
    If you perform an 'in-place upgrade' from Windows 7 or 8/8.1 to Windows 10, then you MIGHT run into some issues. The copy-protection system may detect that the OS is not the same as when it was activated and give you an error ('Error: errorcode' or something to that effect) when you attempt to start the game (with the game not launching). This may not happen for everyone, but it is a very likely scenario for many of us. If that DOES happen, then you will need to contact the Helpdesk and request the 'gsClean' utility for your game/modules. Before doing so, you will want to try EVERY ONE OF YOUR BATTLEFRONT GAMES to see whether any of them get the error from the copy-protection system. I'd suggest rebooting your machine a few times and with each reboot attempting to launch all of your games (one at a time, of course) to see if they launch properly or give errors. If you make significant changes to your system, then you will likely need the gsClean utilities for all of your games.
     
    Each game and its modules will have a different version of gsClean for it, so you will want to be specific with your request to the Helpdesk for the gsClean utilities.
     
    For eLicense-protected games (CMSF, CMA, etc.) you will want to UNLICENSE the game(s) and all of their modules BEFORE upgrading to Windows 10. Try to play the games to make sure that they have been unlicensed; if you've been successful, you should get a dialog box requesting to activate the game/module. Once you have fully finished the upgrade, you should be able to activate the game/modules. However, if you plan on installing any newer drivers or making other changes to your computer, you may want to hold off on activation until you are done with these changes (this goes for both eLicense-protected games and the newer copy-protection system).
  8. Upvote
    Schrullenhaft reacted to agusto in Is it possible to do a CPU vs CPU battle?   
    helloheywhatsnew, according to several polls we had over the past couple of years, the average age of the community is approximately 40 years. Also this:
     
     
    is by far the most aggressive and disrespectful comment anyone in this thread has made yet. It is disrespectful because it generally devalues the opinions of all people older than you as crap, and it incorrectly makes the generalized assumption that people older than you are incapable of keeping pace with technological developments. Furthermore, you seem to confuse your own personal taste with knowledge, even calling it "education" to tell people what you think they should like. I am genuinely sorry that you have MS, but trying to invoke pity in order to discourage people from stating opinions that disagree with yours is quite a dishonourable and weak rhetorical tactic.
     
     
     
    Basically, that is why I don't think it would be interesting (or, in other words, worth the effort it would take to implement the feature). You can just take a look at the AI plans in the editor if you want to see how the AI works and reacts. But watching a scripted AI fight itself is really just like c3k said: two blind men with bulldozers fighting each other.
  9. Upvote
    Schrullenhaft got a reaction from Redge in CM:BB and win8   
    The 'flash drive' method that Erwin is using ONLY works with the pre-2006 copies of CMBO, CMBB and CMAK, because the copy-protection for those games is very easy to circumvent and is NOT tied to a particular computer the way eLicense is (for mid-2006 and later purchases).

    I'm not sure of the reason for the 'slide show' with CMBB on Windows 8. I ran a Demo copy of CMBB on a Windows 8 install with a Radeon and the game worked. However, loading up a saved game had all of the soldiers standing at attention, and the animation of the soldiers moving/running was off (strange foot movements/speed). Windows 8 has some quirks that may make running these older games problematic. In particular, it has some strangeness with its 'Real Time Clock' that can affect benchmarks, as mentioned in this Tom's Hardware article. That article mostly looks at the effect on benchmarks, but the quirk could potentially result in some odd behavior in the CMx1 series. However, I wouldn't think such RTC issues would reduce the frame rate to a complete slide show.

    The other question for users experiencing this is: what video card/chip and drivers do they have installed? Windows 7 shipped with a limited set of files for DirectX 9.0c and earlier, which necessitated installing the DirectX 9.0c updater to get most (but not all, I believe) of the files for DirectX 9.0c and earlier (I'm not sure what sort of effect this may have had on the CMx1 series). Windows 8 doesn't have such an updater, but it may be possible to install it. Here are some user experiences with installing DirectX 9.0c on Windows 8.

    DirectX 9.0c web updater

    DirectX 9.0c June 2010 'redistributable' - the entire installer

    Windows 8 isn't listed in the System Requirements for either one. It is quite possible that NEITHER installer will work, though you may be able to overcome this by attempting the install in 'Safe Mode'. You may also want to install the .NET 3.5 Framework (redistributable - though Windows 8 is NOT listed in its System Requirements), and here are some instructions on installing the .NET 3.5 Framework in Windows 8.


    I don't know whether any of the 'Compatibility' modes (right-click on the launch shortcut > 'Properties' > 'Compatibility' tab > 'Compatibility mode' near the top of the tab) may help with this or not. I suspect not, but it may be worth trying for some; I would suggest the 'Windows XP Service Pack 3' mode. You may also want to run the game by right-clicking and selecting "Run as administrator" (a check-box option within the 'Compatibility' tab mentioned above), though I don't think this would resolve frame-rate issues.
  10. Upvote
    Schrullenhaft reacted to db_zero in Graphics suck?!!?!?!   
    I'm always amused by these sorts of topics and threads. Great entertainment value.
     
    Take a look at what you've got and learn to love what you have, 'cause in my experience it won't change dramatically any time soon.
     
    Kinda like telling a wife or girlfriend she should lose 25 pounds and go to the gym so she can be as hot looking as the babe you see at work or walking down the street. There may be perfectly valid reasons in your mind as to why it would be a good idea, but don't expect to get the result you want.
  11. Upvote
    Schrullenhaft got a reaction from Kineas in CM:BB and win8   
    I found something out using Windows 8 on a Phenom II X4 975 and a Radeon HD 6870. The CMBB Demo was running really slowly: you could see the tanks haltingly move forward rather than moving smoothly during turn playback.
    This MAY be due to the way that Windows 8 utilizes multi-core CPUs for programs, and most Windows 8 users will probably have a multi-core CPU. If you can get the game to run on just one core, the performance may speed up (in my particular experience).

    To do this (and you have to do this every time you play the game, unless you find a utility to set the 'affinity', like Process Lasso), launch the game and go through the 2D menus to select the options for your battle, operation or QB. Once you get to the 3D screen, Alt-Tab to minimize the game. Now perform a 'Ctrl+Alt+Del' and select 'Task Manager'. When the Task Manager comes up, click on the 'Details' tab, look for the game executable ('Barbarossa to Berlin.exe' for CMBB) and right-click on it. Select "Set affinity" from the popup menu. Here you will be presented with a list of check boxes for all of the cores (and one to select all of them). Uncheck them ALL and then select just one, perhaps 'CPU 1' if you have a dual-core CPU or 'CPU 3' if you have a quad-core CPU. Click 'OK' and then close the Task Manager. Now maximize CM and it should hopefully run a little faster.
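    Since the affinity has to be reapplied on every launch, a tiny launcher can do both steps at once: start the game and immediately pin it to one core. A minimal Win32 sketch (the install path is just an example; adjust it for your setup):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            STARTUPINFOA si;
            PROCESS_INFORMATION pi;
            char cmd[] = "\"C:\\Games\\CMBB\\Barbarossa to Berlin.exe\"";

            ZeroMemory(&si, sizeof(si));
            si.cb = sizeof(si);

            if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
                fprintf(stderr, "CreateProcess failed (%lu)\n", GetLastError());
                return 1;
            }
            /* 0x1 = run on CPU 0 only, like unchecking all but one box. */
            if (!SetProcessAffinityMask(pi.hProcess, 0x1))
                fprintf(stderr, "SetProcessAffinityMask failed (%lu)\n", GetLastError());
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
            return 0;
        }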