Best $75 Graphics Card?



Okay, the machine here at work needs an upgrade from the Radeon 8500 that's in here now, to do justice to a certain piece of software near and dear to us lol.

So, since this is coming out of my own pocket, what's the best nVidia card I can get for that amount? I'm also upgrading some other stuff, so I really don't want to exceed that price...although I will if I'm only $5 away from MUCH better performance.

Although I've never had any problems with my ATI cards, I'm going nVidia this time around...so, GeForce gurus, could you please compare the performance of what you're suggesting to the GeForce 4 Ti4200, which is my only frame of reference? Thanks!


In the US$75 range you're probably looking at a GeForce FX 5500 w/ 128MB of VRAM. I typically shop at Newegg, which has fast shipping, great service, and very reasonable prices and shipping rates on most items.

To get a listing for the FX 5500 at Newegg go to "Shop by Category" in the blue bar above > Select "Video Cards" in the lower right hand corner of all the items listed > in Advanced Search on the left hand side select from the 'Chipset' category the 'NVidia Geforce FX5500'. The Gigabyte GV-N55128DP for $75.50 has the best memory/GPU specs, though that data may not always be the most accurate (especially if you're considering overclocking the videocard).

Regarding performance, the FX 5500 should be a little better than the GeForce 4 Ti 4200, but not by a whole lot (especially in CM). Here are some benchmarks from Tom's Hardware where an FX 5500 isn't specifically listed, but its performance should fall approximately between a GeForce FX 5600 and a GeForce FX 5200 Ultra. You can browse further pages of the article near the bottom. These benchmarks may not be very applicable to CM, since they often measure graphics effects that CM doesn't support, but they should give you an idea of the performance differences and also what to possibly expect performance-wise for CMx2.

[ January 06, 2005, 07:34 AM: Message edited by: Schrullenhaft ]


Thanks...I used to be up on all of this, but as soon as NVidia followed ATI's lead with their deceptive product-numbering conventions, I gave up lol. The other problem I've found, especially on pricewatch, is that you don't always get what you think you're getting. Every time I click on a Radeon 9600 Pro, for example, the listings that come up are all for 9600SEs...which are by no means the same card.

On pricewatch, they've got a 4 Ti 4800 for $82...which is a little better performance-wise than the 4200, which I have in one of my systems...but I'll definitely check out the link to Tom's to see how they all stack up.

EDIT: WOW! Looks like the 4 Ti series is the lowest of the low hehe. Just about anything will be better than that.

EDIT 2: wtf...what's the difference between a 5700, 5700LE, 5700EP, and 5700XD, which are all available at Newegg.com when I search the 5700 chipset? NOW I remember why I gave up trying to stay current on all this.

[ January 06, 2005, 07:57 AM: Message edited by: teotwawki1 ]


Originally posted by teotwawki1:

EDIT: WOW! Looks like the 4 Ti series is the lowest of the low hehe. Just about anything will be better than that.

Wait, don't shoot so fast. A GeForce Ti 4x00 is faster than the 5200, the 5500, and some 5700 models, IF your game doesn't use hardware features that only the 5xx0 series has.

You see, for basic 3D rendering the 4xx0 cards are (much) faster than the low-end 5xx0 cards. But the 5xx0 series does have hardware support for some advanced 3D features, most notably shaders.

But a game like CM doesn't use any of these, so it runs better on the 4xx0. The exception is antialiasing, which also runs faster on the 5xx0, but you can always trade AA for more resolution on the faster card, so to speak (if your monitor supports it).

EDIT 2: wtf...what's the difference between a 5700, 5700LE, 5700EP, and 5700XD, which are all available at Newegg.com when I search the 5700 chipset? NOW I remember why I gave up trying to stay current on all this.

Forget the marketing product numbers, use this chart:

http://www.anandtech.com/video/showdoc.aspx?i=2195&p=3

What you want first of all is the most memory bandwidth (memory clock multiplied by memory width). Just get the card with the most memory bandwidth you can afford, that'll be perfect for CM.
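The rule above is easy to sketch in a few lines of code: effective (DDR) memory clock times bus width in bytes gives you bandwidth. A minimal example — the 400 MHz / 128-bit figures below are illustrative, not taken from any particular card's spec sheet:

```python
def memory_bandwidth_gbs(effective_mem_clock_mhz, bus_width_bits):
    # GB/s = (million transfers per second) * (bytes per transfer) / 1000
    return effective_mem_clock_mhz * (bus_width_bits / 8) / 1000

# e.g. a card with a 400 MHz effective (DDR) memory clock on a 128-bit bus
print(memory_bandwidth_gbs(400, 128))  # 6.4 GB/s
```

Doubling the bus width from 64-bit to 128-bit doubles the result, which is why two cards with the same chip name and memory clock can perform very differently.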


Okay, let's suppose I'm looking at the GeForce 5700...I pop over to pricewatch...click on that listing, and that takes me to a 5700LE...now, iirc, the LE is a neutered/castrated version of the card. And when I click on 5600, all I see is the XT...once again, the clock speeds of those are not the same. And should I be looking at core clock speed, or memory? Sorry, but I'm an idiot when it comes to this stuff.

Where the **** can I get some truth in advertising lol? At this point, I'll settle for an FX5600, IF I can find a real one (325MHz). NewEgg has the XFX for $99...a little more than I wanted, but what the hey, it's only money. Is that my best bet? If you know someplace reputable that doesn't play games with their products, let me know...I'm absolutely open to suggestions.


The most important thing is memory bandwidth, which is memory clockspeed multiplied by memory width (64, 128 or 256 bits).

Forget about GPU core clock speed unless your sole intention is to plug away at the newest games at very low quality settings.

For a game like CM, which does simple things but a lot of them, memory is decisive. After that comes the number of polygons that can be pushed, which is proportional to the number of vertex pipelines multiplied by core speed. If you run very high resolutions you also want (besides memory bandwidth) the number of pixel pipelines multiplied by core speed.

If you don't want to bother with this, just use memory bandwidth as indicated.

And notice this is for older games and games like CM. It makes a huge difference what kinds of games you want to run.
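The ranking rule described above — memory bandwidth first, then fill rate as a tiebreaker — can be sketched like this. All spec numbers and card names here are made up for illustration; plug in figures from a chart like the AnandTech one linked above:

```python
# (name, effective mem clock MHz, bus width bits, core MHz, pixel pipes)
cards = [
    ("Card A", 400, 128, 250, 4),
    ("Card B", 500, 64,  325, 4),
    ("Card C", 550, 128, 300, 4),
]

def bandwidth_gbs(card):
    _, mem_mhz, bus_bits, _, _ = card
    return mem_mhz * (bus_bits / 8) / 1000  # GB/s

def fill_rate_mpix(card):
    _, _, _, core_mhz, pipes = card
    return core_mhz * pipes  # Mpixels/s

# Pick by bandwidth first, fill rate second
best = max(cards, key=lambda c: (bandwidth_gbs(c), fill_rate_mpix(c)))
print(best[0])
```

Note how "Card B" loses despite its higher core clock: its 64-bit bus halves the bandwidth, which is exactly the LE/XT trap discussed earlier in the thread.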


OK, if I may continue this thread a bit here: if you have a high-end processor with lots of RAM and a fast bus, do you believe that a higher-end graphics card with more VRAM (such as a GeForce 6800) will make a significant difference in CM when compared with a low-end 32MB VRAM card? I'm thinking of putting together a new machine, so this is why I ask.

Thanks

Glenn


Canuckgd - yes, there should be a significant difference between an Nvidia GeForce 6800 series card with 256MB of VRAM and the Matrox G550 32MB AGP cards.

The Matrox card has a GPU core clock speed of 125MHz, compared to the GeForce 6800 series having around 325MHz to start. And that comparison would only be legitimate if both were the same core. The Nvidia core has much more to it in terms of pipelines than the Matrox G550 does. The GF 6800 has 8-16 pixel pipelines, while the G550 only has 2. In fact the closest competition the G550 has performance-wise is the GeForce 2 & 4 MX series.

The next spec is the memory clock which has the GeForce 6800 running around 700MHz (DDR of 350MHz) to start, while the G550 is only 333MHz (DDR of 166MHz). The memory bus width is also significantly different with the G550 only supporting 64-bit, while the GF 6800 has 256-bit. The RAMDAC of the G550 is 360MHz, while the GF 6800 is 400MHz. This basically makes a difference in how high a resolution and refresh rate the videocard can support (and at these frequencies it is unlikely most people would take advantage of the higher resolutions that GF 6800 series can offer).

All of this additional performance for the GF 6800 means that it can use image quality enhancements such as anti-aliasing, anisotropic filtering, etc. that the G550 is unable to perform, while still easily maintaining or usually beating any 3D performance that the G550 offers. The GF 6800 series basically offers 8X the memory bandwidth, which is a good approximation as to how much faster the GF 6800 is compared to the G550.
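The "roughly 8X" figure follows directly from the clock and bus-width numbers above. As a quick check (same bandwidth-equals-clock-times-width arithmetic used earlier in the thread):

```python
def bandwidth_mbs(effective_clock_mhz, bus_width_bits):
    # MB/s = (million transfers per second) * (bytes per transfer)
    return effective_clock_mhz * bus_width_bits / 8

g550   = bandwidth_mbs(333, 64)    # Matrox G550: 333 MHz effective DDR, 64-bit
gf6800 = bandwidth_mbs(700, 256)   # GeForce 6800: 700 MHz effective DDR, 256-bit
print(g550, gf6800, round(gf6800 / g550, 1))
```

That works out to roughly 2.7 GB/s vs. 22.4 GB/s, a ratio of about 8.4 — consistent with the "8X the memory bandwidth" rule of thumb above.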

There's a huge gaming performance difference between the two boards in 3D (and a huge price difference too). The Matrox cards really excel as 2D display devices with good 2D image quality. In fact the 2D performance of the GF 6800 most likely isn't significantly better than the G550, while the image quality of the G550 was often better than a lot of earlier GeForce families.

[ January 06, 2005, 12:08 PM: Message edited by: Schrullenhaft ]


Okay...after looking at the scores, especially for games I already own, I went ahead and got the 4 Ti4200. After 4 hours of looking at/comparing video cards, and then trying to figure out who had what, I reverted to the "tried and true". I was very tempted in the middle of this process to don rubber boots, put a white handkerchief with twisted corners on my head, and shout "my brain 'urts".

The most frustrating part was the fact that pricewatch lets the retailers get away with listing a 5600LE as a 5600. While *technically* correct, this really makes it tough to find exactly what you're looking for.


Schrullenhaft,

Man, thank you for this! Your explanation is really great and makes a lot of sense to me. I think my decision to get the Matrox wasn't the best in the end, particularly if the 2D performance of the NVidia is equal to that of the Matrox. Such is life, and no great harm done.

I've been trying to decide whether to add a PC or simply change video cards. I think I'll do the latter for now.

Many thanks again for the assist!

Glenn


The 9800 Pro doesn't need as much power as some of the upper end GeForce cards...I have a GeForce 4 Ti 4200 that wouldn't run right in a machine w/ a 300w power supply, for instance.

How much power goes to the 12v, 5v, and 3.3v rails of the PS is more important than overall wattage...a good 300w Enermax may deliver enough power on those rails to run devices that a really cheap 350w or even 400w unit can't.

I've already forgotten what # you're trying to hit, but there needs to be a certain amount of watts to each of those 3 rails. Multiply the stated amps for each rail times the voltage to get the watts to each of the rails, then add them up. Of course, w/o me being able to remember the "recommended" minimum wattage to each of those rails, we're SOL hehe.
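The watts-per-rail arithmetic described above is simple enough to sketch. The amp ratings below are hypothetical — read the real figures off your own PSU's label (the "recommended" per-rail minimums the poster can't remember are deliberately left out here):

```python
# Per-rail wattage from a PSU label's amp ratings.
# These amp figures are made-up examples, not a real PSU's specs.
rail_amps  = {"12V": 15.0, "5V": 30.0, "3.3V": 28.0}
rail_volts = {"12V": 12.0, "5V": 5.0, "3.3V": 3.3}

# watts = volts * amps, per rail
watts = {rail: rail_volts[rail] * amps for rail, amps in rail_amps.items()}
total = sum(watts.values())

for rail, w in watts.items():
    print(f"{rail} rail: {w:.1f} W")
print(f"combined: {total:.1f} W")
```

Note how the combined figure (here about 422 W) can come out well above the unit's advertised rating or well below it, depending on how the maker splits current across rails — which is exactly why the label wattage alone is misleading.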

I don't even remember where I heard this being discussed, most likely amdmb.com...but it's one of those things that's stuck in my mind. I'm sure a little google action will turn up the right info.


How much power goes to the 12v, 5v, and 3.3v rails of the PS is more important than overall wattage...a good 300w Enermax may deliver enough power on those rails to run devices that a really cheap 350w or even 400w unit can't.

I've already forgotten what # you're trying to hit, but there needs to be a certain amount of watts to each of those 3 rails. Multiply the stated amps for each rail times the voltage to get the watts to each of the rails, then add them up. Of course, w/o me being able to remember the "recommended" minimum wattage to each of those rails, we're SOL hehe.

Yeah, tell me about it ;) . I'm an Electronics (Avionics) Tech :D . I haven't looked at the specs for those, but basically if you start pulling on the power supply to the point where voltages are dropping because current is above the max, then you have a problem (which is what you stated). I suspect I'm going for a new machine outright, built to take the high end stuff. Specs are nice on it, and there's nothing cheap in it, so it should be fun. Will go with the NVidia 6800 GT card. Hopefully that will allow me to run CMAK & CMBB (and CMX2 when it's out) in a higher resolution without the slide-show effect. I'll let you know ;) .

From my power meter I can see that my 5900XT doesn't take more power than my Ti 4400; they're about equal, except the newer card takes less power when idle.

And that's despite the 5900XT having an extra power connector while the 4400 doesn't. The presence of the extra connector is a bad indicator of the actual power requirement. The connector might only be required to get a 12V line that isn't available through the AGP slot, but it doesn't tell you how much the card draws.

So there's hope that they'll put a stop to it.


  • 2 weeks later...
Originally posted by Denwad:

Redwolf

Nvidia says that the 6x00 series ( GT and above ) REQUIRES an independent large 12v molex for each connection on the card

the only other thing those 12v lines can be used for is fans.

{EDIT} they are putting a stop to it by migrating to PCI-Express

Not sure what your point is.

What I was saying is that under load these cards draw a lot of power, but modern cards actually draw fairly little power when doing 2D only.

And what I said about the connector is that my 5900XT, although it has a connector, draws less than my 4400 which doesn't.

The presence of a connector is at best an indicator of max power under load.

And moving to PCI-e gets rid of the connector, but power usage stays the same. And the poor mainboard has to carry the load.

