How much of this is BS

blackeye

I was over at the HardOCP forums and some guy posted the following message. I just want to know how much of it is BS and what can be believed. Some of the things he posted seemed more like fanboyism. Just want your opinions.

-------------------------------------------------

And the X800XT PE is nonexistent. Gateway says they won't ship any more, and CDW isn't getting any any time soon. They probably won't start showing up in any quantity for a couple of months, I'd say. By then we're heading into the fall refresh.

Here are just a few of the reasons the 6800u is better than the X800XT PE.

1. The 6800u has superior OpenGL performance.

2. The 6800u performs as well as the X800 in D3D.

3. The 6800u has higher performance Anti Aliasing.

4. The 6800u beats the X800XT PE the majority of the time, even at high resolutions, if AF is not enabled, which means the 6800u is the faster hardware, because AF has less to do with hardware and more to do with optimizations.

5. The 6800u supports SM 3.0, which has been announced to be in over a dozen games just this year. It will likely be two years before we see SM 4.0, and it will be on the R600 and NV60 cores at the release of Longhorn. This leaves plenty of time for SM 3.0 to be implemented in a lot more games next year.

6. The 6800u also supports UltraShadow II Technology which is used extensively in DOOM3.

7. nVidia has better drivers. ATI fanboys can argue all they want, but it's the truth. nVidia has better OpenGL drivers, nVidia has more releases than ATI even if they are leaked betas, and nVidia has better options in their drivers like Refresh Rate Overrides, Digital Vibrance, CoolBits, Application Profiles, and a TON of AA options that the X800s don't support, including the very first 16xS for D3D. ATI also has the worst Linux drivers in the world. Don't expect to install an ATI card and get good Linux performance. The FX 5200 is able to outperform the 9800 Pro in Linux.

8. ATI uses tons of cheats in their drivers. Even though they have claimed to do full trilinear, they don't. They use brilinear filtering mixed in with the trilinear in order to keep their performance up. They also use a lot of anisotropic filtering optimizations. nVidia has given the option to turn off trilinear and AF optimizations in the recent drivers. AF optimizations are turned OFF by default.

Check out these benches on X800's in UT2003 without their optimizations.

http://translate.google.com/transla...Flanguage_tools

9. DOOM 3. The X800Pro cannot compete with the 6800GT in DOOM 3. The X800XT PE is able to perform fairly well with its AF optimizations but still gets beat by the stock 6800GT.

10. DOOM 3 (yes the DOOM 3 benchmarks were THAT important).

ATI's Radeon X800 texture filtering game

http://techreport.com/etc/2004q2/filtering/index.x?pg=1

Take special note of the claims ATI makes in the PDF documents presented.

Quote:
Whatever the merits of ATI's adaptive trilinear filtering algorithm, ATI appears to have intentionally deceived members of the press, and by extension, the public, by claiming to use "full" trilinear filtering "all of the time" and recommending the use of colored mip map tools in order to verify this claim. Encouraging reviewers to make comparisons to NVIDIA products with NVIDIA's similar trilinear optimizations turned off compounded the offense. Any points ATI has scored on NVIDIA over the past couple of years as NVIDIA has been caught in driver "optimizations" and the like are, in my book, wiped out.


http://graphics.tomshardware.com/gr...timized-13.html

Quote:
Conclusion

All Filter optimizations discussed here aim to increase the performance of the graphics cards without materially reducing image quality. The word "materially" is, however, subjective - depending on the optimization used, a loss in quality is perceptible when taking a closer look. Even if the quality in screenshots is OK, a running game is often a different chapter. Annoying effects (moiré, flickering) can crop up that were not noticeable on screenshots.

In the case of graphics cards in the medium and lower price segment, the customer will certainly get added value in the filter optimizations, because "correct" filtering would slow the chips down too much. The user can play in higher resolutions or add filter effects that without the optimizations would be unplayable. The bottom line is that the customer ends up with better image quality.

It's a different story with the new enthusiast cards, such as the Radeon X800 Pro/XT and the GeForce 6800 Ultra/GT. With those cards the optimizations do not provide the customer with new added value - on the contrary. He gets a reduced image quality, although the card would actually be fast enough to deliver maximum quality at what would surely still be an excellent frame rate. We cannot escape the impression that the filter optimizations in the new top models will no longer be used ultimately to offer the customer added value, but rather solely in order to beat the competition in the benchmark tables, which are so important in the prestige category. Whether or not the customer will be ready to spend $400-$500 for this is quite another matter. NVIDIA has obviously realized this and allows true trilinear filtering as an option in its newest models. Well, it did not work in the latest v61.11 beta driver because of a bug... let's hope it indeed is a bug and will work again in the final driver release.

However, slowly but surely manufacturers are moving to the point where tolerable limits are being exceeded. "Adaptivity" or application detection prevent test applications from showing the real behavior of the card in games. The image quality in games can differ depending on the driver used or on the user. The manufacturers can therefore fiddle with the driver, depending on what performance marketing needs at a given moment. The customer's right to know what he is actually buying therefore falls by the wayside. All that is left for the media is to limp along with their educational mission. The filter tricks discussed in this article are only the well-known cases. How large the unknown quantity is cannot even be guessed.

Every manufacturer decides for itself what kind of image quality it will provide as a standard. It should, however, document the optimizations used, especially when they do not come to light in established tests, as lately seen with ATi. The solution is obvious: make it possible to switch off the optimizations. Then the customer can decide for himself where his added value lies - more FPS or maximum image quality. There is no real hope that Microsoft will act to police optimization. The WHQL tests fail to cover most of them and also can be easily evaded, read: adaptivity.

Still, the ongoing discussion also has its benefits - the buyer, and perhaps, ultimately, OEMs are being sensitized to this issue. Because the irrepressible optimization mania will surely continue. However, there are also bright spots in the picture, as demonstrated by NVIDIA's trilinear optimization. We hope to see more of the same!

And for those of you that think PS 2.0b can do pretty much everything PS 3.0 can, here are some of the major differences.

Feature                               PS 2.0b          PS 3.0
Dependent texture limit               4                No limit
Position register                     None             Yes
Executed instructions                 512              65,536
Interpolated registers                2 + 8            10
Instruction predication               None             Yes
Indexed input registers               None             Yes
Constant registers                    32               224
Arbitrary swizzling                   None             Yes
Gradient instructions                 None             Yes
Loop count register                   None             Yes
Face register (two-sided lighting)    None             Yes
Dynamic flow control depth            None             24
Minimum shader precision              FP24 (96-bit)    FP32 (128-bit)
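
To put the flow-control and instruction-count rows in some perspective, here's a rough toy sketch (plain Python, nothing to do with either card's actual hardware) of why per-pixel dynamic branching matters: with real branching a shader can skip the expensive path on pixels that don't need it, while without it you effectively evaluate both paths everywhere and just select one result.

Code:
# Toy illustration only: a software "shader" applied per pixel.
# expensive_lighting() stands in for a long instruction sequence.

def expensive_lighting(p):
    return sum((p * k) % 7 for k in range(200))   # pretend this is costly

def cheap_ambient(p):
    return p % 7

def shade_sm3_style(pixels):
    # Dynamic flow control: only "lit" pixels pay for the expensive path.
    out = []
    for p in pixels:
        if p % 2 == 0:                 # per-pixel condition
            out.append(expensive_lighting(p))
        else:
            out.append(cheap_ambient(p))
    return out

def shade_sm2_style(pixels):
    # No real branching: evaluate both paths for every pixel,
    # then select one result (roughly what a cmp/select does).
    out = []
    for p in pixels:
        lit = expensive_lighting(p)    # always executed
        amb = cheap_ambient(p)         # always executed
        out.append(lit if p % 2 == 0 else amb)
    return out

pixels = list(range(10_000))
assert shade_sm3_style(pixels) == shade_sm2_style(pixels)  # same image, different cost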

----------------------------------------------------

One of those links doesn't work, so here it is:
Link

If you want to read through the whole thread, here's the link.
Link
 
I'd say he likes driving with blinders on.
 
What do you mean?

This is what I was wondering about the most. But I still think the X800s win even without AA/AF. I'm trying to decide if this is BS or not.
The 6800u beats the X800XT PE the majority of the time, even at high resolutions, if AF is not enabled, which means the 6800u is the faster hardware, because AF has less to do with hardware and more to do with optimizations.
 
Actually provide some proof, man; it doesn't mean anything when you just say that. You should read through that entire thread. Some of the posts on that thread make me wonder if it's just fanboyism or if it's true. Read through it if you have the time.
 
The man who created the original thread did talk about D3 though.

What he has posted is based on the updated nVidia drivers vs. ATI's current drivers. He should really wait for the Catalyst 4.8s to come out before he opens his mouth, or he should base it on the original drivers.
A lot of what he says is true, but a lot of it is not.
 
Everyone is saying wait for the 4.8s to come out, but really, what's going to change? Sure it will run faster, but by then nVidia will have released a new set of drivers.
Anyway, what is ATI changing in the 4.8s?
 
Arrgghh, never speak the name of the evil "N" in my presence. Now go and purchase an ATI card to cleanse your soul of any sin that has been cast on you because of doubt.
 
Gajdycz said:
Arrgghh, never speak the name of the evil "N" in my presence. Now go and purchase an ATI card to cleanse your soul of any sin that has been cast on you because of doubt.

I wouldn't say that; Nvidia has really redeemed itself with the 6800 so far.
 
Arrgghh, never speak the name of the evil "N" in my presence. Now go and purchase an ATI card to cleanse your soul of any sin that has been cast on you because of doubt.
We don't need to listen to your ignorant fanboyism.
 
I'm not sure. It sounds like you're saying the 6800u is faster than the X800XT, but I saw that the specs for the X800XT are higher than the 6800u's (both clock speed and memory).
 
And the points about SM3 are moot.

Executed Instructions
2.0b = 512
3.0 = 65536

Yeah, shame that most shaders at the moment don't go over about 50-100 instructions.
 
1. The 6800u has superior OpenGL performance.

True. Nvidia have had superior OpenGL support for years, although rumour says that ATI are re-writing their OpenGL drivers from scratch to stay competitive on this front, though don't expect to see them appear this side of Christmas.

2. The 6800u performs as well as the X800 in D3D.

Only in a few games; the X800XT takes the lead over the 6800u in most game benchmarks.

3. The 6800u has higher performance Anti Aliasing.

True, but only by a very small margin.

4. The 6800u beats the X800XT PE the majority of the time, even at high resolutions, if AF is not enabled, which means the 6800u is the faster hardware, because AF has less to do with hardware and more to do with optimizations.

The X800 still leads in most game benchmarks in this area, although he is right that the ATI card is much faster with AF turned on, as it does use the AF optimisations mentioned in his post. Whilst many would consider them a cheat, personally I can't tell the difference visually between the two!

5. The 6800u supports SM 3.0, which has been announced to be in over a dozen games just this year. It will likely be two years before we see SM 4.0, and it will be on the R600 and NV60 cores at the release of Longhorn. This leaves plenty of time for SM 3.0 to be implemented in a lot more games next year.

True although the R500 will support SM3.0 way before that.

6. The 6800u also supports UltraShadow II Technology which is used extensively in DOOM3.

True, and this is one reason why the Nvidia cards benchmark faster than the ATI cards in Doom 3: the Nvidia cards with UltraShadow are up to 4 times more efficient at rendering shadows than cards without it. UltraShadow has been in Nvidia cards since the FX 5800 appeared, so id Software has had plenty of time to work with it.
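
For anyone wondering what UltraShadow actually does: it's essentially a depth-bounds test, i.e. the engine tells the card the min/max depth range a light's shadow volume can affect, and stencil work for pixels whose scene depth falls outside that range gets skipped. A rough toy sketch of the idea (illustrative Python only, not Nvidia's actual implementation):

Code:
# Toy sketch of the depth-bounds idea behind UltraShadow (illustrative only).
# depth_buffer holds the scene depth already rendered for each pixel.

def stencil_shadow_pass(depth_buffer, volume_fragments, zmin, zmax):
    """volume_fragments: (pixel_index, increment) pairs from rasterizing
    a shadow volume. zmin/zmax: depth range the light can affect."""
    stencil = {}
    tested = 0
    for pixel, inc in volume_fragments:
        scene_z = depth_buffer[pixel]
        # Depth-bounds test: if the scene depth at this pixel is outside
        # the light's range, the fragment can't change the shadow result,
        # so skip the stencil update entirely.
        if not (zmin <= scene_z <= zmax):
            continue
        tested += 1
        stencil[pixel] = stencil.get(pixel, 0) + inc
    return stencil, tested

depth = [0.2, 0.9, 0.5, 0.95]                    # scene depths per pixel
frags = [(0, +1), (1, +1), (2, +1), (3, -1)]     # shadow-volume fragments
stencil, tested = stencil_shadow_pass(depth, frags, zmin=0.4, zmax=0.8)
print(stencil, tested)   # only the pixel inside the depth range gets touched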

7. nVidia has better drivers. ATI fanboys can argue all they want, but it's the truth. nVidia has better OpenGL drivers, nVidia has more releases than ATI even if they are leaked betas, and nVidia has better options in their drivers like Refresh Rate Overrides, Digital Vibrance, CoolBits, Application Profiles, and a TON of AA options that the X800s don't support, including the very first 16xS for D3D. ATI also has the worst Linux drivers in the world. Don't expect to install an ATI card and get good Linux performance. The FX 5200 is able to outperform the 9800 Pro in Linux.

I would personally agree that Nvidia cards do have better driver support, although the picture the poster paints is that the ATI cards have terrible drivers, which they don't! ATI's drivers over the past 18 months or so have improved greatly from the drivers that gave ATI a bad reputation.

8. ATI uses tons of cheats in their drivers. Even though they have claimed to do full trilinear, they don't. They use brilinear filtering mixed in with the trilinear in order to keep their performance up. They also use a lot of anisotropic filtering optimizations. nVidia has given the option to turn off trilinear and AF optimizations in the recent drivers. AF optimizations are turned OFF by default.

BOTH ATI and Nvidia 'cheat' in their drivers... they always have... they always will! As mentioned before, I can't tell the difference with this supposed ATI cheat.
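
For anyone curious what "brilinear" actually means: true trilinear blends between two mip levels across the whole fractional LOD range, while the optimised version keeps a pure bilinear result over most of the range and only blends in a narrow band around the mip transition, which is why it's cheaper and usually hard to spot. A rough toy sketch (made-up numbers, not either vendor's actual algorithm):

Code:
# Toy sketch: how much of the "next" mip level gets blended in for a given
# fractional LOD. Illustrative only; real drivers tune the band per texture.

def trilinear_weight(lod_frac):
    # Full trilinear: blend weight ramps linearly across the whole mip range.
    return lod_frac

def brilinear_weight(lod_frac, band=0.2):
    # "Brilinear": pure bilinear (weight 0 or 1) except inside a narrow
    # band around the mip transition, where it ramps quickly from 0 to 1.
    lo, hi = 0.5 - band / 2, 0.5 + band / 2
    if lod_frac <= lo:
        return 0.0
    if lod_frac >= hi:
        return 1.0
    return (lod_frac - lo) / (hi - lo)

for f in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f, trilinear_weight(f), round(brilinear_weight(f), 2))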

9. DOOM 3. The X800Pro cannot compete with the 6800GT in DOOM 3. The X800XT PE is able to perform fairly well with its AF optimizations but still gets beat by the stock 6800GT.

True. We've seen the benchmarks, but no doubt ATI will improve their performance in the Doom 3 engine given time to work on OpenGL support.
 