blackeye
I was over at the HardOCP forums and someone posted the following message. I just want to know how much of it is BS and what can be believed. Some of the things he posted seemed more like fanboyism. Just want your opinions.
-------------------------------------------------
And the X800XT PE is nonexistent. Gateway says they won't ship any more, and CDW isn't getting any any time soon. They probably won't start showing up in any quantity for a couple of months, I'd say. By then we're heading into the fall refresh.
Here are just a few of the reasons the 6800u is better than the X800XT PE.
1. The 6800u has superior OpenGL performance.
2. The 6800u performs as well as the X800 in D3D.
3. The 6800u has higher performance Anti Aliasing.
4. The 6800u beats the X800XT PE the majority of the time, even at high resolutions, when AF is not enabled. That suggests the 6800u is the faster hardware, because AF performance has less to do with the hardware and more to do with driver optimizations.
5. The 6800u supports SM 3.0, which has been announced for over a dozen games just this year. It will likely be two years before we see SM 4.0, and it will be on the R600 and NV60 cores at the release of Longhorn. This leaves plenty of time for SM 3.0 to be implemented in a lot more games next year.
6. The 6800u also supports UltraShadow II technology, which is used extensively in DOOM 3.
7. nVidia has better drivers. ATI fanboys can argue all they want, but it's the truth. nVidia has better OpenGL drivers; nVidia has more releases than ATI, even if some of them are leaked betas; and nVidia has better options in its drivers, like refresh rate overrides, Digital Vibrance, CoolBits, application profiles, and a TON of AA modes that the X800s don't support, including the very first 16xS mode for D3D. ATI also has the worst Linux drivers in the world. Don't expect to install an ATI card and get good Linux performance. The FX 5200 is able to outperform the 9800 Pro in Linux.
8. ATI uses tons of cheats in its drivers. Even though they have claimed to do full trilinear, they don't. They mix "brilinear" filtering in with the trilinear in order to keep their performance up. They also use a lot of anisotropic filtering optimizations. nVidia has given the option to turn off the trilinear and AF optimizations in its recent drivers; the AF optimizations are turned OFF by default.
Check out these benchmarks of X800s in UT2003 without their optimizations.
http://translate.google.com/transla...Flanguage_tools
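For what it's worth, the "brilinear" trick described above is easy to sketch numerically. This is an illustrative model only, not ATI's (or nVidia's) actual algorithm, and the threshold value is a made-up assumption; it just shows how narrowing the mip blend band trades filtering work for speed:

```python
def trilinear_weight(lod):
    """Full trilinear: the blend fraction between adjacent mip levels
    is simply the fractional part of the level-of-detail value."""
    return lod - int(lod)

def brilinear_weight(lod, threshold=0.25):
    """'Brilinear' sketch (threshold is a hypothetical value): squeeze
    the trilinear blend into a narrow band around the mip transition,
    so most of each mip range samples only one level -- i.e. cheaper
    bilinear filtering -- instead of blending two."""
    frac = lod - int(lod)
    # Remap [threshold, 1 - threshold] onto [0, 1]; clamp outside the band.
    t = (frac - threshold) / (1.0 - 2.0 * threshold)
    return min(1.0, max(0.0, t))
```

With a 0.25 threshold, a pixel at LOD 2.1 gets a pure single-mip (bilinear) sample where full trilinear would still blend 10% of the next level, which is exactly the kind of difference that shows up as banding or shimmering in motion but not in a casual screenshot.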
9. DOOM 3. The X800 Pro cannot compete with the 6800GT in DOOM 3. The X800XT PE is able to perform fairly well with its AF optimizations, but it still gets beaten by a stock 6800GT.
10. DOOM 3 (yes the DOOM 3 benchmarks were THAT important).
ATI's Radeon X800 texture filtering game
http://techreport.com/etc/2004q2/filtering/index.x?pg=1
Take special note of the claims ATI makes in the PDF documents presented.
Quote:
Whatever the merits of ATI's adaptive trilinear filtering algorithm, ATI appears to have intentionally deceived members of the press, and by extension, the public, by claiming to use "full" trilinear filtering "all of the time" and recommending the use of colored mip map tools in order to verify this claim. Encouraging reviewers to make comparisons to NVIDIA products with NVIDIA's similar trilinear optimizations turned off compounded the offense. Any points ATI has scored on NVIDIA over the past couple of years as NVIDIA has been caught in driver "optimizations" and the like are, in my book, wiped out.
http://graphics.tomshardware.com/gr...timized-13.html
Quote:
Conclusion
All Filter optimizations discussed here aim to increase the performance of the graphics cards without materially reducing image quality. The word "materially" is, however, subjective - depending on the optimization used, a loss in quality is perceptible when taking a closer look. Even if the quality in screenshots is OK, a running game is often a different chapter. Annoying effects (moiré, flickering) can crop up that were not noticeable on screenshots.
In the case of graphics cards in the medium and lower price segment, the customer will certainly get added value in the filter optimizations, because "correct" filtering would slow the chips down too much. The user can play in higher resolutions or add filter effects that without the optimizations would be unplayable. The bottom line is that the customer ends up with better image quality.
It's a different story with the new enthusiast cards, such as the Radeon X800 Pro/XT and the GeForce 6800 Ultra/GT. With those cards the optimizations do not provide the customer with new added value - on the contrary. He gets a reduced image quality, although the card would actually be fast enough to deliver maximum quality at what would surely still be an excellent frame rate. We cannot escape the impression that the filter optimizations in the new top models will no longer be used ultimately to offer the customer added value, but rather solely in order to beat the competition in the benchmark tables, which are so important in the prestige category. Whether or not the customer will be ready to spend $400-$500 for this is quite another matter. NVIDIA has obviously realized this and allows true trilinear filtering as an option in its newest models. Well, it did not work in the latest v61.11 beta driver because of a bug... let's hope it indeed is a bug and will work again in the final driver release.
However, slowly but surely manufacturers are moving to the point where tolerable limits are being exceeded. "Adaptivity" or application detection prevent test applications from showing the real behavior of the card in games. The image quality in games can differ depending on the driver used or on the user. The manufacturers can therefore fiddle with the driver, depending on what performance marketing needs at a given moment. The customer's right to know what he is actually buying therefore falls by the wayside. All that is left for the media is to limp along with their educational mission. The filter tricks discussed in this article are only the well-known cases. How large the unknown quantity is cannot even be guessed.
Every manufacturer decides for itself what kind of image quality it will provide as a standard. It should, however, document the optimizations used, especially when they do not come to light in established tests, as lately seen with ATi. The solution is obvious: make it possible to switch off the optimizations. Then the customer can decide for himself where his added value lies - more FPS or maximum image quality. There is no real hope that Microsoft will act to police optimization. The WHQL tests fail to cover most of them and also can be easily evaded, read: adaptivity.
Still, the ongoing discussion also has its benefits - the buyer, and perhaps, ultimately, OEMs are being sensitized to this issue. Because the irrepressible optimization mania will surely continue. However, there are also bright spots in the picture, as demonstrated by NVIDIA's trilinear optimization. We hope to see more of the same!
And for those of you who think PS 2.0b can do pretty much everything PS 3.0 can, here are some of the major differences.
Feature                              PS 2.0b         PS 3.0
Dependent texture limit              4               No limit
Position register                    No              Yes
Executed instructions                512             65,536
Interpolated registers               2+8             10
Instruction predication              No              Yes
Indexed input registers              No              Yes
Constant registers                   32              224
Arbitrary swizzling                  No              Yes
Gradient instructions                No              Yes
Loop count register                  No              Yes
Face register (two-sided lighting)   No              Yes
Dynamic flow control depth           None            24
Minimum shader precision             FP24 (96-bit)   FP32 (128-bit)
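The practical upshot of the loop count register and dynamic flow control entries can be sketched like this. This is an illustrative Python model, not real shader code, and the one-unit-per-iteration cost accounting is a deliberate simplification; it just shows why fixed unrolling makes every pixel pay the worst-case price:

```python
def shade_sm20b(lights, max_lights=4):
    """SM 2.0b style sketch: with no dynamic flow control, loops are
    unrolled to a fixed count at compile time, so every pixel executes
    all max_lights iterations; unused lights are masked out with a
    multiply rather than skipped."""
    color = 0.0
    cost = 0
    for i in range(max_lights):                      # always fully unrolled
        contrib = lights[i] if i < len(lights) else 0.0
        mask = 1.0 if i < len(lights) else 0.0
        color += contrib * mask                      # masked, not skipped
        cost += 1
    return color, cost

def shade_sm30(lights, max_lights=4):
    """SM 3.0 style sketch: a loop count register plus dynamic flow
    control lets the shader iterate only over the lights that exist."""
    color = 0.0
    cost = 0
    for i in range(min(len(lights), max_lights)):    # dynamic loop bound
        color += lights[i]
        cost += 1
    return color, cost
```

A pixel lit by one light pays for four iterations under the 2.0b model but only one under the 3.0 model, while both produce the same color, which is the kind of saving the dynamic-branching rows are getting at.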
----------------------------------------------------
One of those links doesn't work, so here it is:
Link
If you want to read through the whole thread, here's the link.
Link