GPU Physics?

Miker75

Does anyone know if Ep 2/TF2/Portal etc. are going to be able to use the GPU physics that Havok now supports?

I've got a spare X1600 now that I've upgraded to a 2900XT.
 
My guess is no. GPU physics (that aren't DX10) have very high latency times. So under DX9 and the like they can only be used for "pretty effects". In DX10 the GPU is more independent from the CPU, making for less latency and allowing GPU physics to interact with gameplay. Valve probably wants a little more than pretty effects from their physics right now.
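
To picture the latency problem, here's a toy sketch (every name in it is made up for illustration, not any real API) of why high readback latency keeps pre-DX10 GPU physics stuck at eye candy:

```cpp
// Toy illustration: effects-only GPU physics vs. gameplay physics.
// All names are invented; this is not a real graphics API.
#include <cstdio>

struct GpuBuffer { float data[64]; };            // pretend video memory

void DispatchGpuSimulation(GpuBuffer&) { /* queued; returns at once */ }
void DrawParticles(const GpuBuffer&)   { /* consumed on the GPU */ }

// "Pretty effects" path: simulate and render entirely on the GPU.
// The CPU never reads the results back, so the latency is hidden.
void SimulateEffects(GpuBuffer& particles) {
    DispatchGpuSimulation(particles);
    DrawParticles(particles);
}

// Gameplay path: the CPU needs the answer (collision response, AI,
// networking), so it has to wait for the GPU pipeline to drain.
float SimulateGameplayObject(GpuBuffer& bodies) {
    DispatchGpuSimulation(bodies);
    // On a DX9-era card this readback stalls for frames' worth of
    // queued work -- the "very high latency" mentioned above.
    return bodies.data[0];  // stand-in for an expensive GPU->CPU copy
}

int main() {
    GpuBuffer particles{}, bodies{};
    SimulateEffects(particles);
    std::printf("gameplay result: %f\n", SimulateGameplayObject(bodies));
}
```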

That, and Valve has worked their asses off on multi-core processors. So I'm guessing they're just going to utilize multi-core CPUs.


Plus, if you have a 2900XT and you don't have a multi-core processor, then I'm guessing you have a bottleneck somewhere.
 
Man, anyone who has a single-core processor and a 2900XT is completely screwed... the performance will be about the same as a 7800...
 
I'd say it's Havok that wrote the multi-CPU code... their website says that they've made updates to support multi-CPU physics calculations. I'm assuming Valve have just used the updated code for Ep2.

And yes I do have a dual-core CPU :)
 
Why would you do the physics calculations on the GPU? Surely CPUs (especially multi-core ones) are a much better option.
 
I'd say it's Havok that wrote the multi-CPU code... their website says that they've made updates to support multi-CPU physics calculations. I'm assuming Valve have just used the updated code for Ep2.

And yes I do have a dual-core CPU :)

Nope. Valve added their own multi-core solution to the Source engine, which will apply not only to physics, but also to particle effects and AI. Which you would know if you read tech articles. :)
 
Nope. Valve added their own multi-core solution to the Source engine, which will apply not only to physics, but also to particle effects and AI. Which you would know if you read tech articles. :)

I do read tech articles, thanks. And why would Valve go out of their way to add multi-core support to the Havok engine (which they use) when Havok already has???

I'm not talking about the AI in this article, my friend. I'm sure they've made the Source engine support multi-core CPUs; however, they use an outside vendor for physics, much like every other game developer.
 
Why would you do the physics calculations on the GPU? Surely CPUs (especially multi-core ones) are a much better option.

Well, according to ATI, they can get their X1900 (the best card at the time of the article) to calculate physics about 9x faster than the PhysX hardware physics card. They also mentioned that the X1600 (a rather cheap card these days) was able to outperform the PhysX card as well. But it's yet to be seen whether it can be used effectively in a game (or at all! I haven't seen it released yet).
 
I do read tech articles, thanks. And why would Valve go out of their way to add multi-core support to the Havok engine (which they use) when Havok already has???

I'm not talking about the AI in this article, my friend. I'm sure they've made the Source engine support multi-core CPUs; however, they use an outside vendor for physics, much like every other game developer.

It would be a harder job to take multithreaded Havok, rip its low-level multithreaded guts out, fix all the high-level things you broke by doing so, and then get it to work with the Source engine's multithreading model than it would be to adapt the engine's current physics (modified single-threaded Havok) to Source's multithreading model. Especially since the coders will be doing the same thing to many of Source's other current components.

Also, AI was only mentioned as something else they were parallelizing. They're giving most of the major subsystems of the engine this big overhaul, so it's reasonable to mention. If they were keeping the rest of Source single-threaded and just parallelizing the physics system then, yes, it would probably be easier to use Havok's upgrade. However, since they're multithreading most of the engine (and doing the scheduling themselves), it makes sense to do it themselves, giving them maximum flexibility to ensure that all the systems "play nice" together.

It's likely that Havok is maintaining a single-core codepath in addition to its new multicore version, since thread scheduling creates unwanted overhead on a single-core system. If this is the case, it's also likely that the high-level abstractions of physics objects are the same in the single- and multi-core versions, so if Valve wanted the shiny new features of Havok's current version, they probably could get them. About as easily as they got the old physics working with the system and updated for multi-core, anyway.
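
To make that concrete, here's roughly what "adapt the current single-threaded physics to the engine's own multithreading model" could look like. Just a sketch with invented names (TaskScheduler, PhysicsWorld), nothing like Valve's actual code:

```cpp
// Sketch: an engine-owned scheduler runs an internally single-threaded
// physics step as one job alongside other subsystems. Names invented.
#include <functional>
#include <thread>
#include <vector>

struct PhysicsWorld {
    // Stand-in for the existing single-threaded (Havok-style) step.
    void Step(float dt) { (void)dt; }
};

class TaskScheduler {
public:
    void Submit(std::function<void()> job) { jobs_.push_back(std::move(job)); }
    void RunFrame() {
        // Naive fan-out; a real engine would pool and load-balance.
        std::vector<std::thread> workers;
        for (auto& job : jobs_) workers.emplace_back(job);
        for (auto& w : workers) w.join();
        jobs_.clear();
    }
private:
    std::vector<std::function<void()>> jobs_;
};

int main() {
    PhysicsWorld physics;
    TaskScheduler scheduler;
    // Physics stays single-threaded inside; the engine just schedules
    // it alongside AI, particles, etc. on whatever cores exist.
    scheduler.Submit([&] { physics.Step(1.0f / 60.0f); });
    scheduler.Submit([]  { /* AI tick */ });
    scheduler.Submit([]  { /* particle tick */ });
    scheduler.RunFrame();
}
```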
 
It would be a harder job to take multithreaded Havok, rip its low-level multithreaded guts out, fix all the high-level things you broke by doing so, and then get it to work with the Source engine's multithreading model than it would be to adapt the engine's current physics (modified single-threaded Havok) to Source's multithreading model. Especially since the coders will be doing the same thing to many of Source's other current components.

Also, AI was only mentioned as something else they were parallelizing. They're giving most of the major subsystems of the engine this big overhaul, so it's reasonable to mention. If they were keeping the rest of Source single-threaded and just parallelizing the physics system then, yes, it would probably be easier to use Havok's upgrade. However, since they're multithreading most of the engine (and doing the scheduling themselves), it makes sense to do it themselves, giving them maximum flexibility to ensure that all the systems "play nice" together.

It's likely that Havok is maintaining a single-core codepath in addition to its new multicore version, since thread scheduling creates unwanted overhead on a single-core system. If this is the case, it's also likely that the high-level abstractions of physics objects are the same in the single- and multi-core versions, so if Valve wanted the shiny new features of Havok's current version, they probably could get them. About as easily as they got the old physics working with the system and updated for multi-core, anyway.

Why would you want to "rip its low-level multithreaded guts out" anyway?
I'm pretty sure that Havok wrote their code with synchronizing it with developers' code in mind. Otherwise, if it is as you said, it would be useless and every developer would need to "rip it out".

And as far as Valve keeping a "single core" version of their code... I doubt it. Single-core systems can run the multi-threaded code fine; they'll just take a (very marginal) performance hit... a fair trade-off for increased performance on multi-core systems.
 
Why would you want to "rip its low-level multithreaded guts out" anyway?
I'm pretty sure that Havok wrote their code with synchronizing it with developers' code in mind. Otherwise, if it is as you said, it would be useless and every developer would need to "rip it out".

And as far as Valve keeping a "single core" version of their code... I doubt it. Single-core systems can run the multi-threaded code fine; they'll just take a (very marginal) performance hit... a fair trade-off for increased performance on multi-core systems.

I don't remember Valve switching to the later Havok software. Didn't they just modify the earlier Havok code they had to add stuff like cinematic physics?
 
Why would you want to "rip its low-level multithreaded guts out" anyway?
I'm pretty sure that Havok wrote their code with synchronizing it with developers' code in mind. Otherwise, if it is as you said, it would be useless and every developer would need to "rip it out".

Valve's approach, as I understand it, is to have a central thread that handles load-balancing and scheduling across all systems, using lock-free algorithms, which gives them enormous control over how the system runs. If Havok is taking a different approach (which is likely), such as offloading things to the OS, it may not fit in very well with Valve's framework, and the low-level nitty-gritty would have to be rewritten.
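
For anyone wondering what "lock-free" buys you here, a minimal sketch (my guess at the flavor of the thing, definitely not Valve's actual code): workers claim jobs by atomically bumping a shared index, so no thread ever blocks on a mutex:

```cpp
// Minimal lock-free work handout: each job index is claimed exactly
// once via fetch_add, with no locks for the threads to contend on.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int kJobs = 16;
    std::vector<int> results(kJobs, 0);
    std::atomic<int> next{0};  // next unclaimed job index

    auto worker = [&] {
        for (;;) {
            int i = next.fetch_add(1, std::memory_order_relaxed);
            if (i >= kJobs) break;
            results[i] = i * i;  // stand-in for real physics/AI work
        }
    };

    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t) pool.emplace_back(worker);
    for (auto& t : pool) t.join();

    for (int r : results) std::printf("%d ", r);
    std::printf("\n");
}
```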

And as far as Valve keeping a "single core" version of their code... I doubt it. Single-core systems can run the multi-threaded code fine; they'll just take a (very marginal) performance hit... a fair trade-off for increased performance on multi-core systems.

I said Havok was likely keeping a single core version, not Valve. And in game programming, if they can get a few more frames per second by running a single-threaded codepath on a single-core processor, they'll likely do so. At least until almost everybody has a multi-core system.
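
The codepath switch itself is trivial, something like this (function names invented for illustration):

```cpp
// Sketch: detect the core count once at startup and skip threading
// overhead entirely on a single-core machine. Names are made up.
#include <cstdio>
#include <thread>

void StepPhysicsSingleThreaded() { std::puts("single-threaded path"); }
void StepPhysicsMultiThreaded()  { std::puts("multi-threaded path");  }

int main() {
    unsigned cores = std::thread::hardware_concurrency();  // logical CPUs
    if (cores <= 1)
        StepPhysicsSingleThreaded();  // no scheduling/sync cost
    else
        StepPhysicsMultiThreaded();
}
```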
 
Valve's code is based off of Havok 1. They completely revamped Havok 1 so its feature set was more like Havok 2's. This was back before HL2 was released, so the code bases behind the latest Havok and Valve's Havok are very, very different by now.

Also, Valve wants their own multi-threaded system. They ported Source to 3 or 4 different styles of multi-threading to find which one worked the best, was most flexible, and provided the best speed.
 
So did HL2 and EP1 not support multi-core processors? Because if they didn't, that means I'm going to see a cool performance increase with EP2. And I'm already at high settings with my lappy's x1600.
 
Multicore wasn't supported when HL2 and EP1 were released. The entire Source engine will be (or has been? I can't remember) updated for multicore, so any Source game will take advantage of it.
 
I think I must be typing something wrong or asking something stupid, but is there hyper-threading support or not!!!!?????
 
I think I must be typing something wrong or asking something stupid, but is there hyper-threading support or not!!!!?????

Well, hyper-threading makes one physical processor appear as two logical processors, so anything that takes advantage of multiple logical processors (like the multi-threaded Source engine, for example) "supports" hyper-threading, even if it doesn't realize that two threads being run on different logical processors may be running on the same physical processor.
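
If you want to see the physical/logical split on your own box, here's a small Windows sketch. GetLogicalProcessorInformation and GetSystemInfo are real Win32 calls; error handling is trimmed for brevity:

```cpp
// Counts physical cores vs. logical processors. On a hyper-threaded
// chip, logical > physical (e.g. 2 logical for 1 physical core).
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD bytes = 0;
    GetLogicalProcessorInformation(nullptr, &bytes);  // query size
    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        bytes / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    GetLogicalProcessorInformation(info.data(), &bytes);

    int physical = 0;
    for (const auto& entry : info)
        if (entry.Relationship == RelationProcessorCore) ++physical;

    SYSTEM_INFO si;
    GetSystemInfo(&si);  // dwNumberOfProcessors counts logical CPUs

    std::printf("physical cores: %d, logical processors: %lu\n",
                physical, si.dwNumberOfProcessors);
}
```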

Some (not many) applications have more explicit support for hyper-threading, so that in a system with, say, 4 physical hyper-threaded cores appearing as 8 logical processors, the scheduler won't put the 2 most active threads (or ones involving real-time constraints or user interaction) on the same physical processor while other physical processors have lighter loads. I could also imagine situations where a smart scheduler could see that two threads would run well together with a shared cache, so it could prefer to put two such threads on the same physical processor.
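
That kind of explicit placement boils down to setting thread affinities. Here's a sketch using the real Win32 call SetThreadAffinityMask, assuming (purely for illustration) a 2-core/4-logical-CPU layout where logical CPUs 0-1 share one physical core and 2-3 share the other; real code would query the topology instead of hard-coding masks:

```cpp
// Pins two busy threads to different physical cores so they don't
// fight over one core's execution units. Layout is assumed, not queried.
#include <windows.h>
#include <process.h>

unsigned __stdcall HeavyWork(void*) { /* physics, say */ return 0; }

int main() {
    HANDLE a = (HANDLE)_beginthreadex(nullptr, 0, HeavyWork, nullptr, 0, nullptr);
    HANDLE b = (HANDLE)_beginthreadex(nullptr, 0, HeavyWork, nullptr, 0, nullptr);

    SetThreadAffinityMask(a, 0x3);  // logical CPUs 0|1 -> physical core 0
    SetThreadAffinityMask(b, 0xC);  // logical CPUs 2|3 -> physical core 1

    WaitForSingleObject(a, INFINITE);
    WaitForSingleObject(b, INFINITE);
    CloseHandle(a);
    CloseHandle(b);
}
```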

I don't know if the Source engine has this kind of explicit support for hyper-threading. On the one hand, Valve are doing what they're doing with the goal of making the engine work well on 4- to 8- (and more) core systems. On the other hand, hyper-threading is being phased out while multi-core systems are being phased in, so I don't think it'll ever make sense for them to go out of their way to add it. Very, very few users will have hyper-threaded multiprocessor (i.e. at least 4 logical processors) gaming rigs, so it's probably not worth the effort of supporting it.

But that's not a bad thing, assuming you have a hyper-threaded system. The multi-threaded Source engine should run better on a hyper-threaded system than the single-threaded engine did. Unless you have multiple physical processors, the issue of explicit support won't make any difference.
 