GPU Physics

When ATI and NVIDIA launched their first physics initiatives in 2006, they rallied behind Havok, the physics middleware provider whose software has powered a great number of PC games this decade. Havok in turn produced Havok FX, a separate licensable middleware package that used Shader Model 3.0 to calculate physics on supported GPUs. Havok FX was released in Q2 of 2006, and if you haven't heard about it, you're not alone.

So far not a single game has shipped that uses Havok FX; plenty of games have shipped using the standard, entirely CPU-powered Havok middleware, but none with Havok FX. The only title we know of that has been announced with Havok FX support is Hellgate: London, which is due this year. However, there has been next-to-no mention of this since NVIDIA's announcement in 2006, so make of that what you will.

Why any individual developer chooses to use Havok FX or not will have its own answer, but there are a couple of common threads that we believe explain much of the situation. The first is pure business: Havok FX costs extra to license. We're not privy to the exact fee schedule Havok charges, but it's no secret that PC gaming has been in decline - it's a bad time to be spending more when it can be avoided. Paying for Havok FX isn't going to break the bank for the large development houses, but there are other, potentially cheaper options.

The second reason, and the one with the greater effect, is a slew of technical limitations that stem from using Havok FX. Paramount among these is that what the GPU camp is calling physics is not what the rest of us could call physics with a straight face. As Havok FX was designed, the physics simulations run on the GPU are not retrievable in a practical manner; as such, Havok FX is meant to generate "second-order" physics. Such physics are not related to gameplay and are inserted as eye-candy. A good example of this is Ghost Recon: Advanced Warfighter; setting aside for the moment that it was a PhysX-powered title, the relevant point is that it used the PhysX hardware primarily for extra debris.
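To make that distinction concrete, below is a minimal sketch of what second-order debris physics amounts to, written in modern GPGPU (CUDA) terms purely for illustration; this is not Havok FX's actual Shader Model 3.0 implementation, which isn't public. The particle state lives in GPU memory, is integrated there every frame, and feeds straight into rendering; the CPU never reads the results back, which is exactly why the simulation can't touch gameplay.

    // Illustrative sketch only: second-order (eye-candy) debris physics.
    // The Particle buffer lives in GPU memory and is never copied back
    // to the CPU, so game logic can't react to anything computed here.
    #include <cuda_runtime.h>

    struct Particle {
        float3 pos;   // world-space position
        float3 vel;   // velocity
    };

    __global__ void integrateDebris(Particle* p, int n, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        p[i].vel.y -= 9.8f * dt;         // apply gravity
        p[i].pos.x += p[i].vel.x * dt;   // explicit Euler integration
        p[i].pos.y += p[i].vel.y * dt;
        p[i].pos.z += p[i].vel.z * dt;

        if (p[i].pos.y < 0.0f) {         // crude ground-plane bounce
            p[i].pos.y = 0.0f;
            p[i].vel.y *= -0.4f;
        }
    }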

The problem with this is of course obvious, and Havok goes through a great deal of trouble in their Havok FX literature to make it clear. The extra eye-candy is nice, and it's certainly an interesting way to bypass the problem of lots of little things loading down the CPU (although Direct3D 10 has reduced the performance hit of this), but it also means that the GPU can't have any meaningful impact on gameplay. This doesn't make Havok FX entirely useless, since eye-candy does serve its purpose, but it's not what most people (ourselves included) envision when we think of hardware-accelerated physics; we're looking for the next step in interactive physics, not more eye-candy.

There's also a secondary issue that sees little discussion, largely because it's not immediately quantifiable, and that's performance. Because Havok FX does its work on the GPU, shader resources otherwise used for rendering may be reallocated to physics calculations, leaving the remaining resources to handle the normal rendering load plus the additional rendering work Havok FX generates by creating more eye-candy. With the majority of new titles already GPU-limited, it's not hard to imagine this becoming a problem.


A Jetway board with 3 PCIe x16 slots. We're still waiting to put them to use.

Thankfully for the GPU camp, Havok isn't the only way to get some level of GPU physics; Shader Model 4.0 introduces some new options. Besides reimplementing something like Havok FX as custom code, with proper preparation the geometry shader can be used to do second-order physics much as Havok FX does. For example, the Call of Juarez technology demonstration uses this technique for its water effects. That said, using the geometry shader carries the same limitation as Havok FX: the resulting data can't be practically retrieved for first-order physics.

The second, and by far more interesting, use of new GPU technology is exploiting GPGPU techniques to do physics calculations for games. ATI and NVIDIA provide the CTM and CUDA interfaces respectively to allow developers to write high-level code for GPUs to do computing work, and although the primary market for GPGPU technology is high-performance research computing, it's possible to use this same technology with games. NVIDIA is marketing this under the Quantum Effects initiative, separating it from their earlier Havok-powered SLI Physics initiative.
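To illustrate where the first-order/second-order line actually falls with these interfaces, here is a hypothetical host-side sketch reusing the Particle and integrateDebris definitions from the earlier sketch; it is not NVIDIA's Quantum Effects API, which we haven't seen. The whole question comes down to whether the simulated state ever crosses back over the bus to the CPU, where gameplay logic runs.

    // Hypothetical per-frame host code. For second-order physics the
    // results stay on the GPU and are consumed directly by the renderer;
    // first-order physics additionally requires reading the state back
    // to the CPU every frame so game logic can react to it.
    void stepPhysics(Particle* devParticles, Particle* hostParticles,
                     int n, float dt, bool firstOrder)
    {
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        integrateDebris<<<blocks, threads>>>(devParticles, n, dt);

        if (firstOrder) {
            // Device-to-host readback: needed only if gameplay code must
            // see the results (collision response, hit detection, AI).
            cudaMemcpy(hostParticles, devParticles,
                       n * sizeof(Particle), cudaMemcpyDeviceToHost);
        }
        // Otherwise the particle buffer is handed to the renderer in place.
    }

That per-frame readback stalls the pipeline while data crawls back over the bus, which goes a long way toward explaining why every shipping approach to date has stuck to second-order effects.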

Unfortunately, the tools for all of these technologies are virtually brand new, so games using GPGPU techniques are going to take some time to arrive. This would roughly be in line with the arrival of games that make serious use of DirectX 10, which includes the lag period where games still need to support older hardware and hence can't take full advantage of GPGPU techniques. The biggest question here is whether any developers using GPGPU techniques will end up using the GPU for first-order physics or solely for second-order effects.

It's due to all of the above that the GPU camp has been so quiet about physics of late. Given that the only commercially ready GPU-accelerated physics technology is limited to second-order physics, and only one game using it is due to be released this year, there's simply not much to be excited about at the moment. If serious GPU-accelerated physics is to arrive, it's at least another video card upgrade away.


32 Comments


  • FluffyChicken - Thursday, July 26, 2007 - link

    While it's not mass market like gaming, there is Microsoft Robotics Studio, which implements AGEIA PhysX hardware (& software?)
    So they are trying ;-)

    Microsoft Robotics Studio targets a wide audience in an attempt to accelerate robotics development and adoption. An important part of this effort is the simulation runtime. It was immediately obvious that PC and Console gaming has paved the way when it comes to affordable, widely usable, robotics simulation. Games rely on photo-realistic visualizations with advanced physics simulation running within real time constraints. This was a perfect starting point for our effort.

    We designed the simulation runtime to be used in a variety of advanced scenarios with high demands for fidelity, visualization, and scaling. At the same time, a novice user with little to no coding experience can use simulation; developing interesting applications in a game-like environment. Our integration of the AGEIA PhysX Technologies enables us to leverage a very strong physics simulation product that is mature and constantly evolving towards features that will be invaluable to robotics. The rendering engine is based on Microsoft XNA Framework.


    So expect there to be a large surge at Dell for the 15-year-olds to hook up the Lego.
  • DeathBooger - Thursday, July 26, 2007 - link

    There is no need. Not to mention Epic hasn't said anything about it in over two years. If anything it would just be eye candy, since Unreal Tournament 3 relies on its online multiplayer. You can't have added interactive features that only a percentage of players will be able to utilize in a multiplayer game.

    Some Unreal Engine 3 titles are replacing the built-in Ageia SDK in favor of Havok's SDK. Stranglehold and Blacksite are examples of this.
  • Bladen - Friday, July 27, 2007 - link

    Physics cards go here >

    Non physics cards go there <
  • Schrag4 - Thursday, July 26, 2007 - link

    My friends and I have had this 'chicken and egg' discussion on many occasions, specifically about why physics hardware is not taking off. As long as a game only uses the physics for eye-candy, the feature won't affect gameplay at all and can therefore be turned off by those who don't have the resources to play with it turned on (no PhysX card, no multiple cores, no SLI graphics, whatever). So who's gonna buy a 200-400 dollar card that's not needed?

    In order for hardware like PhysX to take off, there MUST be a game where the physics is up front, interactive, what makes the game fun to play, and it MUST be required. Not only that, but it better be one hell of a game, one that people just can't do without. I mean, after all, since this is the 'egg' in the chicken-egg scenario, you're basically spending 400 bucks for the game that you want to play, since there are no other games that are even worth mentioning (again, if it's just eye candy, who cares).

    If you don't believe me about the eye-candy comments (about how eye-candy has its place but is over-valued), then please explain to me why the Wii is outselling its direct competition. It's because the games are FUN (mostly because of the innovative interface), not because they look great (they don't). I mean, come on, who cares what a game looks like if it's tedious and frustrating, shoot, even just boring to play.

    What we're longing for is a game where there are no more canned animations for everything. For instance, you don't press a fire button to swing a sword. You somehow define a sword stroke that's different every time you swing. Also, whether or not you hit your target should not be defined by your distance from your target. It should be defined by the strength of the joints that make up your character, along with the mass of the sword, along with the mass of whatever gets in the way of your swing, etc etc. We're actually working on such a game. It's early in development, and we don't plan on having anything beyond what can be played at LAN parties, but it's a dream we all share and maybe, just maybe, we can eke out something interesting. FYI, we are using the PhysX SDK...
  • Myrandex - Friday, July 27, 2007 - link

    UT3 should use PhysX for environments and not just features. Reading the article shows that PhysX can be done in s/w. That way, everyone can play the same game, and join the same servers, etc., but if they are running on an older system, PhysX will just eat their CPU's resources completely. If they upgrade to 64-core 256-bit CPUs, then it will run nice, or if they pop in a little PCI card, it will run nice.

    Either way it is definite that the game has to be revolutionary, good, and always have PhysX running for at least the environmental aspects (maybe leave it as an option for particle physics so they can get performance back somehow for playing on their Compy 486).
  • AttitudeAdjuster - Thursday, July 26, 2007 - link

    The issue of getting access to the results of any calculation performed on a GPU is a major one. On that subject you might be interested to look at the preprint of a scientific paper about using multiple GPUs to perform real physical (not game-related physics) calculations using the NVIDIA CUDA SDK. The preprint is by Schive et al. (astro-ph/0707.2991, http://arxiv.org/abs/0707.2991) at the arXiv.org physics preprint server.
  • Warder45 - Thursday, July 26, 2007 - link

    I wonder what that new LucasArts game Fracture (I think) is using for the deformable terrain.
  • jackylman - Thursday, July 26, 2007 - link

    Typo in the last paragraph of Page 3:
    "...if the PhysX hardware is going to take of or not..."
  • Sulphademus - Friday, July 27, 2007 - link

    " We except Ageia will be hanging on for dear life until then."

    Also page 3. I except you mean expect.
  • Regs - Thursday, July 26, 2007 - link

    I would think AMD would be pushing more physics by using a co-processor. Why not have Ageia team up with AMD to make one for games and sell an AMD CPU bundled with the co-processor for gamers? I think that would be a lesser risk than making a completely independent card for it.
