Instead of a scene graph approach (just say no), they propose the SpatialGraph (for visibility determination), the SceneTree (for animation and transform update), and the RenderQueue (filled from the SpatialGraph, just draws stuff fast). It is a division that makes much more sense than a scene graph, which tries to handle all of the above.
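To make the split concrete, here's a minimal sketch of how those three structures might divide the work. This is not their actual API -- every type and member below is hypothetical, just to show where the responsibilities land:

```cpp
#include <cstdint>
#include <vector>

struct Matrix4 { float m[16]; };
struct Aabb    { float min[3], max[3]; };
struct Frustum { float planes[6][4]; };

// SceneTree: owns the transform hierarchy and animation/transform update.
// Its output is a flat array of world matrices; nothing in here knows
// anything about rendering.
struct SceneTree {
    std::vector<int>     parent; // -1 for roots; parents stored before children
    std::vector<Matrix4> local;  // animated local transforms
    std::vector<Matrix4> world;  // derived world transforms
    void Update(float dt);       // run animation, then a local -> world pass
};

// SpatialGraph: a spatial index (octree, BVH, whatever) used only for
// visibility determination. It tracks renderables by id, not scene nodes.
struct SpatialGraph {
    void Update(uint32_t renderableId, const Aabb& worldBounds);
    void CullVisible(const Frustum& f, std::vector<uint32_t>& out) const;
};

// RenderQueue: a flat, sortable list of draw items filled from the
// SpatialGraph's cull results. It never walks a hierarchy -- it just
// sorts and submits.
struct DrawItem { uint64_t sortKey; uint32_t renderableId; };
struct RenderQueue {
    std::vector<DrawItem> items;
    void SortAndSubmit(const std::vector<Matrix4>& worldTransforms);
};
```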
One of my biggest dislikes about scene graphs is how they misled us all into thinking that animation and transform update belong in the renderer. The renderer just deals with the results of these operations -- it doesn't much care how the transforms were created; it just needs them. Coupling these two things doesn't really make much sense.
If anything, animation/transform update is much more tightly coupled with physics and collision. Conceptually, animation says "here's where I would like to go" and the physics system answers "here's where you can go." It is even more intertwined than that, because the physics system has complete control over things like rigid bodies and ragdolls -- deciding both where they would like to go and where they can go.
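In code, that relationship might look something like the sketch below -- animation proposes a pose, physics gets the final say. All of the names here are hypothetical:

```cpp
#include <vector>

struct Transform { float pos[3]; float rot[4]; };
using Pose = std::vector<Transform>;

// "Here's where I would like to go": pure animation output, no physics.
Pose ComputeDesiredPose(float dt);

// "Here's where you can go": physics/collision resolves the desired pose
// against the world and returns the pose that's actually allowed.
Pose ResolveAgainstWorld(const Pose& desired);

void SimulateCharacter(float dt, Pose& outFinal) {
    Pose desired = ComputeDesiredPose(dt);
    outFinal = ResolveAgainstWorld(desired);
    // The renderer only ever sees outFinal; it doesn't care which
    // system produced it.
}
```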
If you are doing any sort of ragdoll blending between animation and ragdoll simulation, you have a feedback loop: the animation system figures out its desired pose, and the ragdoll simulation figures out another. There is a blending step, but it's not always obvious where it should go. Traditionally the animation system is responsible for blending transforms, but there's an argument that the physics simulation should do it, because it knows the constraints and limitations on where the bones can physically be placed.
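The blend step itself is simple either way; the question is just which system owns it. A sketch with made-up names (positions lerped, quaternions nlerped for brevity):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct BoneTransform { float pos[3]; float rot[4]; }; // rot = quaternion xyzw

// Per-bone blend between the animated pose and the ragdoll pose.
// Positions are lerped; quaternions are nlerped, which is fine for small
// deltas -- a production version would slerp and handle hemisphere flips.
static BoneTransform Blend(const BoneTransform& a, const BoneTransform& b, float t) {
    BoneTransform r;
    for (int i = 0; i < 3; ++i) r.pos[i] = a.pos[i] + (b.pos[i] - a.pos[i]) * t;
    float len = 0.f;
    for (int i = 0; i < 4; ++i) {
        r.rot[i] = a.rot[i] + (b.rot[i] - a.rot[i]) * t;
        len += r.rot[i] * r.rot[i];
    }
    len = std::sqrt(len);
    for (int i = 0; i < 4; ++i) r.rot[i] /= len;
    return r;
}

void BlendPoses(const std::vector<BoneTransform>& animated,
                const std::vector<BoneTransform>& ragdoll,
                float ragdollWeight, // 0 = pure animation, 1 = pure ragdoll
                std::vector<BoneTransform>& out) {
    out.resize(animated.size());
    for (std::size_t i = 0; i < animated.size(); ++i)
        out[i] = Blend(animated[i], ragdoll[i], ragdollWeight);
    // If the physics system owned this step instead, it could clamp the
    // blended result against joint limits right here -- which is the
    // argument for putting the blend on the physics side.
}
```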
I haven't gotten into other interesting issues, such as vehicles, which can be a blend of physical simulation (the car moving) and animation (the door being opened), similar to ragdolls. When you add attachments to the mix (a gun turret, say), keeping the animation and physics systems in sync can be a challenge.
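A hedged sketch of that sync problem (the helper and member names are invented): physics writes the chassis transform, animation writes the door and the turret's socket in chassis space, and attachments get resolved only after both have run.

```cpp
struct Matrix4 { float m[16]; };
Matrix4 Mul(const Matrix4& a, const Matrix4& b); // assumed math helper

struct Vehicle {
    Matrix4 chassisWorld; // written by the rigid-body simulation
    Matrix4 doorLocal;    // written by the animation system, chassis space
    Matrix4 turretSocket; // animated attachment socket, chassis space
    Matrix4 turretLocal;  // turret's own aim transform, socket space
};

void SyncVehicle(const Vehicle& v, Matrix4& outDoor, Matrix4& outTurret) {
    // Order matters: physics first, then animation, then attachments --
    // otherwise the turret lags the chassis by a frame.
    outDoor   = Mul(v.chassisWorld, v.doorLocal);
    outTurret = Mul(Mul(v.chassisWorld, v.turretSocket), v.turretLocal);
}
```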
I'm starting to think that animation and physics are two sides of the same coin. I'm calling the combined thing "simulation." Obviously different games will have physics of different complexity, and I'm not sure coupling these two things so tightly is a one-size-fits-all solution. What I do know is that coupling animation/transform update with the renderer is almost never the right thing to do, even though there are still a large number of scene-graph-based rendering libraries available.
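If I were to sketch what that combined "simulation" system might look like (purely a hypothetical shape, not a finished design), it would own the animate/resolve/blend loop, and the renderer would only ever consume its flat array of final transforms:

```cpp
#include <vector>

struct Matrix4 { float m[16]; };

class Simulation {
public:
    void Step(float dt) {
        AnimateDesiredPoses(dt); // animation: where things want to go
        StepRigidBodies(dt);     // physics: where things can go
        BlendAndResolve();       // the ragdoll-style feedback step
    }
    // The renderer's entire view of this system:
    const std::vector<Matrix4>& FinalWorldTransforms() const { return world_; }
private:
    void AnimateDesiredPoses(float dt);
    void StepRigidBodies(float dt);
    void BlendAndResolve();
    std::vector<Matrix4> world_;
};
```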
Isn't this headed towards the NaturalMotion concept, where animations created by artists are munged into physics forces and constraints? In this day and age of highly efficient physics solvers, it seems to make sense even for games that aren't necessarily physics-heavy as part of their design. I personally think things that are traditionally animated (UI elements, for example) look better in motion when physics drives them.
I view it more as NaturalMotion fitting into the overall concept, but you don't have to go that far or implement things that way if you don't need it. There's a difference between thinking of animations as applying forces on the world and actually doing it (which brings a whole host of complications). Ultimately we're just trying to make something that plays fun and looks good, so any corners we can cut, we should.
The UI elements idea is interesting -- using simple 2D physics for animating things on the screen. It's certainly a way to cut down on content creation time while still having things look cool.
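For what it's worth, the cheapest version of that is probably a critically damped spring per element instead of a hand-authored curve -- something like this sketch (all names invented):

```cpp
#include <cmath>

// Pulls a UI element toward a target position with spring physics.
struct UiSpring {
    float pos[2]{}, vel[2]{};
    float stiffness = 200.f; // higher = snappier motion

    void Step(const float target[2], float dt) {
        // Critical damping (damping = 2 * sqrt(stiffness) for unit mass)
        // means the element settles on the target without overshooting.
        const float damping = 2.f * std::sqrt(stiffness);
        for (int i = 0; i < 2; ++i) {
            float accel = stiffness * (target[i] - pos[i]) - damping * vel[i];
            vel[i] += accel * dt; // semi-implicit Euler keeps this stable
            pos[i] += vel[i] * dt;
        }
    }
};
```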