Sunday, August 23, 2009

I'm not sure why, but Microsoft seems intent on crippling XNA for the 360. Perhaps they want to sell more dev kits.
I recently had some more time to put into my little toy project, and I've now got a deferred lighting implementation running on the PC.
For the lighting-buffer construction, at first I was using a tiled approach similar to Uncharted's, which did not require blending during the lighting stage. It worked for the most part, and it allowed me to use LogLuv for encoding the lighting information, which was faster than using an FP16 target. But it had issues: I didn't have any lighting-target ping-ponging set up, so I was stuck with a fixed limit of seven lights per tile, and even with smallish tiles you end up doing a lot of work on pixels not actually affected by the lights in question. So I wanted to compare it to a straightforward blending approach: I switched back to an FP16 target and now render the light volumes directly, using the stencil approach detailed in ShaderX7's Light Pre-Pass article.
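To give a feel for the blending path, here is a rough sketch of the light-accumulation pass in XNA 3.x terms. It's a sketch only, not the project's actual code: the FP16 target is SurfaceFormat.HalfVector4, and Light, lights, lightBuffer, and DrawLightVolume are placeholder names.

// Light accumulation sketch (XNA 3.x, Microsoft.Xna.Framework.Graphics).
// Assumes this runs inside a Game, so GraphicsDevice is available.
int width = GraphicsDevice.Viewport.Width;
int height = GraphicsDevice.Viewport.Height;
RenderTarget2D lightBuffer = new RenderTarget2D(
    GraphicsDevice, width, height, 1, SurfaceFormat.HalfVector4);

GraphicsDevice.SetRenderTarget(0, lightBuffer);
GraphicsDevice.Clear(Color.Black);

// Additive blending: each light volume adds its contribution to the buffer.
GraphicsDevice.RenderState.AlphaBlendEnable = true;
GraphicsDevice.RenderState.SourceBlend = Blend.One;
GraphicsDevice.RenderState.DestinationBlend = Blend.One;
GraphicsDevice.RenderState.DepthBufferWriteEnable = false;

foreach (Light light in lights)
    DrawLightVolume(light);   // stencil-tested convex volume per light

GraphicsDevice.SetRenderTarget(0, null);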
So this all worked great, and my little toy is rendering 100 lights. Of course, on the 360 there's a problem. Microsoft, in its infinite wisdom, decided that the 360's FP10 buffer format would blow people's minds, so it is not supported in XNA. Instead you get an actual FP16 target, which does not support blending on that hardware.
So I guess it is going to be back to alternate lighting-buffer encoding schemes, bucketing, and render-target ping-ponging for me. It's not a huge deal, but it is frustrating.
It is a real shame that XNA gives the impression that the 360 GPU is crippled, when in reality it is anything but. Couple the lack of FP10 support with the inability to sample the depth buffer directly and the lack of control over XNA's use of EDRAM, and they've managed to turn the 360 into a very weak, very old PC.
Least common denominator approaches generally haven't fared that well over the years. An XBLA title implemented in XNA is going to be at a fundamental disadvantage -- I don't think you are going to see anything approaching the richness of Shadow Complex, for example.
At the end of the day, Microsoft needs to figure out where they are going with XNA. If they are going to dumb it down and keep it as a toy for people who can't afford a real development kit (people who've been bumping into these low ceilings much longer than me), then they should keep on their current path.
The potential for XNA is really much more than that, though. Today I wrote a pretty decent menu system in about 45 minutes that handles gamepad, keyboard, and mouse input seamlessly. I don't think I could have written that anywhere near as fast in C++/DirectX. And looking down the road to future generations of hardware, I'm not worried about the overhead of C# being fundamentally limiting: games today already use scripting languages much less efficient than C#, and while you are currently limited to whatever heavy lifting Microsoft has chosen to implement for you, who is to say a future version of XNA couldn't allow shelling out to C++ for the really performance-intensive stuff?
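For what it's worth, the "seamless" part of that menu input is nothing clever; it boils down to polling all three devices each frame and mapping them onto the same set of actions. A stripped-down sketch (MenuAction and PollInput are placeholder names of mine; the GetState calls are the real XNA input API):

// Microsoft.Xna.Framework.Input -- poll gamepad, keyboard, and mouse each frame.
enum MenuAction { None, Up, Down, Select }

MenuAction PollInput(MouseState previousMouse)
{
    GamePadState pad = GamePad.GetState(PlayerIndex.One);
    KeyboardState keys = Keyboard.GetState();
    MouseState mouse = Mouse.GetState();

    if (pad.DPad.Down == ButtonState.Pressed || keys.IsKeyDown(Keys.Down))
        return MenuAction.Down;
    if (pad.DPad.Up == ButtonState.Pressed || keys.IsKeyDown(Keys.Up))
        return MenuAction.Up;
    if (pad.Buttons.A == ButtonState.Pressed || keys.IsKeyDown(Keys.Enter) ||
        (mouse.LeftButton == ButtonState.Pressed &&
         previousMouse.LeftButton == ButtonState.Released))
        return MenuAction.Select;

    return MenuAction.None;
}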
XNA has a chance to become something really great that would be very powerful for a large class of games. It remains to be seen if Microsoft will let it.
Sunday, June 21, 2009
Leaky abstractions in XNA
So continuing my exploration of XNA, this weekend I did some more work on my little toy project.
The first thing I did was get it running on the 360. I was happy to see that XNA managed to deal with my various render targets, including one MRT, without too much trouble, and performance was far better on the 360 than on my laptop: about 200 fps on the 360 versus 60 on the laptop.
There was one issue worth noting.
First, some background on the deferred lighting approach I am using (a rough sketch of the pass structure follows the list):
- Render normal + depth into a G buffer for all primitives. Depth writes and tests are enabled in this step.
- Render the lights into a lighting buffer using the G buffer. Depth writes and tests are disabled for this pass.
- Apply the lighting to each primitive using the lighting from step 2 while computing albedo and (eventually) other material properties on the fly. Depth tests are enabled but not writes.
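Concretely, the pass structure looks something like this in XNA 3.x. This is a sketch, not my actual Draw method: gBufferTarget, lightBuffer, and the Draw* helpers are placeholder names.

protected override void Draw(GameTime gameTime)
{
    // 1. G-buffer: normals + depth, depth test and write enabled.
    GraphicsDevice.SetRenderTarget(0, gBufferTarget);
    GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer,
                         Color.Black, 1.0f, 0);
    GraphicsDevice.RenderState.DepthBufferEnable = true;
    GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
    DrawGeometryToGBuffer();

    // 2. Lighting buffer: light volumes read the G-buffer, depth disabled.
    GraphicsDevice.SetRenderTarget(0, lightBuffer);
    GraphicsDevice.Clear(Color.Black);
    GraphicsDevice.RenderState.DepthBufferEnable = false;
    DrawLightVolumes();

    // 3. Material pass: re-render geometry, applying the lighting buffer.
    //    Depth tests on, writes off -- which assumes the depth laid down in
    //    step 1 is still around. That assumption is exactly what breaks on the 360.
    GraphicsDevice.SetRenderTarget(0, null);
    GraphicsDevice.RenderState.DepthBufferEnable = true;
    GraphicsDevice.RenderState.DepthBufferWriteEnable = false;
    DrawGeometryWithLighting();

    base.Draw(gameTime);
}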
So the first problem on the 360 is that XNA blows away the depth buffer I lay down in step 1 by the time I get to step 3. After some searching on the internets, I discovered this is expected behavior.
I tried setting my render targets to PreserveContents, which does work, but it's completely wasteful, since I don't give a hoot about restoring the actual color contents of any of these buffers. It dipped performance down to 150 fps.
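For reference, asking for that in XNA 3.x is just a matter of creating the target with the RenderTargetUsage overload; sizes and format here are placeholders, not my actual setup:

// PreserveContents makes XNA save and restore the target's contents across
// SetRenderTarget calls -- on the 360 that means extra resolve/restore work,
// which is presumably where the missing ~50 fps went.
RenderTarget2D gBufferTarget = new RenderTarget2D(
    GraphicsDevice, 1280, 720, 1, SurfaceFormat.HalfVector4,
    RenderTargetUsage.PreserveContents);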
My next attempt was to restore the depth buffer manually from my G-buffer. But this exhibited z-fighting, possibly because the Z stored in my G-buffer is computed slightly differently from the Z in the depth buffer, leading to small differences in the values. I didn't feel that messing around with z biasing would be a robust solution, so I abandoned this approach.
The solution I ended up choosing was to just clear the z-buffer again and reconstruct it during step #3. Since my scenes are so simple, this gets me back to just slightly under 200 fps.
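In code the workaround is tiny: at the start of step #3 I clear the depth buffer explicitly and turn depth writes back on so the material pass lays depth down again (same placeholder names as in the sketch above):

// Start of step #3: the depth from step #1 is gone anyway, so clear it
// and let the material pass rebuild it while it renders.
GraphicsDevice.SetRenderTarget(0, null);
GraphicsDevice.Clear(ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
GraphicsDevice.RenderState.DepthBufferEnable = true;
GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
DrawGeometryWithLighting();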
It's not an ideal solution, since I had in mind some uses for a stencil buffer laid down in step #1 that would accelerate step #2 (mainly, masking off unlit pixels for the skybox).
XNA's EDRAM handling is a great example of a leaky abstraction. Having only a 10 MB EDRAM buffer does make render-target management trickier, but in trying to hide it completely from XNA programmers, I think Microsoft has just made things more frustrating. The concept of a limited buffer for render targets is not that hard to get your head around, and you have to understand EDRAM anyway, since techniques that work perfectly on Windows (like what I was doing) will break on the 360. Even worse, you have no good idea *why* they break unless you understand the limitations of EDRAM and take a guess at what Microsoft is doing under the hood. So what is really being saved here? Just let me deal with EDRAM myself.
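If you want a feel for the numbers, the back-of-the-envelope math is easy: at 1280x720, a 32-bit color target is roughly 3.5 MB and a 32-bit depth/stencil buffer another 3.5 MB, so plain color + depth already uses about 7 of the 10 MB. Add a second render target, a wider format like FP16, or MSAA, and you're over budget, at which point the hardware has to tile and the contents of your targets stop surviving the way they do on a PC. Rough numbers, but the order of magnitude is the whole story.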
Tuesday, December 30, 2008
XNA Studio 3.0: First impressions
I've been doing some graphics prototyping in my spare time lately, and I decided to see if there was a better way to go about it.
I've used RenderMonkey in the past, and while it certainly has its uses, ultimately it left me disappointed. For a straightforward shader it's fine, but when you start getting into more complicated techniques it starts to break down. After clicking a billion buttons to get my render targets and passes set up the way I needed, I really wished I could just write some code. The lack of any ability to compute anything on the CPU side is what really frustrated me: I ended up computing a lot of things shader-side that in a real application would be done on the CPU, and it needlessly complicated the shaders.
Another avenue I've pursued is starting from the DirectX sample framework or an OpenGL sample and working from there. Even in these simplified environments, I find you end up doing a lot more bookkeeping than actual code. Additionally, OpenGL seems very unstable on my laptop -- even vanilla samples are crashing in the Nvidia DLLs.
The last couple days I've been playing around with Microsoft's XNA Studio 3.0 as a graphics prototyping tool, and my impression so far is very favorable. The API is pretty straightforward, and for the most part the abstractions seem in the right place. Porting the current thing I'm working on from C++ to C# took no time at all, and so far I've spent much more time writing meaningful code rather than working on scaffolding.
The GameComponent architecture they have is interesting -- for example, I found myself needing an orbit camera. Rather than write one myself, I just grabbed a component someone else has written. It was one of those few times where the code just dropped in. The only drawback is they haven't implemented taking input from the 360 controller, but that's easy enough for me to write and a lot less involved than doing the whole thing.
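Hooking the controller up is only a handful of lines in a component's Update. Something along these lines, where OrbitCamera, Yaw, and Pitch are stand-ins for the component I grabbed rather than its real interface:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

class GamePadOrbitInput : GameComponent
{
    private readonly OrbitCamera camera;

    public GamePadOrbitInput(Game game, OrbitCamera camera) : base(game)
    {
        this.camera = camera;
    }

    public override void Update(GameTime gameTime)
    {
        // Right stick orbits the camera; scale by frame time so it is framerate-independent.
        GamePadState pad = GamePad.GetState(PlayerIndex.One);
        float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

        camera.Yaw   += pad.ThumbSticks.Right.X * 2.0f * dt;
        camera.Pitch += pad.ThumbSticks.Right.Y * 2.0f * dt;

        base.Update(gameTime);
    }
}

Register it with Components.Add and it gets updated along with everything else, which is the appeal of the GameComponent model in the first place.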
I was also surprised how easy it was to get my little project up and running on my Xbox 360. I didn't have to make any code changes, and it all Just Works. The debugging is solid, but experienced Xbox 360 developers will miss all the nice tools you get with the real SDK. It'd be really nice if Microsoft released a lightweight version of PIX for XNA that worked with the 360, but I guess you can't have everything.
There have been some minor annoyances. Some of the C# syntax for dealing with vertex arrays can be cumbersome -- new'ing a Vector3 never feels right to me. You also lose access to some hardware features: for example, for some reason it thinks I can't create a floating-point depth buffer on my laptop, when I'm very certain the GPU I have can handle that.
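To show what I mean about the vertex syntax, here's a trivial made-up quad corner, not code from the project; every field ends up going through a constructor:

// Each assignment news up Vector3/Vector2 values -- heavier-looking than
// the equivalent struct fills in C++.
VertexPositionNormalTexture[] verts = new VertexPositionNormalTexture[4];
verts[0] = new VertexPositionNormalTexture(
    new Vector3(-1.0f, 0.0f, -1.0f),   // position
    new Vector3( 0.0f, 1.0f,  0.0f),   // normal
    new Vector2( 0.0f, 0.0f));         // texture coordinate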
On the 360 side, they simplify a lot of the hardware details, but this has limitations. I can't find any way to resolve the depth buffer on the 360 to a texture in XNA. While these limitations are understandable given XNA's intended audience, they are somewhat annoying.
All in all, I think it is a pretty good framework so far. I don't think you are going to max out the hardware with it, but for a large category of games it will work really well.