Archive for the ‘Gamedev’ Category

XNA as a … benchmarking tool?

Wednesday, August 30th, 2006

Now that I think of it, the upcoming release of XNA Studio Express is the first opportunity for the public to run the exact same code (well, almost, given that there’s an invisible runtime beneath your code, which is NOT the same) on a retail Xbox 360 and a PC. This means you can get a fairly accurate comparison of the low-level capabilities of Xenos vs. your own PC GPU - it should be easy to devise tests which stress only shader power, or only alphablending bandwidth, or only geometry throughput, while taking the CPU out of the equation. These results would be useful for all PC developers planning to eventually target the 360. I vaguely remember something about the EULA for one of the previous .NET frameworks explicitly forbidding publishing benchmark results, but with XNA Studio in open circulation and something like 10 million 360s in the wild I don’t think such results could be kept under wraps.

It should also be possible to benchmark the CPU running MSIL, but that would only be of use if you’re actually planning to make your game in C#/XNA framework, which, while probably a reasonable plan for simpler XBLA titles, I don’t expect to become popular with full games.

Update: Well, it seems the EULA for XNA expressly forbids disclosing benchmark results without Microsoft’s consent. You won’t see me doing this, I promise ;-)

LOD bias’ raison d’etre

Thursday, February 16th, 2006

Mipmap LOD bias is usually used for two things: one, by smartass developers, to artificially boost apparent texture sharpness, simultaneously producing hideous aliasing and thrashing the texture caches; two, by smartass driver writers, as a cheap way to improve performance. I’ve seen our games’ textures reduced to 16×16 smudges when a laptop’s 9600 switched to “performance mode” while running on batteries, presumably to conserve power - it was a disaster, and cost me a good chunk of debugging time until I figured it out.

Today I’ve found a legitimate use of the mipmap LOD bias. I’m implementing very-high-resolution screenshots (as in “print quality resolution”, or in “fscking too much for the GPU resolution”) using the method outlined by Steve Rabin in Game Programming Gems 4: I render and store NxN resolution screenshots, offset by M/N of the screen pixel resolution in each dimension by tweaking the matrices, then interleave them, taking one pixel of each input image in every NxN block of the output image. So if you have an object which is 5 pixels tall in the original resolution, you’ll have it 5xN pixels tall in the output image.

The problem is that the GPU has picked mipmaps for the object as if it were 5 pixels tall, and these become too blurry when the object suddenly becomes 5xN pixels; hence, a -log2(N) bias is required to restore its normal look. Direct3D seems to allow negative bias down to -3.0, so you can effectively use this technique at up to 8×8 the screen resolution; if you use a reasonable 1600×1200 as your screen resolution and assume a print quality of 300 dpi, this means you can get reasonable prints up to 40″x30″ (100 cm x 75 cm for metrophiles). Of course, even if you upsample 16×16 and give a negative LOD bias of “only” 3.0, I doubt anyone will be able to tell the “wrong” mipmaps on a 300 dpi print :-)
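The jittered-render interleaving and the matching LOD bias described above can be sketched roughly as follows (pure Python, with images as flat row-major pixel lists; the function names and the exact offset-to-output-position mapping are my own illustration, not Rabin’s code, and the mapping’s sign convention depends on how you tweak the projection matrix):

```python
import math

def subpixel_offsets(n):
    """Sub-pixel offsets (in fractions of one screen pixel) for the
    n*n jittered renders: each render is shifted by m/n of a pixel
    in each axis via the projection matrix (m = 0..n-1)."""
    return [(mx / n, my / n) for my in range(n) for mx in range(n)]

def interleave(tiles, w, h, n):
    """Combine n*n screen-resolution renders (each a flat list of w*h
    pixels) into one (w*n) x (h*n) image: every n x n block of the
    output takes exactly one pixel from each input render."""
    out = [None] * (w * n * h * n)
    for idx, tile in enumerate(tiles):
        mx, my = idx % n, idx // n
        for y in range(h):
            for x in range(w):
                out[(y * n + my) * (w * n) + (x * n + mx)] = tile[y * w + x]
    return out

def lod_bias(n):
    """The GPU chose mipmaps for the screen-resolution render, so
    sharpen by -log2(n) to match the n-times-larger output."""
    return -math.log2(n)
```

For the 8×8 case, `lod_bias(8)` gives exactly the -3.0 that Direct3D still accepts.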

At these resolutions, a 32-bit OS starts to show its limits: the 1600×1200 image, upsampled 8×8 is 350 MB; if you need 16×16, this becomes 1.4 GB, and might be somewhat problematic to fit in the 2 GB address space provided to a 32-bit application by Windows.
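The arithmetic behind those figures, assuming a 24-bit (3 bytes per pixel) image, which is what makes the numbers come out to ~350 MB and ~1.4 GB:

```python
def raw_image_mb(width, height, upsample, bytes_per_pixel=3):
    """Size in MB of the interleaved output image: the screen
    resolution scaled upsample x upsample, at the given pixel depth."""
    return width * height * upsample * upsample * bytes_per_pixel / 2**20

# 1600x1200 at 8x8  -> ~352 MB
# 1600x1200 at 16x16 -> ~1406 MB, i.e. ~1.4 GB
```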

Buck a point

Friday, February 10th, 2006

I always wondered why on Earth games typically have “inflated” scoring systems: the tiniest thing you can do is awarded not with 1 point, but with, e.g., 100. Here’s a reasonable theory by Kim Pallister:

Kim’s First Rule of Casual Game Design: Points awarded should correlate to real-world currency and the reward for an accomplishment should correlate to the value to the user.

OK, so what does this mean?
First off, when I say “first rule” I don’t mean most important. I just mean it’s the first I’ve come up with :-)
Secondly, when I say correlate to real-world currency, I mean the basic unit. E.g. 1 point = $1.
Finally, when I say value to user, I mean that you should think about things like “if these points were dollars, how would I feel about accomplishing that act and winning that amount?”

The Making of SOTC

Thursday, February 2nd, 2006

Finally, some kind soul has translated the lecture on The Making of “Shadow Of The Colossus” for us round-eyed barbarians.

Where is my NVPerfHUD64?

Wednesday, February 1st, 2006

After almost two months of developing exclusively on Windows XP 64-bit Edition, I’ve hit the first major missing/incompatible piece of software: the NVIDIA instrumented drivers needed by NVPerfHUD. Please, NVIDIA!

Stop burying Moore’s Law, please

Thursday, January 19th, 2006

Here’s something on the supposed end of Moore’s Law from a guy who knows what he’s talking about (formerly at Intel, now at Microsoft):

Moore’s law was never about Mhz or Ghz. It was about the number of transistors on a piece of silicon doubling every 18 months. The Mhz just came for free along with that. Though the Mhz train may have come off the track so to speak, the number of transistors keeps on increasing. Yes, it’s via more cores, and I’ll get to that, but the point is, it’s still increasing.

The evolution of game data

Tuesday, December 6th, 2005

In the neverending battle for more configuration and less compilation, game data goes up a seemingly infinite ladder of complexity:

  • Int constant in the C++ source
  • Int constant in a config file
  • String constant in a config file
  • A script returning a string value, in a config file, evaluated to produce the string value

A good scripting language should collapse all of this into a single form: a single line of text somewhere saying “Gaul.Swordsman.MaxHealth = 80”. You wouldn’t write it in C++, because a good scripting language should never require you to write game code in C++. A good scripting language should be a good config-file language. A good scripting language should make the difference between strings and ints trivial, if not nonexistent. And finally, a good scripting language should make it possible to write “Gaul.Swordsman.MaxHealth = Gaul.BaseHealth * 0.8” without too much trouble (e.g. without having to explicitly include “GaulBaseParams.h” everywhere).
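To make the idea concrete, here is a deliberately tiny sketch of such a config language in Python: dotted keys in a flat store, with values that may be numeric expressions referencing previously defined keys. The class and key names are illustrative only, and this toy handles numbers, not the full string/int unification argued for above:

```python
import re

class Config:
    """Toy config store: dotted keys mapping to numbers, where the
    right-hand side may reference any previously defined key."""
    _name = re.compile(r"[A-Za-z_][\w.]*")

    def __init__(self):
        self.values = {}

    def set(self, line):
        key, expr = (s.strip() for s in line.split("=", 1))
        # Substitute previously defined dotted names with their
        # values, then evaluate the remaining numeric expression.
        expr = self._name.sub(lambda m: repr(self.values[m.group(0)]), expr)
        self.values[key] = eval(expr, {"__builtins__": {}})

cfg = Config()
cfg.set("Gaul.BaseHealth = 100")
cfg.set("Gaul.Swordsman.MaxHealth = Gaul.BaseHealth * 0.8")
# cfg.values["Gaul.Swordsman.MaxHealth"] is now 80.0
```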

Subversion for Game Development

Friday, December 2nd, 2005

There are two things I need from Subversion to make it the perfect (for the price) versioning tool for huge art assets: wire compression for svnserve and getting rid of the pristine copies. The first sounds simple enough, but is dismissed by the Subversion developers with “just use Apache”; the second request elicits mumbles about refactoring the working copy code, the mention of a three-digit issue number (meaning a very old, entrenched issue which nobody wants to tackle), and a not-very-reassuring reminder that several people have tried to do this and disappeared without a trace.

Paul Cezanne’s Still Life: 7/10

Friday, November 11th, 2005

Tea Leaves: Dancing About Architecture:

Imagine if art critics reviewed paintings the way game critics review games:

Still Life with Plate of Cherries
Paul Cezanne’s Still Life with Plate of Cherries is a clear evolution from his previous outing in the Still Life series, Apples, Peaches, Pears, and Grapes. The area of the canvas has more than doubled since the previous generation, allowing much finer detail to be discerned. Furthermore, the improved technique used by Cezanne now allows him to depict many more objects on the canvas simultaneously. Unfortunately, Cezanne’s rendering technique still leaves a lot to be desired: in particular, the draw distance in his works tends to leave the backgrounds overly blurry, compared to some of the recent releases by Matisse, who somehow manages to squeeze tack-sharp imaging out of the same pigments.

Spec#

Friday, November 11th, 2005

Right on the heels of Herb Sutter’s talk about Concur, here’s another interesting project from Microsoft Research: Spec#, a language + static analyzer + runtime analyzer which allows you to explicitly declare invariants, pre- and post-conditions in your code, and have them checked as much as possible during compilation, and otherwise at run time. It comes with a VS.NET plugin which, Eclipse-style, underlines the dubious constructs in your code even as you type.
Again, when do I get the compiler?
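For a feel of what the run-time half of such contract checking looks like (Spec# itself is a C# extension with its own `requires`/`ensures` syntax; this is just my own illustration of the idea, sketched as Python decorators with made-up function names):

```python
import functools

def requires(pred, msg="precondition failed"):
    """Check a predicate over the arguments before the call runs."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            assert pred(*args, **kwargs), msg
            return fn(*args, **kwargs)
        return wrapper
    return deco

def ensures(pred, msg="postcondition failed"):
    """Check a predicate over the return value after the call."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            assert pred(result), msg
            return result
        return wrapper
    return deco

@requires(lambda hp, dmg: dmg >= 0, "damage must be non-negative")
@ensures(lambda new_hp: new_hp >= 0, "health can't go negative")
def apply_damage(hp, dmg):
    return max(0, hp - dmg)
```

The point of Spec#, of course, is that a static analyzer discharges as many of these checks as it can at compile time, so you only pay at run time for what it couldn’t prove.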