30 July, 2011

If you're asking for crazy overtime, apologize first

Here I'm going to use Team Bondi's emails as they appear in an article on Eurogamer. But if you've been working in this industry for a while, chances are you've seen something very similar at least once. I'm using the L.A. Noire developers as an example just because I found these emails today, but we all know excessive overtime is common; other recent examples that went public are the story of Kaos Studios, the letter from the spouses of Rockstar San Diego and, of course, the one that "started" them all, the EA_Spouse blog.

Extract from an email to the team:
[...]This is an amazing result for 4 hard years and I'm proud of what we've achieved this far. The game is huge in size and scope and will be a real breakthrough. We have almost re-invented the adventure game whilst including the action elements that people expect in a modern game. Its these action elements that we really need to tighten up. 
That said, anyone who has worked on a game or film before knows that to make a AAA title is going to take a big push at the end to get it complete.  This is not uncommon within our industry and while it's not ideal, it is what we need to do to get a polished result to the standard of the competition. To achieve this result we're introducing two new working practices, effective immediately:[...] The hours on Saturday will be compensated through the weekend working scheme, giving everyone the opportunity to take payment at the end of the project, or an extended holiday period. As I said this isn't ideal, but it is typical of what it takes to get a game finished.

Another example:
[...]That means that everyone is required to keep going until the milestone ships or your lead informs you that you have done all that you can for N10 and sub-alpha. Specifically this means in the last two weeks of the milestone you can expect pretty long days. It's "one in all in" until we get the Milestone shipped and get the game ready for testing. We need teamwork to get the game finished to  the quality that we are after and that means being here to help a tester, a designer, an artist or programmer who needs your support to get their work finished. You are not required to work round the clock everyday up until the milestone ships but for the next six months we will need more from you than we ever have asked from you in the past. That's the nature of getting a AAA title out the door and into the hands of the playing public. Getting a result with this game means that the public finally get to enjoy the fruits of your hard work. It also means that we get to take a good break later this year and come back refreshed to work on some exciting new ideas for future projects.[...]

Again:
[...]To complete the project at this time, we require an extension of the ordinary hours of duty and we are asking people to give more hours. Putting a product to market of this size, scale and quality is going to require extra effort from everyone and while we are asking for it, and not saying it's easy, the company is perfectly happy to be flexible of commitments you have outside of the organization[...]

So there is a pattern to this. All these emails basically say:
  • We're making a great product
  • We should all be very proud, we're all in the same boat, let's do this
  • We need you to work harder
  • It sucks but this is how AAA games are made.
Now, of course, in many cases managers who say this don't actually believe it's true. They know the project is not going in the right direction, and they try to keep morale up by minimizing the problems ("it's the way AAA games are made") and by rallying the last energies of the team around the positive aspects of the product ("this is a great thing, you've done an amazing job").

Now, while this can resonate well with some employees, unfortunately it's far from ideal for everyone. Remember that in a game company your audience is very varied, and when asking for sacrifices you want to motivate, but also not to piss anyone off, especially not your best talent! The problem is that with this kind of communication you leave the more inquisitive types wondering whether this is just a mediocre, stereotyped way of keeping morale up, or whether the company really does lack the vision and the ability to make AAA titles in general.

There are plenty of studies about overtime and its risks; I won't get into that discussion. So let's even assume that you're not an idiot and that you're not pushing your employees past the point of diminishing returns, actually hurting the project by asking them to work more.

You need overtime, you believe it's going to be positive, and you need to communicate this decision to the team. You have to remember that you're asking for a sacrifice, even if you're in one of the few countries or companies in the industry that pay for it. You don't own your employees, and you surely don't want to lose your smartest talent.

One of the worst things to say to your inquisitive audience (especially, I might say, engineers) is that "it's not uncommon" for AAA games to require such rough stretches, because you're dealing with people who will not simply accept that fact (which is indeed, and sadly, true today), but will ask themselves a few follow-up questions faster than a journalist.
What else is not uncommon in this industry? Companies that do not generate revenue? That fail to meet their targets? Massive layoffs and studio closures? And even if crunch is common, how does that relate to good products? Are the best studios, the ones generating the most revenue and the greatest games, using crunch? Maybe I should polish my resume, I hear that [California, UK, Canada, Germany...] is wonderful this time of the year...

The fact is, we as an industry have a lot of growing up to do. Crunch is common, but it's a mistake, and many people know this; in particular, some of the more experienced talent will know that's not the way you make great games.
Personally, I've seen great games made with zero overtime, bad ones made with lots of overtime, but also great games made with overtime and, more importantly, bad games that made money and great games that didn't... We still have a lot to learn, as an industry, about how to make good games, how to make them on time and how to make money with them...
Every time you ask for crunch, you as a manager have failed. It can happen, and it might be something you'll need to ask for, but at the very least, you should apologize.

Let me suggest a different, more humble and realistic, communication style:
  • This project was hard, we had to face many hurdles. We should have been better prepared.
  • Despite the good work done by the team, we're not where we predicted we would be at this time.
  • We apologize for not being able to create a schedule that avoided this; we have learned some lessons from this and we will definitely do better next time.
  • We believe in the product, and this studio needs to deliver a great game in order to go forward.
  • Let's all help ship this product and make it a huge success. We'll need to work harder, and we'll try our best to compensate you for this extra effort after the end of the project.
Even better, provide examples of what went wrong, of what you are going to do to make things better next time, and of why you still strongly believe in the product and think that this effort will indeed achieve the objectives the company has set for it.
People need to believe that the future will be better: there were some problems, not everything went as well as you wanted, but you know how to fix things. That there is a reason to stay and work with you! Just saying "this is great", "we're making something amazing", "let's go team" can be meaningless, if not counterproductive, when you present no evidence.

With this in mind, let's see how the first email could have been worded better:
This is an amazing result for 4 hard years and I'm proud of what we've achieved this far! We wanted to create something amazing and accepted the many risks involved in making an innovative product. The game is huge in size and scope and will be a real breakthrough. We have almost re-invented the adventure game whilst including the action elements that people expect in a modern game. 
Unfortunately, even if the product looks good, we failed to account for everything that creating such an ambitious project entailed; we have learned a lot about how to make such a game during these years. Now, as the deadline is approaching, we still have some issues in our action elements that we really need to tighten up.
We know that crunching is bad for everyone, and we'll offer some bonus holiday time at the end of the project as a way of partially compensating for this extra effort we're asking of you. The studio needs to deliver a polished result to the standard of the competition in order to grow and move forward. To achieve this result we're introducing two new working practices, effective immediately: [...] The hours on Saturday will be compensated through the weekend working scheme, giving everyone the opportunity to take payment at the end of the project, or an extended holiday period.

18 July, 2011

Everyday Carry

A few weeks ago I was shopping around for a small utility knife. I went on YouTube and searched for some reviews; there are really lots of resources.
In the end, I didn't buy anything. I even went to a local shop, but nothing I saw looked like something a normal, decent human being would really carry without shame. And maybe that's for the better, because I've already spent enough money in my past collecting fountain pens and cameras; I don't really need another compulsion.

In all this searching though, I learned quite a bit about knives, steels and blade shapes. And about the average knife-carrying person. One thing that seems to be popular is to create an "everyday carry" or EDC "system": a collection of things that a person comfortably carries with them every day.
Now, on the internet most of these are about guns (usually a pair, just to be "safe") and knives (a pair there too) and other "tactical" stuff; I guess these people really need to compensate for something (intellect? I bet they also carry a very small penis).
But I also found this EDC stuff interesting, because what we carry every day really tells a bit about us: who we are and what we need, and who we would like to be and what we think we need (but don't).
Now, I'm fully aware that no one will care, but I started this blog for myself and I still keep it that way, so I thought it would be cool to have a snapshot of this aspect of me at thirty. Here it is, my current EDC :)


  • I usually wear a nice jacket (if the weather permits) and a pair of running shoes. In the picture, the jacket is from Armani Collezioni and the shoes are Reebok RealFlex.
    • Why? I've always liked jackets and a casual look, and I love House :D
  • Eyeglasses. My current red "Harry Potter"-looking ones are from Armani and they are photochromic.
    • Why? Because I need them. My eyesight is not bad and I still play sports without glasses. I never tried contacts and I don't care to try them. My father bought me this pair last time I went to Italy. They look a bit silly but I like them, and the photochromic stuff works really well (unlike the early attempts at that tech).
  • A cloth bag.
    • Why? Because under my sink I have too many plastic ones, so I always try not to carry home more of these.
  • An iPad (first gen). 
    • Why? It's the single most important gadget I've ever owned. It replaced the big, heavy mess of printed paper and magazines I used to carry around every day. I use iAnnotatePDF and Reeder every day.
  • A Rhodia notebook.
    • Why? Writing on the iPad is a pain and I like scribbling and drawing. I also carry another, less fancy notepad from some company; I found it in my mail. The Rhodia is cheaper and better (quality paper) than a Moleskine.
  • Pens and pencils. Currently: a red Sharpie, a red medium-point Pilot liquid-ink pen, a Bic pencil that I stole from EA, a pink Uni KuruToga pencil, a very fine red Japanese pen I got from my girlfriend, a big Faber-Castell eraser and a Griffin iPad stylus.
    • Why? Because I collect them, and I love writing and drawing. I don't really use them often (as I don't often use the pad) because at home and at work I have more writing instruments and pads and sticky notes and so on. The iPad stylus works decently, but it does not make writing on the iPad really enjoyable, and even drawing does not feel great. I used to draw a lot on my old WinCE phone with a small stylus and a very simple drawing application; somehow the iPad feels worse.
  • A tin box with various cables.
    • Why? The battery on my phone does not last long, so I carry its USB cable. I also carry some decent Plantronics phone earbuds that I rarely use, because I don't really talk much on the phone when I'm out, and at work I have better open-can earphones for music and so on.
  • An Axiom seat bag for my bike.
    • Why? It fits my EDC camera well and it was cheap :D
  • A super-cheap ParkTool hex wrench set.
    • Why? You might always need to tune something.
  • My 3DS
    • Why? Because I bought it. Because it had 3D and it was cool. So far I've never played with it. The best game available is Zelda, and I hate it; I hate Nintendo for making a cheap port and cashing in with zero effort. I'll probably sell this sucker.
  • A Lacie USB key, 64gb. Similar to this one.
    • Why? I always need to transfer stuff around, and I like to always carry some programs and pictures with me. This USB key is terribly slow, so I recently purchased a small pocket hard drive for when I need to transfer bigger amounts of data. The nice thing is that it's all metal; it does not have a USB connector soldered to a board that can easily bend (happens to me all the time with conventional USB keys).
  • My "small" digital camera. A Panasonic G1 with a Leica 20mm 1.7.
    • Why? I love photography and this camera is really great (especially paired with that lens; with the standard zoom it's half as good). I have an adapter to mount all my older Leica lenses on it. Unfortunately it's too big to fit in my bag, so I carry it on my bike, which is bad, as really the best camera is the one you always have with you!
  • A Samsung Galaxy S t959 Android phone.
    • Why? I don't really care much about phones. I used to when I was a teen, but when I relocated to Canada I just bought a very cheap ($40) prepaid one. The problem is that prepaid plans are stupidly expensive here, so when Wind came out with an incredibly cheap $40 all-unlimited plan I switched. Now that I had unlimited internet, I needed a decent phone to use it. This is a T-Mobile (US) phone that works on the frequencies Wind uses, and I found that Android is really better for phones than iOS. I had to work on it quite a bit, but now I really like it!
  • A Leatherman Squirt P4 pocket tool
    • Why? It's very cool, with a small knife and pliers it's really the most useful tool of that size I've found.
  • A Tucano bag.
    • Why? After I bought my iPad I started carrying way less stuff with me (magazines, papers...) so I bought this bag to replace the small backpack I used to carry before!
  • And of course my wallet, home and bike keys.

14 July, 2011

Querying PDBs

We're in the final stages of our game, which means almost everyone is chasing and fixing crashes, often debugging retail builds from core dumps and dealing with nasty problems, most of the time without much aid from the debugger.

Sometimes you're just looking at the memory, trying to identify structures: executable regions, virtual tables, floats and so on. From there you might hope to recover the type of the variable you're looking at in memory, and today we got an email from people trying to do exactly that, chasing a structure from some sparse hints.

So I thought, how cool would it be if we could execute queries on the debug symbols to find such things!
Well, it turns out it's really, really easy. One great tool that does something similar is SymbolSort; it's written in C# and it comes with source code! Cool!

SymbolSort queries the PDB for global data symbols; here we are interested in global user-defined types and their members, which is a pretty similar thing. Also, Microsoft provides a Debug Interface Access (DIA) SDK wrapped in a COM DLL that does pretty much all you need, and it's trivial to call from C# or similar.

Of course, debugging is only a small fraction of what you can do with PDBs, so this is really just an example to show how easy it is. From here you can do many nifty things, like code-generating reflection data, cross-referencing with profile captures to do coverage analysis, serialization modules and so on (for instance, see the small member-layout dump sketched after the listing).


Disclaimer: I wrote this in half an hour. It's probably wrong and surely ugly. Play with it but don't trust it! It's just meant as an example of how easy it is to query PDBs via msdia.
In fact, the test program I wrote is a bit more complex and complete than the one I posted here; this is a stripped-down version that fits the blog better and IMHO is a better starting point. Also, if you really plan to chase structures with this, keep in mind that this version does not recursively search into member structures and inherited types.


P.S. It turned out that this particular bug was caused by bad memory (the actual memory in the hardware - it happens quite often) so this exercise was ultimately useless :)

using System;
using System.Collections.Generic;
using Dia2Lib; // we need a reference to msdia90.dll in the project

namespace Test
{
    class Program
    {
        private static void GetSymbols(IDiaSymbol root, List<IDiaSymbol> symbols, SymTagEnum symTag)
        {
            IDiaEnumSymbols enumSymbols;
            root.findChildren(symTag, null, 0, out enumSymbols);

            for (;;)
            {
                uint numFetched = 1;
                IDiaSymbol diaSymbol;
                enumSymbols.Next(numFetched, out diaSymbol, out numFetched);
                if (diaSymbol == null || numFetched < 1)
                    break;

                symbols.Add(diaSymbol);
            }
        }

        private static bool IsMemberPointer(IDiaSymbol s) // Quick'n'dirty based on observation of 1 (one) pointer, I'm sure there are better ways
        {
            return ((s.type != null) &&
                    (s.type.name == null) &&
                    (s.type.type != null) &&
                    (s.type.type.name != null)
                );
        }

        private static bool IsMemberPrimitive(IDiaSymbol s, ulong length) // I'm not entirely sure about this one either :)
        {
            return ((s.type == null) && s.length == length);
        }

        private static bool SymbolPredicate(IDiaSymbol s)
        {   // see the IDiaSymbol documentation here: http://msdn.microsoft.com/en-us/library/w0edf0x4.aspx

            // It's around this size...
            if (!((s.length > 62) && (s.length < 67)))
                return false;

            List<IDiaSymbol> childSymbols = new List<IDiaSymbol>(); // Note: from what I've seen, the symbols are arranged in the order they appear in the class/structure
            GetSymbols(s, childSymbols, Dia2Lib.SymTagEnum.SymTagData); // SymTagData will get us all the member variables, SymTagNull would get us everything

            // It has to have sub-symbols (fields)
            if (childSymbols.Count == 0)
                return false;

            // One has to be a matrix
            bool hasMatrix = false;
            foreach (IDiaSymbol subS in childSymbols)
                if ((subS.offset < 8) && // It should be one of the first members in memory
                    (subS.type != null) && // It's not a primitive type, so its type has to be a symbol
                    (subS.type.name != null) && 
                    (subS.type.name.ToLower().Contains("matrix4"))                    
                )
                    hasMatrix = true;
            if (!hasMatrix) return false;

            // Another one is a pointer to a matrix...
            bool hasPointer = false;
            for (Int32 i = 0; i < childSymbols.Count; i++)
            {
                IDiaSymbol subS = childSymbols[i];
                if (IsMemberPointer(subS)
                    && (subS.type.type.name.ToLower().Contains("matrix4"))
                )
                {   // ...followed by a 4-byte integer
                    if (i < childSymbols.Count - 1) // make sure childSymbols[i + 1] exists
                        if (IsMemberPrimitive(childSymbols[i + 1], 4))
                            hasPointer = true;
                }
            }
            if (!hasPointer) return false;

            return true;
        }

        static void Main(string[] args)
        {
            DiaSourceClass diaSource = new DiaSourceClass();

            diaSource.loadDataFromPdb("game360_release.pdb");
            //diaSource.loadDataForExe(filename, searchPath, null);

            IDiaSession diaSession;
            diaSource.openSession(out diaSession);

            IDiaSymbol globalScope = diaSession.globalScope;

            List<IDiaSymbol> globalSymbols = new List<IDiaSymbol>();
            GetSymbols(globalScope, globalSymbols, Dia2Lib.SymTagEnum.SymTagUDT /* user defined types! */);

            List<IDiaSymbol> matchingSymbols = globalSymbols.FindAll(SymbolPredicate);

            foreach (IDiaSymbol s in matchingSymbols)
            {
                if (s.name != null)
                    System.Console.WriteLine(s.name);
            }
        }
    }
}
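
As a taste of the "code-generating reflection" idea mentioned above, here is an even smaller sketch. Same disclaimers apply; it reuses the GetSymbols helper from the listing, and DumpLayout is just a name I made up. It dumps the member layout of every user-defined type; emitting reflection tables or serialization code instead of plain text is mostly a matter of printing something else:

        // Minimal member-layout dump, reusing GetSymbols from the listing above.
        private static void DumpLayout(IDiaSymbol globalScope)
        {
            List<IDiaSymbol> udts = new List<IDiaSymbol>();
            GetSymbols(globalScope, udts, Dia2Lib.SymTagEnum.SymTagUDT);

            foreach (IDiaSymbol udt in udts)
            {
                if (udt.name == null)
                    continue;
                System.Console.WriteLine("{0} (size {1})", udt.name, udt.length);

                List<IDiaSymbol> members = new List<IDiaSymbol>();
                GetSymbols(udt, members, Dia2Lib.SymTagEnum.SymTagData);
                foreach (IDiaSymbol m in members) // offsets follow declaration order
                    System.Console.WriteLine("  +{0} {1} (size {2})", m.offset, m.name, m.length);
            }
        }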

09 July, 2011

In-game image calibration

Goal
To provide a pleasant gaming experience to the widest audience possible with the least amount of user tinkering! sRGB (or whatever other colour space you're authoring your game in) calibration is not our goal here.

It's highly advisable that your artists have professionally calibrated monitors and TV sets, to be able to reason about colour in a consistent way, but most users won't have them.

Honestly, this is hardly surprising in an industry that does not seem to care much about colour. I've yet to see a game studio that does colour "right". Some do care about calibrating desktops and about having some TV sets calibrated somehow, at least the ones used for important reviews.
But we're still far from the standards used in video production (enforcement of calibration across all pipelines, controlled ambient lighting etc.), and while we recently learned that doing our math in a somewhat linear space is more realistic than working in gamma, we really don't reason about colour spaces much, and we still author everything in the default 8-bit sRGB (which sucks, as it's way too narrow for authoring and does not give us the flexibility to change our mind later on, if needed).

Controls
There are three main parts we can control when it comes to image display: the image itself (our rendering), the video card output (usually in the form of a gamma curve or a lookup table) and the television set (brightness, contrast...).

Note that it's the entire imaging pipeline that matters. You can't calibrate a TV to any colour space without considering the device that will provide the image signal to the TV set.

In other words, a TV calibrated to sRGB on a PS3 might very well not be calibrated to the same space on a 360, as the two consoles might convert the framebuffer's numerical values to different output voltages (i.e. by default the 360 assumes a gamma 2.2 framebuffer sent to a gamma 2.5 TV and performs a conversion accordingly, while the PS3 does not apply any gamma curve to the framebuffer output by default).

At the very least we should make sure that the output of all our platforms does match each other, and apply a baseline correction in the hardware (gamma curve or gamma lookup, whatever you have on the graphics card you're using) if that's not the case.
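
A minimal sketch of what such a baseline correction could look like (purely illustrative, assuming the 2.2 and 2.5 exponents quoted above for the converting platform): this is the kind of ramp you'd upload to the video card on the platform that does not convert, so that its output matches the one that does:

    // Illustrative sketch (not any platform's real code): a 256-entry ramp applying
    // the same net transfer as the "converting" platform, i.e. pow(v, 2.2/2.5),
    // so both platforms end up emitting the same signal for the same framebuffer value.
    static byte[] BuildBaselineRamp()
    {
        byte[] ramp = new byte[256];
        for (int i = 0; i < 256; i++)
        {
            double v = i / 255.0; // framebuffer value, assumed gamma 2.2 encoded
            ramp[i] = (byte)System.Math.Round(System.Math.Pow(v, 2.2 / 2.5) * 255.0);
        }
        return ramp;
    }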

Ideally, we should have our workstations calibrated to sRGB (or whatever colour space we want) and use R709-calibrated displays (which are output-device independent, as R709 is the standard used in HDMI and in HDTV media in general, and conveniently shares the same primary chromaticities with sRGB) to make sure our framebuffers are interpreted as the same space (that is to say, that the numbers in them, when viewed on an R709 device, correspond to the colours in the space we want to use), or apply a conversion curve in the hardware if not. But this is another story...

Our enemy
So we have options. Now what? There are plenty of things we can do with our image, from a simple linear brightness/contrast (a multiply-add, what Photoshop does if you tick "use legacy" in its brightness/contrast tool) to better non-linear ones (gamma and curves) to more exotic variants (i.e. colour temperature, rendering exposure, changing your tone mapping operators...).
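
To make the two simplest options concrete, here is a minimal sketch of both, operating on a normalized value in [0,1] (my own illustration, not code from any shipped pipeline):

    // Linear "legacy" brightness/contrast: a multiply-add around mid-gray.
    static double LinearBrightnessContrast(double v, double brightness, double contrast)
    {
        double r = (v - 0.5) * contrast + 0.5 + brightness;
        return System.Math.Max(0.0, System.Math.Min(1.0, r));
    }

    // Non-linear gamma adjustment: keeps the black and white endpoints fixed
    // and redistributes the mid-tones instead of clipping the ends of the range.
    static double GammaAdjust(double v, double gamma)
    {
        return System.Math.Pow(v, 1.0 / gamma);
    }

Note how the gamma curve leaves the endpoints of the range alone, which is a big part of why it's a more useful user control than the multiply-add, as we'll see later.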

Well, first of all we have to identify our enemy! For our game to be comfortably playable we want, at the very least, to be sure that our image will always be legible, that is to say, that the user can distinguish detail in shadows and brightly lit areas well (with the former usually being more important than the latter).

Dynamic range, and not gamma, is arguably the most important aspect of TV calibration! That does not mean changing gamma is not an option, but our priority should be tuning the dynamic range, not achieving fidelity to a given gamma curve (i.e. the sRGB 2.2-ish one).

Broadcast-safe colors
Black and white levels are the grayscale intensities at which the screen outputs respectively no light and full intensity, the two extremes of the dynamic range. That's simple. 

The problem is how to produce these levels, i.e. which signal we need to emit for a given TV to achieve full black. In theory, you write a zero in the framebuffer and you get full black out. You write something different from zero and you get something brighter than black, and you keep being able to distinguish all levels up to full intensity.

In practice, that rarely happens; even the theory is a complex mess. Most video standards define a sort of "broadcast-safe" range, that is to say, they are designed not to use all of the signal range (either voltage or bits) for the dynamic range, but to allow under- and over-shoot areas that should not be used.

Note that this does NOT mean that you should not have these values in your framebuffer; the conversion should be done by the video card, as it's part of the output standard.

Different video standards have different black levels: analog PAL defines its black level at zero IRE, while NTSC puts it at 7.5 IRE. Even digital standards are not so easy, with DV and DVD YCrCb setting black and white at 16 and 235 respectively (64 and 940 in 10-bit formats), while HDMI supports both (limited versus full range, at least in RGB mode; for YCrCb only limited range is allowed).
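
For reference, the limited-range encoding is just an affine remap of the 8-bit values; a sketch of the standard mapping (not of any specific video card's implementation):

    // Full-range [0,255] to limited, "broadcast-safe" [16,235] and back.
    static byte FullToLimited(byte v)
    {
        return (byte)System.Math.Round(16.0 + v * (235.0 - 16.0) / 255.0);
    }

    static byte LimitedToFull(byte v)
    {
        double f = (v - 16.0) * 255.0 / (235.0 - 16.0);
        return (byte)System.Math.Max(0.0, System.Math.Min(255.0, System.Math.Round(f)));
    }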

Of course, all this would be irrelevant if all devices were well behaved and calibrated to the standards. In practice that's not true, so we have to find a way to fit inside our TV's dynamic range.

How to measure
We can't use measuring devices so we have to devise some sort of calibration procedure that can be reliably executed by most people.

Most games nowadays just prompt the user with a dark logo on a dark background, asking them to make the former "barely visible", or to have parts of it barely visible while others almost disappear into the black background.

The intent there is to be sure that above a given gray level our shadow details will be readable (usually around 15/255 in framebuffer values), while keeping the overall luminosity from being pushed too high, stretching the gamma curve more than intended (that's why a second gray value is commonly included, around 6/255, which should not be readable).

It's a decent way to very crudely measure gamma, but I argue it's not really the best solution, at least not if used as the only calibration aid.

Gamma is not as crucial as dynamic range, and from my experiments some TVs, especially if set to extreme contrast values (which are far too common even in newer models, as these screens are supposed to "pop" on display in a retail store), cannot be calibrated with such patterns no matter what, and it can be difficult for the user to realize what's wrong.

An easier method is to simply display a small number of bars around the ends of the dynamic range, asking the user to tweak the controls so that they are able to distinguish all of them from each other, or at least most of them.
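
Generating the bar levels is trivial; a sketch (the counts and ranges here are my own choice, for illustration, echoing the 6/255 and 15/255 values mentioned above):

    // A few evenly spaced gray levels near one end of the range, e.g.
    // DynamicRangeBars(6, 0, 30) for the shadows and DynamicRangeBars(6, 225, 255)
    // for the highlights. The user tweaks the TV until adjacent bars are distinguishable.
    static byte[] DynamicRangeBars(int count, byte lo, byte hi)
    {
        byte[] bars = new byte[count];
        for (int i = 0; i < count; i++)
            bars[i] = (byte)(lo + i * (hi - lo) / (count - 1));
        return bars;
    }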

It is also possible to calibrate gamma using various patterns. These methods are popular for LCD monitors, but I wouldn't really try to use them on TVs: they are not so easy to explain and follow, they are not trivial to author (note: vertical lines should always be used; checkerboard patterns or horizontal lines won't work reliably, especially on CRTs), and they are not so reliable (as TVs can do more processing to such patterns than you would like, i.e. sharpening them).
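
For reference, the idea behind those gamma patterns: a 50% black/white vertical line pattern emits half of the display's full light output, so the solid gray that visually matches it reveals the display's gamma. A back-of-the-envelope sketch of the math (this is the standard technique, my own illustration of it):

    // The solid gray that should visually match a 50% black/white line pattern
    // on a display with the given gamma; roughly 186/255 at gamma 2.2. Asking the
    // user which gray matches the pattern is, in effect, measuring the gamma.
    static byte MatchingGray(double displayGamma)
    {
        return (byte)System.Math.Round(255.0 * System.Math.Pow(0.5, 1.0 / displayGamma));
    }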

What to correct
Unfortunately, you can't really hope to achieve a good image without tweaking the television set. Or at least, you can't hope to achieve that for a wide range of users, as televisions with ugly behaviours or ugly defaults are way too common.

One could think that we have a lot of control in our rendering pipeline, as we can literally apply any transformation to our values. Unfortunately, even if we do have a lot of flexibility, we don't have much precision, and thus we can't really tweak the image much without creating artifacts.

Tweaking the framebuffer output should be our last resort. We typically have only eight bits of precision there, so any transformation will worsen colour precision and can result in banding.

DirectX supports a 10-bit gamma lookup, so we have a bit more headroom there, but not that much. If you don't have that luxury, then you might need to touch the framebuffer itself, and you'll have even less "space" for corrections.
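
A quick way to visualize the precision problem (a standalone sketch, not tied to any API): count how many distinct levels survive when a gamma curve is baked through 8-bit values; every merged level is a potential band in a gradient.

    // Applying a curve through 8 bits merges levels: with gamma 2.2, noticeably
    // fewer than 256 distinct outputs remain, while a 10-bit hardware LUT keeps more.
    static int DistinctLevelsAfterGamma(double gamma)
    {
        var seen = new System.Collections.Generic.HashSet<byte>();
        for (int i = 0; i < 256; i++)
            seen.Add((byte)System.Math.Round(255.0 * System.Math.Pow(i / 255.0, 1.0 / gamma)));
        return seen.Count;
    }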

Having the user tweak only the TV settings, though, is not ideal either. Some users simply won't. And many older TV sets are not capable of achieving decent shadow definition even when tweaking their controls.

A good calibration screen...
It's an art of balancing, and in the end it depends on your user base (PCs and PC monitors, for example, are very different from TV sets, and their user base is on average more willing to tinker) and on the requirements of your game. A sports game in broad daylight might work well over a much larger set of conditions than a dark horror FPS. Some games can require stricter, objective calibration to be enjoyable, while others might want to provide settings for the user to adjust the image to their own liking, and so on.

In general you should:

  • At least have a brightness (or gamma) setting to control the hardware gamma curve; that will make the game usable on most TV sets. A gamma adjustment pretty much acts as a brightness control, and its non-linear nature is in practice way more useful than a linear multiply-add, so avoid linear brightness.
  • Present a "dynamic range" adjustment pattern, and ask the user to tweak the display settings in order to maximize the usable range.
  • Use clear, easy instructions. Usually if the whites are not distinguishable the user should tweak the contrast setting, while if the darks are not visible, brightness should be adjusted.
  • Please, use animated, blinking patterns (i.e. see the THX one here)! They are way easier to read than static ones, especially for dark patterns.
  • Make your patterns big enough to be visible and to make sure they are not influenced by the latency of the electron beam on CRTs.
  • Present your test screen against a simple black or middle-gray background, to avoid problems with the dynamic adjustment algorithms that might be present in modern televisions. This is especially important when trying to adjust the TV set to produce a good dynamic range. Game controls can either be set using test patterns on a neutral background, or be set more subjectively to the user's liking; in that case it would be best to present them overlaid on the game rendering, tunable at any moment.
  • Optionally, provide a contrast setting. Non-linear s-curves are preferable.
  • I'd also strongly advise instructing the user to pay attention to the sharpening and denoising settings, as TVs almost always do a terrible job with these cranked up. Some games need this more than others, depending on their textures, art style and amount of aliasing. For sharpness, a good idea is to display black lines or text on a middle-gray background and ask the user to lower the sharpness until the lines do not appear jaggy or have white halos around them. For denoising, something with delicate texturing should be presented, asking the user to change the settings until no smearing is visible. It's a good idea to present reference, exaggerated images of "bad settings".

07 July, 2011

Little tool for debugging with Processing

OK, this is very rough. But it's fun to play with, and it won't take much effort to turn it into a real application. The idea is to use Processing to import framebuffer data from PIX (or dumped from your game) and reconstruct a point cloud from it (we need the depth and some camera parameters...).


Having a point cloud makes it easy to navigate the scene, but it also makes it possible to visualize data in convenient ways, i.e. the sampling pattern of an SSAO effect, or lines for our normals, and so on.

This is a stupid test I did in half an hour with Processing, using Java in Eclipse, as described here: http://c0de517e.blogspot.com/2011/05/edit-and-continue-is-fun.html. I plan to expand this into a functional "inspector", capable of displaying values from the game but also, by saving image layers with different variables, of visually debugging problems in shaders (PIX) and in memory (a VS plugin? Maybe I'm going too far).


Here is the source:

import processing.core.PApplet;
import processing.core.PImage;
import processing.core.PVector;


public class FrameBufferDebugger extends PApplet 
{
    PVector deproject(int x, int y, float depth)
    {
        // Deprojecting from z-buffer to view-space; these constants are specific to a given renderer
        float deprojectX = 0.28739804f;
        float deprojectY = 0.161661401f;
        float deprojectZ = -1.0002501f;
        float deprojectW = 0.500125051f;

        float viewDepth = deprojectW / (1.f - depth + deprojectZ);
        float xf = ((float)x/(float)cloudWidth) * 2.f - 1.f;
        float yf = ((float)y/(float)cloudHeight) * 2.f - 1.f;
        xf = xf*deprojectX*viewDepth;
        yf = -yf*deprojectY*viewDepth;

        return new PVector(xf, yf, -viewDepth);
    }

    // Our point cloud
    int cloudWidth;
    int cloudHeight;
    PVector cloud[];
    int cloudAttrib0[];
    PImage cloudAttrib1;

    PVector boundingMin, boundingMax;

    public void setup()
    {
        size(800,600,P3D);
        
        // Simple parser for a x,y,depth,stencil CSV file (generated by Xbox Pix)
        String lines[] = loadStrings("depth.csv");
        String[] firstLine = splitTokens(lines[lines.length-1], ", ");
        cloudWidth = Integer.parseInt(firstLine[0])+1;
        cloudHeight = Integer.parseInt(firstLine[1])+1;
        
        boundingMin = new PVector(Float.MAX_VALUE,Float.MAX_VALUE,Float.MAX_VALUE);
        boundingMax = new PVector(-Float.MAX_VALUE,-Float.MAX_VALUE,-Float.MAX_VALUE);
        
        cloud = new PVector[cloudWidth*cloudHeight];
        cloudAttrib0 = new int[cloudWidth*cloudHeight];
        for(int i=1; i < lines.length; i++)
        {
            String[] splitLine = splitTokens(lines[i], ", ");
            int x = Integer.parseInt(splitLine[0]);
            int y = Integer.parseInt(splitLine[1]);
            float d = Float.parseFloat(splitLine[2]);
            int offset = y*cloudWidth+x;
            cloudAttrib0[offset] = Integer.parseInt(splitLine[3]);
            if(d!=0.f)
            {
                PVector pnt = deproject(x, y, d);
                cloud[offset] = pnt;

                boundingMin.x = Math.min(boundingMin.x, pnt.x); boundingMax.x = Math.max(boundingMax.x, pnt.x);
                boundingMin.y = Math.min(boundingMin.y, pnt.y); boundingMax.y = Math.max(boundingMax.y, pnt.y);
                boundingMin.z = Math.min(boundingMin.z, pnt.z); boundingMax.z = Math.max(boundingMax.z, pnt.z);
            }
        }

        cloudAttrib1 = loadImage("normals.jpg"); // It's not a smart idea to use a jpg here...
    }
    
    float cam1 = 3.14f;
    float cam2 = 0.f;
    float camR = 0.f;
    PVector cameraCenter = new PVector();

    public void mouseDragged()
    {   // Yeah... this "camera" sucks
        if(mouseButton == LEFT)
        {
            cam1 += (pmouseX-mouseX)*0.01f;
            cam2 += (pmouseY-mouseY)*0.01f;
        }
        if(mouseButton == RIGHT)
        {
            cameraCenter.x += (pmouseX-mouseX)*0.1f;
            cameraCenter.y += (pmouseY-mouseY)*0.1f;
        }
        if(mouseButton == CENTER)
        {
            float disp = ((pmouseX-mouseX)+(pmouseY-mouseY))*0.1f; 
            cameraCenter.z += disp;
        }
    }
    
    public void draw()
    {
        background(255);

        PVector boundingCenter = PVector.mult(PVector.add(boundingMax,boundingMin),0.5f);
        PVector boundingExtent = PVector.sub(boundingMax,boundingMin);

        if(camR == 0.f) 
        {
            camR = 10.f;
            cameraCenter = new PVector(boundingCenter.x, boundingCenter.y, boundingMin.z);
        }

        camera(
            cameraCenter.x + (float)(Math.sin(cam1)*Math.cos(cam2)*camR),
            cameraCenter.y + (float)(Math.sin(cam1)*Math.sin(cam2)*camR),
            cameraCenter.z + (float)(Math.cos(cam1)*camR), 
            cameraCenter.x, cameraCenter.y, cameraCenter.z,
            0,1,0
        );

        pushMatrix();
        translate(cameraCenter.x,cameraCenter.y,cameraCenter.z);
        fill(128,0,0,255);
        box(0.1f);
        popMatrix();

        for(int x=0; x < cloudWidth; x+=4)
        for(int y=0; y < cloudHeight; y+=4)
        {
            int offset = y*cloudWidth+x;
            if(cloud[offset] != null)
            {
                //stroke(cloudAttrib0[offset]);
                stroke(cloudAttrib1.get(x,y));
                point(cloud[offset].x, cloud[offset].y, cloud[offset].z);
            }
        }

        noStroke();
        fill(255,0,0,32);
    }
}