29 June, 2008

Fermat's principle

Searching for topics
I've noticed lately that four major topics have found their way through my blog posts. That wasn't something I planned; it just happened, the way a person goes through different periods in his music listening habits. Those are:
• C++ is premature optimization
• Shader programming is physical modelling (and artists are fitting algorithms)
• Iteration time is the single most important coding metric
• Huge, data-driven frameworks are bad
We explore things, moving our interests more or less randomly, until we find something that deserves a stop, a local minimum in the space of things, where we spend some time and then eventually escape, starting another random walk in search of another minimum.

And while that's a good, smart, simple way to search (see Metropolis random walks, which lead naturally to simulated annealing, which in turn is a very simple implementation of the tabu search ideas... a nice application of Metropolis-Hastings Monte Carlo methods is this one...), it's probably not a good way to write articles, as the result is non-uniform, and I've found that the information I think is important is scattered among different posts.
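To make the search analogy concrete, here is a minimal sketch of the Metropolis acceptance rule behind simulated annealing, minimizing a toy one-dimensional function. Everything here (the function, the cooling schedule, the parameter values) is my own illustration, not anything from a real library:

```python
import math
import random

def anneal(f, x0, steps=20000, t0=2.0, t_min=1e-3, step_size=0.5, seed=0):
    """Minimize f with a Metropolis acceptance rule and geometric cooling."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    cooling = (t_min / t0) ** (1.0 / steps)
    for _ in range(steps):
        # Propose a random neighbour (the "random walk" part).
        x_new = x + rng.uniform(-step_size, step_size)
        f_new = f(x_new)
        # Metropolis rule: always accept improvements, sometimes accept
        # uphill moves, with probability shrinking as temperature drops.
        if f_new < fx or rng.random() < math.exp((fx - f_new) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling
    return best, best_f

# A bumpy function with many local minima; the global minimum is at x = 0.
bumpy = lambda x: x * x + 3.0 * math.sin(5.0 * x) ** 2
x, fx = anneal(bumpy, x0=8.0)
```

At high temperature the walk roams freely between basins; as the temperature drops it settles into one of the minima it has visited.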

As I'm not happy with the way I explained some concepts, I tend to increase the redundancy of my posts: they become longer, and some ideas are repeated in slightly different perspectives, hoping that the real point I wanted to make is eventually perceived.
That also helps me get a clearer view of those ideas. I'm not pretending to write this blog as a collection of articles for others to read; I write about the things I'm interested in because writing helps me first and foremost, and if someone else finds it interesting, that's just an added bonus.

Be water my friend
One of the things I still don't feel I have clearly expressed is my view of data-driven designs.
Let's look at the last two items of my recurring-topics list: "Iteration time is the single most important coding metric" and "Huge, parametric frameworks are bad". The problem is that most of the time, huge parametric frameworks are made exactly to cut iteration times. You have probably seen them: big code bases, with complex GUI tools that let you create your game AI / rendering / animation / shading / sound / whatever by connecting components in a graph or tree, usually involving quite a bit of XML, usually with some finite state machine and/or some badly written, minimal scripting language too (because no matter what, connecting components turns out not to be enough).

How can they be bad, if they fulfil my most important coding metric? That's a contradiction, isn't it?

Yes, and no. The key lies in observing how those frameworks are made. They usually don't grow out of generalizations made on an existing codebase. They are not providing common services that your code will use; they are driving the way you code instead. They fix a data format, and force your code to be built around it. To fix a data format, you have to make assumptions about what you will need. Assumptions about the future. Those always fail, so sooner or later someone will need to do something that is not easily represented by the model you imposed.
And it's at that point that things go insane. Coders do their work no matter what, following something close to Fermat's principle (the basic principle our rendering engineers' interpretation of light is built on). They try to solve problems along paths of minimal design change (pain minimization, as Yegge would call it). And not because they are lazy (or because the deadlines are too tight, or not only, anyway), but most of the time because we (questionably) prefer uniformity to optimal solutions (which is also why a given programming language usually leads to a given programming style...).
So things evolve in the shape that the system imposes on them; requirements change, and we change our solutions to still fit that shape, until they are so far from the initial design that the code looks like a twisted, bloated, slow pile of crap. At that point a new framework is built, in the best case. In the worst one, more engineers are employed to manage all that crap, usually producing more crap (because they can't do any better: it's the design that is rotten, not only the code!).

A common way to inject flexibility into a rotten, overdesigned framework is to employ callbacks (preRenderCallback, postRenderCallback, preUpdate, postEndFrame, etc.) to let users add their own code to the inner workings of the system itself. That creates monsters that have subshapes, built on and hanging from a main shape, something that even Spore is not able to manage.

What is the bottom line? That the more general a library is, the more shapeless it has to be. It should provide common services, not shape future development. That's why, for example, when I talk about data-driven design and fast iteration, most of the time I also talk about the virtues of reflection and serialization. Those are common services that can be abstracted and that should find a place in our framework, as it's a very safe assumption that our solutions will always have parameters to be managed...
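As a hypothetical sketch of what "reflection and serialization as common services" could look like (all class and parameter names here are mine, invented for illustration): objects declare their tweakable parameters once, and generic serialization falls out for free, without the framework dictating what the object is.

```python
import json

class Reflected:
    # Subclasses list (name, default) pairs; the "framework" only knows how
    # to reach parameters, not what the object does.
    params = ()

    def __init__(self, **overrides):
        for name, default in self.params:
            setattr(self, name, overrides.get(name, default))

    def to_json(self):
        # Generic serialization, driven entirely by the declared parameters.
        return json.dumps({name: getattr(self, name) for name, _ in self.params})

    @classmethod
    def from_json(cls, text):
        return cls(**json.loads(text))

class Bloom(Reflected):  # an example rendering effect with tunable parameters
    params = (("threshold", 0.8), ("intensity", 1.5), ("passes", 3))

b = Bloom(intensity=2.0)
clone = Bloom.from_json(b.to_json())
```

The service constrains nothing about the shape of the code that uses it; it only assumes that parameters exist.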
Rigid shapes should be given as late as possible to our code.

Simple example
I still see many rendering engines built on scene graphs, and worse, using the same graph for rendering and for coordinate frame updates (hierarchical animations). Why? Probably because a lot of books show that way of building a 3D engine, or because it maps so easily onto a (bloated) OOP design that it could be an exercise in a C++ textbook.
Hierarchical animations are not that common anyway; they should not be a first-class item in our framework, as that is an unreasonable assumption. They should be one of the possible coordinate-frame-updating subsystems; they should live in the code space that is as near as possible to user rendering code, not rooted in the structure of the system. Heck, who says that we need a single coordinate frame per object anyway? Instancing in such designs is done with custom objects in the graph that hide their instancing coordinates, making everything a big black box, which becomes even worse if you made the assumption that each object has to have a bounding volume. Then instanced objects will have a single volume encompassing all the instances, but that's suboptimal for the culler, so you have to customize it to handle that case, or write a second culler in the instanced object, or split the instances into groups... And you can easily see how things start to go wrong...
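For illustration, here is a toy sketch of hierarchical frame updates as a self-contained pass over flat arrays, instead of a virtual-call walk over a scene graph. Transforms are reduced to 2-D translations to keep the example short, and the convention (parents stored at lower indices than their children) is an assumption of mine:

```python
parents = [-1, 0, 0, 1]                            # node 0 is the root
locals_ = [(10.0, 0.0), (1.0, 1.0), (2.0, 0.0), (0.5, 0.5)]

def update_frames(parents, locals_):
    """Compose each node's local offset with its parent's world position."""
    worlds = []
    for i, (lx, ly) in enumerate(locals_):
        p = parents[i]
        if p < 0:
            worlds.append((lx, ly))
        else:
            px, py = worlds[p]                     # parent already computed
            worlds.append((px + lx, py + ly))
    return worlds

worlds = update_frames(parents, locals_)
```

The point is that this subsystem owns nothing but frames: the renderer, the culler and the instancing scheme are free to organize their own data however they like.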

25 June, 2008

Quality rule

The quality of any given program (or subsystem) obeys the following rule (due to Dijkstra):

$Q_{p}=\frac{S_k}{|p|}$

Where $Q_p$ is the quality, $S_k$ the skill of the coders, and $|p|$ the size of the system.

The bottom line is: big programs suck. Beauty lies in minimal complexity.

Interestingly, the size here is not measured up to an additive constant as in Kolmogorov complexity, but is the actual line-of-code count, so finding the right abstractions, or using the right languages, can help build more expressive systems while keeping the same quality.

So this is also the golden rule for discarding generalizations: if an abstraction does not simplify your code (your current code, in your current project), discard it.

Corollary: generalizations should be made a posteriori (via refactoring) and never a priori (via some kind of optimistic design that tries to guess what's going to be useful in the future).

Below a given quality, the system will simply fail. So no matter what, there is a size limit for our projects that can't be crossed by human beings. It's interesting to note how general this rule is: before computers, people were the computers, and other people (mathematicians) programmed them by trying to optimize analytically as far as possible. With integrated circuits we moved all our workforce into the "outer" shell, the programming one. Data-driven designs, generic programming, code generation and domain-specific languages are all attempts to make a jump towards another meta-level.

P.S. Of course, this is something I've completely made up; I wanted to see how I could embed math in my posts...

24 June, 2008

Normalize(Normal)

Rendering is going through a very illiterate era. Innumerate, really.

We had the software rendering era: triangle filling was hard, ninja coders knew how to count instruction cycles and wrote routines in assembly. Stupid errors were made due to lack of mathematical knowledge, but they were not so important (even if I do think that knowing maths also helps optimization...).

Graphics was full of cheap hacks anyway. There's nothing too wrong with cheap hacks, as long as they look good and you can't do any better. We will always use hacks, because there will always be things we can't do any better.
Light is simply too complex to get completely right. And we don't know that much about it either.

Still, you should know that you're doing a cheap hack. The main point is knowing your limitations. The only thing you know about a graphics hack is that you can't know its limits. Because you didn't start with any reasonable model, you can't tell which things that model is able to simulate accurately and which not; moreover, when you simplify a model, you can't tell how much error you're introducing and in which cases you're committing it.
When you know all that, you've moved from hacking to approximation, which is a far more refined tool.

Today we have a lot of computing power. Computer graphics is relatively easy: anyone can display a spinning cube with a few lines of code, in any language. So you would guess that we are dealing less with hacks and more with maths, right? We should be more conscious, as we no longer have to worry about implementation details like how to draw a triangle (fast enough). Our first optimization should be in the algorithms.

Well, unfortunately, it's not so. Not in the slightest. Most of the time we've simply forgotten about the hacks we used to do and just assume that's the way things work. We are in a pop era. And don't get me wrong, I love pop, there are many geniuses in it, but we shouldn't be limited only to it!

We recently discovered that we did not know anything about color. We were producing colors, but we did not know anything about them. And we still don't. I guess most rendering engineers just skimmed through the concepts of gamma, looked at how they could fix it in the shader or using appropriate samplers and render targets, and hoped really badly that no one would discover any other obvious flaws in their work.
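One concrete instance of the gamma problem: averaging two gamma-encoded pixel values directly is not the same as averaging the light they represent. This sketch uses the simple power-law approximation (gamma = 2.2), not the exact piecewise sRGB curve:

```python
GAMMA = 2.2

def to_linear(v):
    # Decode a gamma-encoded value back to linear light intensity.
    return v ** GAMMA

def to_gamma(v):
    # Encode a linear intensity for storage/display.
    return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0
naive = (black + white) / 2.0                          # average in gamma space
correct = to_gamma((to_linear(black) + to_linear(white)) / 2.0)
```

The naive average says a 50/50 mix of black and white is mid-gray (0.5); done in linear space and re-encoded, the same mix comes out around 0.73, which is why gamma-ignorant filtering, blending and mip-mapping all darken images.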

More or less the same is happening with normals now. How many people have asked themselves what normals are? What vector operators do to normals? What are we doing? Recently, someone tried to answer those questions. It's not a complete answer, but there are a few nice pictures, and again, everyone seems happy. How many people have actually questioned the interpretation of normal vectors that Loviscach gave? How many people are using his formulas consciously?

People encode normal data into two-channel textures by storing x/z and y/z, as it's faster to decompress (you just need normalize(float3(tex2D(sampler,UV).xy,1)) and you're done), and maybe because they saw some pictures showing that the error distribution of this technique is more even across the hemisphere. Who cares that you can't (easily) encode any normal that is more than 45° away from the z axis? Maybe you don't really care, and that error is something you can afford. But still, you should know...

You should know that your linear or anisotropic filter in the sampler is going to average your normal data. And averaging does not preserve length, so you will end up with unnormalized normals. Well, that's easy, just normalize them, right? Yeah, that's the least you can do. Every operation that denormalizes something can be patched with a normalize. But what are you doing? Who knows.
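To see what bilinear filtering does to normal data, here is the effect in miniature: a plain average of two unit vectors is no longer unit length, and the shorter it gets, the more the two normals disagreed (for two unit vectors separated by angle θ, the average has length cos(θ/2)):

```python
import math

def average(a, b):
    # What a bilinear filter effectively does to two texel values.
    return tuple((ca + cb) / 2.0 for ca, cb in zip(a, b))

def length(v):
    return math.sqrt(sum(c * c for c in v))

n1 = (0.0, 0.0, 1.0)
n2 = (math.sin(math.radians(80.0)), 0.0, math.cos(math.radians(80.0)))

avg = average(n1, n2)
# length(avg) is well below 1; renormalizing restores unit length, but the
# information about how much the normals varied is already gone.
```

That lost length is not noise: it is a measure of normal variance, which some techniques deliberately exploit instead of throwing away with a blind normalize.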

Actually, you couldn't care less about normalizing; you could do it only at the end, in your shading formula. Of course you can easily build an algebra of vectors on the two-sphere (unit vectors) by taking your familiar linear algebra operators and appending a normalize to each of them. But what are you doing? We use normals as directions; we should have operations that are linear in direction space. If an operation denormalizes our vector but leaves it pointing in the correct direction, that's quite fine! If it leaves it normalized, but pointing in a wrong direction, it is not.

Actually, the only way to avoid the filtering error is to encode angles rather than cartesian coordinates.
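A sketch of that angle-based alternative: store (theta, phi) instead of cartesian components. Whatever the sampler's filtering does to the stored angles, decoding always yields an exactly unit-length vector, though averaged angles are not free of their own artifacts (wrap-around near the phi seam, for one):

```python
import math

def encode_angles(n):
    x, y, z = n
    return math.acos(z), math.atan2(y, x)

def decode_angles(theta, phi):
    # By construction this is always a point on the unit sphere.
    st = math.sin(theta)
    return st * math.cos(phi), st * math.sin(phi), math.cos(theta)

t1, p1 = encode_angles((0.0, 0.0, 1.0))
t2, p2 = encode_angles((1.0, 0.0, 0.0))
# "Filtered" (averaged) angles still decode to a unit vector:
filtered = decode_angles((t1 + t2) / 2.0, (p1 + p2) / 2.0)
```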

So don't normalize! Know your operations. Know your errors! Or if you don't, then don't try to be "correct" in an unknown model. Just hack, and judge the final visual quality! Know your ignorance. But be conscious, don't code randomly.

And of course those are only examples; I could make countless more. I've seen an axis-aligned bounding-box class taking Vector4 as input. Well, in fact, some functions were using Vector4, some others Vector3, some others were overloaded for both. What are you trying to say? That the AABB is four-dimensional? I don't think so. That you are handling 3D vectors in a homogeneous space? Maybe, but after checking the code, no, they did not correctly handle homogeneous coordinates... Actually, I would be really surprised to see a math library where Vector4 is actually intended for that... Well, I'm just digressing now...
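For contrast, here is what correctly handling homogeneous coordinates actually entails (a toy sketch, with a made-up perspective-like matrix): after multiplying a 4-vector through a projective matrix you must divide by w before the result means anything as a 3-D point. An AABB that merely accepts a Vector4 and ignores w is doing neither 3-D nor homogeneous math.

```python
def transform_point(m, p):
    """m is a 4x4 matrix (list of rows, row-major), p a 3-D point."""
    x, y, z = p
    v = [x, y, z, 1.0]
    out = [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    w = out[3]
    return out[0] / w, out[1] / w, out[2] / w     # the perspective divide

# A toy perspective-like matrix: w ends up equal to z.
proj = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
]
p = transform_point(proj, (4.0, 2.0, 2.0))
```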

The problem, unfortunately, is that errors in graphics usually won't make your code crash; it will just look fine. Maybe it will look a little bit CG-ish, or you won't be able to replicate the look you were after. Maybe you'll blame your artists. Probably you will be able to see in which cases it does not look good, and add some more parameters to "fix" those errors, until you don't have enough computing power to run everything, or your artists go crazy trying to find values for all the parameters you added. Or you'll wait for more advanced algorithms...
Yeah, raytracing will surely save us all, it's physically correct, isn't it?

P.S. Reading this (and all) my articles is a lot more interesting if you care to follow the links...

23 June, 2008

Begin here

Often people ask me how to begin programming. Or doing 3D effects. Or how to write elegant C code, how to develop an aesthetic. Those questions are really hard for me. I can point you to some great books on C++ programming, or on programming in general, but nothing for beginners.

How did I begin? Well, I started by poking characters onto the screen of a Commodore 64. Then I learned about DOS interrupt 0x10 and the VGA memory starting address (0xA0000). Then I wrote my own VESA library, started programming in protected mode, and eventually moved to Windows and OpenGL. [note: Wikipedia is CRAZY]

Hardly anything I read in those days is useful now. Sadly, I don't think most future rendering engineers will ever learn how to properly draw a triangle in software (I don't really think they do now, and I don't even think most of them know how a GPU does it). Most of the sites and resources I used are dead now.

So I don't know. I'm clueless. I know a few good books that I would recommend, but I don't know if they're really good for learning.

What I know is that you should always have fun. Do it for fun; no one starts with the maths. At least if you start young as I did, I don't think you'll really appreciate them; that comes later. And to me maths is fun now, much more fun than coding, so I'm still doing everything only for my personal enjoyment.

I would say, start with proce55ing. It's the most fun language I know of, and it's basically Java, so you will do graphics in a mainstream programming language. There are a few tutorials and courses on the Processing site itself, there is an incredibly active community, and Java is an incredibly widespread language, so you won't have any problem finding tutorials and books for starters.

Then move on to more serious 3D stuff. I would say C++ or C#, and OpenGL. You could start with the famous NeHe graphics programming tutorials. Another good way is C# and XNA, especially if you have a 360. Then you will need Cg to code shaders.

If you reach that level, you can start reading everything. Books. Papers.
Take your time. It could easily require your entire life.

Don't EVER think you know enough. You don't. If you've been programming for 4-5 years and took a 5-year university course, then you have just the basics required to be able to understand almost anything, with some effort. They give you only the alphabet; from there on, there is the real knowledge! That's the single most important advice I can give you.
I've seen countless people make fools of themselves because they actually believed they knew how to do something, while they just knew the basics needed to start learning how to do it.
And they are evil, because they truly believe they can do it, and they will do it, and it will be wrong. Don't get me wrong, doing things wrong is fine. If you're not writing production code for a company, that is. Well, but that's another story...

I haven't finished learning; I read new stuff every single day. And I hope I always will. And this is why I'm writing this blog as well.

P.S. Oh I forgot. This one looks promising too as a starting language. It seems to be made for the younger ones but I think it will be interesting for everyone. It's based on Ruby.

P.P.S. The "fatmap" article on triangle filling... is just a starting point; more advanced coders will want to add perspective correction to n-sided polygons, to avoid the cost of emitting multiple triangles after clipping... ;)

20 June, 2008

ATI Global Illumination videos

Just a couple of things I just found, mostly about the new ATI 4800:

Ruby Cinema 2.0 - developer presentation - still image

Scorpion Cinema 2.0 - developer presentation - still image

older stuff:

DirectX 10 "pingpong" demo

DirectX 10 GI demo

Off-topic: Pohl shows more of his raytraced game stuff. I think the code, the results and the presentation are incredibly boring and lame, but anyway, here it is, have a look.

P.S. YouTube is incredible, there's everything, from visualizations of the MLT to tech demos. Very interesting; I wish the YouTube app on the iPod touch would get better someday...

14 June, 2008

Poll results

So my first poll is over. I didn't think it was going to be surprising, but in some ways it is. Out of 74 votes, 28 went to the PC, which thus seems to be your favourite development platform, 25 to the Xbox 360, 15 to the PlayStation 3, and last, with 3 votes each, are the Nintendo DS and the Wii.

I did not include the PlayStation Portable in the poll as I really don't think it could ever be the favourite one. The DS was included as it's kind of oldschool (there are a lot of ex-Amiga programmers and sceners coding on that one), while the Wii, even if it's not fun on the graphics side (and does not come with a GPU debugger), seems to be really simple to work with.

I'm not too surprised by the PC results; my bet was that the 360 would win, but I did not unbias the votes to account for people not having the opportunity to code on it at all (most amateur programmers, despite XNA, are still coding PC-only).
Or at least I think the PC result is due to that bias; I can't really see a reason for the PC to win otherwise. The only nice thing you get is somewhat shorter iteration times, but you have to pay the incredibly high price of coding multiple paths at least for the most common configurations, testing a lot, not being able to dig as deeply into optimizations, and inferior GPU debugging support. DX10 is cool but not really popular yet; the 360 does not have all the bells and whistles of the newest Microsoft API, but it's still way more capable than DX9.

I am surprised by the PS3. I was thinking everyone would be really pissed off by its SDK (which only now can be considered non-offending); it seems that despite the poor work done by Sony, there are still many people recognizing its potential and enjoying finding a way to actually use all that processing power.

Will PS3 games technically outsmart 360 ones in the end? I doubt it. Cell is powerful, but the overall architecture is not as balanced, refined and well thought out as the 360's (in my opinion). And the GPU is, mhm, mostly a joke compared to the ATI one. Still, there's plenty of raw power; we'll see...

Pet projects

Many, many programmers that have real jobs also manage to find free time for pursuing personal coding projects.

That's not my case: I do have a lot of free time, but I manage to waste it in numerous other ways (right now: playing Rock Band, playing Lego Indiana Jones, watching Montalbano or Dr. House or South Park, reading books and papers, writing this blog, sleeping, doing photography).
Among my longest-standing projects that never were, I can count: a SIMD raytracer, ideally intersecting isosurfaces or something equally exotic and implementing Metropolis light transport (so I don't have to compare it to all the other, more useful, triangle-mesh-based raytracers); a shader prototyping framework/DSL (to replace FX Composer); and many other, mostly computer-art-related ones. Sometimes I do some shader programming at home, but anything more complex than that doesn't seem able to overcome my laziness.

But assuming you're smarter than me, you might be interested in starting one. And hey, personal projects are about having fun and learning new things, so why do it in a boring language-that-you-already-know if you can do it in a new, exciting one? That's an incredible deal: you'll learn a language, you'll finish your project faster (if you choose the language wisely), and you'll probably learn new programming paradigms that will extend your "design pattern" (!!!) library.

What if you don't have any idea for an interesting project? Well, fear not, my friend. You can achieve the same goal by trying to solve programming challenges. There are a lot of good ones. Most are language-agnostic (like Hacker's Delight, Project Euler, Code Kata); some very good ones target a single language (and are perfect if you want to learn it) but can usually also be solved in other languages without any problem (like the Python Challenge, Ruby Quiz or the About.com challenges); others are geared towards a given programming paradigm (like the ICFP contest).

The following are my personal suggestions (after that I'll try not to write about languages for a while, back to rendering stuff...):
• HLSL/CG/CUDA/RapidMind: perfectly suited for data-parallel algorithms, like image processing or Eulerian physics. And of course they're kinda good for shaders and realtime graphics stuff too. Shaders are fun; you should really try to get a job that includes coding them. You get all the glam, no crashing bugs, no pointers, fast iteration times, advanced math, lots of research, while still caring a lot about low-level optimizations too! Also, the GPU forces you to code in a data-parallel way, but data parallelism is very good for modern multicore/multithreaded CPUs as well (RapidMind compiles for the Cell SPUs too, and MUDA is another interesting project), so it's a good skill to have. For CG/HLSL programming I would recommend using FX Composer (not great, but the best we have) or a .Net language like C# via the Tao framework, SlimDX or MD3D10
• OCaml/F#/AliceML: those are quite general programming languages; OCaml has a compiler that is as optimizing as the most refined C++ ones. But ML languages, even if they are not purely functional, are strongly oriented towards functional programming. I wouldn't write a server in them (but that's only my opinion). They seem to be the perfect environment to write a raytracer in (hint hint! well, nothing new really), even if they miss SIMD intrinsics (as far as I know). They also have a strong reputation for writing program transformers (i.e. compilers). And of course, like most functional programming languages, they support incremental development a LOT better than C++. I would start with F#, as it has a lot of nice features, it's a .Net language that works with Visual Studio, and it's going to be included in the next iteration of the aforementioned IDE. OCaml on the other side is a much older ML dialect, and it also comes with one of the most optimized compilers. I've included AliceML as well because I like its implementation of futures, serialization and distributed computing, but I don't know it well enough to recommend it. Just have a look at it, as it's a good implementation of many nice ideas...
• Scheme: Scheme is a Lisp dialect. Even if I've only done projects in Common Lisp, I find it too huge, confusing and outdated, so I suggest Scheme instead. Hey! MIT guys are smarter than normal people, and they learn coding with Scheme! Surely there has to be something good in it... You can see that I've not linked "Scheme" to schemers.org, as the language standard is so small that it's commonly extended in non-standard ways by every implementation. Also, the standard itself has a lot of "optional features" that may or may not be implemented by a given compiler. That's why I've linked directly to a specific Scheme implementation, DrScheme/PLT Scheme. Lisp should be known for plenty of reasons: it's one of the first programming languages, it's a symbolic language, it's a functional language, it's elegant, it has a strong mathematical foundation, it's a metaprogramming language, and many other languages have features that are heavily inspired by it (Ruby...). But what about an actual, real project using it? Well, of course it's perfectly suited for metaprogramming and writing domain-specific embedded languages. Programs that transform themselves. Add to this the ability to modify code while it runs, and you have a natural platform for doing computational art. There are already a couple of projects doing that, but they don't seem very refined to me; there's still plenty of space...
• Python/Ruby: as a scripting language, I prefer Ruby way over Python. Ruby is really great, it's almost perfect. And it's easy to learn too. But Ruby is slower, way slower; I don't think it's really suited, as of now, for rendering stuff. There are a lot of numerical and graphical libraries for Python, API bindings; it's used as a scripting language in FX Composer (via IronPython) and in other DCC applications. Overall it could be a good choice. It's even used to run SIMD code on the Cell processor...
• Other languages: there are plenty of them. JavaScript (well, ECMAScript as it's called now), for example, is way more "advanced" than you might think (it's not Java AT ALL; it's Scheme with a Java-ish syntax applied over it to make the marketing dept. happier), and it's commonly used as a scripting language in many DCC applications. Smalltalk is the incarnation of live coding, reflection and object-oriented programming in its purest form (and probably had the same influence on the design of subsequent languages that Lisp had). Proce55ing is a simple and kinda well-known graphics-oriented Java dialect. Io is nice, incredibly simple yet powerful, and comes with OpenGL bindings. Vector languages like APL and J are also worth knowing. Haskell is lazy, and I love that. Forth, oh shit, I even like TCL... And of course you might actually want to base your project on the implementation of a new programming language (in that case I would start with LLVM and ANTLR)!
Of course, after all that, you might start to appreciate C# more: the features added in the third version of the language, LINQ, Parallel.FX, and the huge effort that's going on to evolve it to include features that are really nothing new (Lisp and more Lisp!), but that still never found their way into mainstream languages before...
I've already posted plenty of links to cool C# stuff, so I won't repeat them again here.

P.S. All the above does not reflect my personal tastes in programming languages; it's simply stuff that's very useful to learn and to play with! In fact, even if I think it's a broken language, on a good day I can still enjoy programming in C++. And even if I love Lisp's power and elegance, I still don't think I should implement a language before starting to use it. Lisp is the ultimate metaprogramming tool, but in order to achieve that it has almost no syntax, and I love having a syntax when I'm programming, because I'm noob enough to actually enjoy having tools for checking it (that's also why I generally prefer static typing, but then I'm also more of a "system" programmer than a "scripting" one).

08 June, 2008

C++ is premature optimization

UPDATE: further evidence here

When I started programming... well, I was a child; I had a Commodore 64 and my programs were usually around thirty lines of code. But when I started coding more seriously, C++ was not used at all for graphics. The PC demoscene was still using Pascal (in Borland's Turbo dialect) and assembly; C (usually with the Watcom compiler) was not the de facto standard yet. Actually, I remember that I had to persuade the other coders in my demogroup to use DJGPP (a DOS port of the gcc compiler) instead of Pascal (which I did not know anyway; I was using only assembly, C and PowerBASIC).

Those were times when you could still count how many cycles a routine took, how it was going to be executed by the two pipes (U and V; wow, strikingly similar to what we now do with shader code, nice coincidence) of the Pentium processor, where you would expect a cache miss or a branch misprediction (without using a profiler, just by looking at a given loop). And you could make a reasonable estimate of all that even when coding in C; you could "see" the underlying machine code.

So I know why people love C++. I know why I love it too (when I love it :D).
It's not the best language in the world, and we probably all know that. But it gives us power. Not the kind of power you feel when you solve a problem in two lines of code; that is something the scripting guys can enjoy (even if they are programming in something as horrible as Perl). Not even the power you have when you're able to express an algorithm in an incredibly neat and elegant way (SML? OCaml? Haskell?). Nor the kind you feel when you manage to extend your language to give it new and powerful programming idioms (Lisp? TCL?). Nothing like that, no.
It's the kind of power you get from being in control.

When you code in C++ you can easily see the equivalent C code: you know that virtual functions are equivalent to a pointer to a struct of function pointers, you know that templates are glorified macros, etc... And well, nowadays C is our cross-platform version of assembly. There's no point in using assembly anymore (check the id Software Quake and Doom sources to see how little asm was used even back then!), as we're not better than compilers; with our modern, incredibly complex CPUs (until recently, all the extra transistors we got for new CPUs went into new and fancy ways of decoding and scheduling instructions, not into more raw computing power) we can't count cycles anymore. We can only give hints to the compiler, design for cache coherency and try to avoid branches.
Even hardcore PC demoscene coders do not use assembly for speed anymore; they use it for size (it's easy to predict the size of each instruction, so it's still possible to be in control of that). Isn't it the same with C++? How many of us could claim to make the optimal choice of which functions to inline? Of where to place mutex locks? I guess Java's HotSpot can (or LLVM runtime optimizations, etc...). Or if it doesn't yet, it COULD.
The only exception is SIMD code; our compilers are still not very good at that...
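To make the "seeing the equivalent C code" point concrete, here is a sketch (in C-flavored C++; the shape names are mine, not from any real codebase) of what a virtual call roughly compiles down to: a hidden pointer per object to a per-class struct of function pointers.

```cpp
#include <cassert>

// Hand-rolled "vtable": roughly what the compiler generates for `virtual`.
struct ShapeVTable {
    float (*area)(const void* self); // one slot per virtual function
};

struct Circle {
    const ShapeVTable* vtbl; // the hidden vptr the compiler would add
    float radius;
};

float circle_area(const void* self) {
    const Circle* c = static_cast<const Circle*>(self);
    return 3.14159f * c->radius * c->radius;
}

// One shared table per class, not per object.
const ShapeVTable circle_vtable = { circle_area };

// The "virtual call": load the vptr, load the slot, make an indirect call.
inline float area(const Circle& c) { return c.vtbl->area(&c); }
```

Once you see this, the cost model of virtual dispatch (two dependent loads plus an indirect branch the CPU has to predict) is no longer magic.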

And so I don't really want to write down here all the problems, design errors, limitations and such of C++. I wanted to, but I realized that I'm not the best person to ask about that.
There are plenty of books on how bad C++ is, and they are called C++ Coding Standards, Effective C++, Exceptional C++, even Design Patterns. All books full of solutions to C++ problems: they mostly talk about how to do some very simple things in C++ the correct way, or what to avoid when coding in C++ (most of the time with no exceptions, or only very rare ones). Of course, in a good language simple things should naturally be done the correct way, things that should not be done should not be possible at all, by default things should work in the most commonly used way, etc...

We also have tools to check that we are not doing things in any bad way (and almost every project I've worked on did at least treat warnings as errors).
We don't really want to code in C++; we try the best we can to restrict and extend C++ into something that's kinda suitable for the development of our huge projects. C++ alone is not. Before we can start coding in it we have to ban certain (many!) practices, write libraries (memory allocation, serialization, attribute system / reflection, wrapping of compiler-specific extensions like alignment stuff, etc.) and also write some tools. The most striking example is the "bulk" build ones: projects that do not compile our single C++ files, but use the preprocessor to concatenate them into larger ones, to keep the huge link times caused by the C++ linking model manageable. And all this only to make C++ usable, to overcome most of its problems, not to make it perfect; that stuff enables us to start coding our project, it does not give us the expressive power that we have in other languages.
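A sketch of what one of those generated "bulk" build files looks like (all file and directory names here are made up for illustration): the tool emits a handful of translation units, each textually including many sources, so the compiler front-end runs far fewer times and the linker sees far fewer object files and symbols.

```cpp
// bulk_render.cpp -- hypothetical file emitted by the build tool.
// None of these sources is compiled on its own; the preprocessor
// concatenates them into this single, larger translation unit.
#include "render/Device.cpp"
#include "render/Shader.cpp"
#include "render/Mesh.cpp"
#include "render/PostEffects.cpp"
```

The catch, of course, is that statics and anonymous namespaces from different files now share one translation unit, which is exactly the kind of workaround-induced fragility the paragraph above complains about.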

BUT I don't want to make my point against C++ based on this; there are many wrong things, and plenty of evidence, even without looking at other languages for "cool" features to have. Enough of that (I promise!).

Most people who still prefer C++ as a language (as opposed to practical considerations on why we still have to use it in our projects, which is kinda a different matter) are not ignorant of those considerations (some are, but many are not). They simply like to be in control, no matter what. They need performance, more than anything else. Or they think they do. And they argue that being in control makes our code faster. And here they are wrong.

We moved from Assembly to C and then to C++ for a reason. Our projects were growing and we were no longer able to control them in such low-level languages. The same thing is happening now: we should seriously look for alternatives to C++, not only to be able to do our work while retaining mental sanity, but also because we care about performance. We are no longer able to write good code and at the same time sidestep all the C++ shortcomings. Code reviews mostly care about catching coding standard infringements, that is, they try to keep us on the path of our constrained/extended version of C++. They don't deal much with the design, and even less with performance.

Performance does not mean bothering to replace i++ with ++i (not that you shouldn't do it, as it costs you nothing), but first and foremost algorithms, then data access design (cache coherency) and parallelism (multithreading). If we always took such care over those issues, then we could profile and locate the few functions that need to be optimized (SIMD, branchless stuff, etc.). But most of the time, we find it hard even to get something to work. We are surrounded by bugs. We are unproductive; even compiling requires too much time. And so we do bad design. I've seen a complex engine being too slow not because a single function or a few functions were slow, but because everything was too slow (mostly due to data and code cache misses, in other words, due to the high-level design, or lack of "mature" optimizations). We laugh if some new language (with a new, not so optimized compiler) reaches only 80% of the performance of optimized C++. But we generally don't write optimized C++. We are way under 80% of an optimized implementation, in most of our code.
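As a sketch of what "data access design" means in practice (the particle structs here are just an illustration, not anyone's real engine code), compare an array-of-structures layout with a structure-of-arrays one:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// AoS: each particle's fields are interleaved, so a pass that only updates
// positions still drags colors and lifetimes through the cache.
struct ParticleAoS { float x, y, z, r, g, b, a, lifetime; };

// SoA: fields that are processed together are stored together; a position
// pass now streams through contiguous, fully used cache lines.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> lifetime;
};

void integrateX(ParticlesSoA& p, float vx, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i)
        p.x[i] += vx * dt; // contiguous access: cache- and SIMD-friendly
}
```

No single function here is "slow"; the win comes from the layout decision, which is exactly the kind of design-level optimization that can't be retrofitted by tweaking hot spots later.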

We are too concerned about the small details, the language quirks, its unexpressiveness, etc. We are using Assembly. We are optimizing prematurely. We are in control of something we can't control anymore. And it's truly the root of all evil. I would encourage you to try a few languages other than C++. See how much more productive they are. See how they let you think about your problem and not about your code. How nice it is to have a language that has types and objects, and happens to know about them too (reflection). C# and F# are a good start (but they are only the _start_).

P.S. It's not that C++ as a language should be considered "faster" than many others. C++ compilers surely are incredibly good. But for example, as a language C# has most of the features that enable us to write fast code in C++. The only things missing are, in my opinion, some stricter guarantees about memory locality, and the C++ const correctness (well, SIMD intrinsics, memory alignment etc. are supported by C++ compilers, but they are not part of the C++ standard). But in return you get some other nice features that C++ lacks, like a moving garbage collector (that can move memory to optimize locality automatically), runtime optimizations (which the current Microsoft .NET compiler does not use as far as I know, but which are well possible given the language design, and which are only barely mimicked by profile-guided optimizers for C/C++), and type safety (which avoids some aliasing issues that make C++ code difficult to optimize).
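A small illustrative example of the aliasing issue mentioned above (the function names are made up; __restrict is a non-standard but widely supported compiler extension, not part of the C++ standard):

```cpp
#include <cassert>

// The compiler must assume out and in may overlap, so in[0] has to be
// reloaded from memory on every iteration of the loop.
void scaleByFirst(float* out, const float* in, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = in[0] * out[i];
}

// With __restrict we promise there is no overlap, so the load can be
// hoisted out of the loop, as a type-safe language could often do for free.
void scaleByFirstRestrict(float* __restrict out, const float* __restrict in, int n) {
    float k = in[0];
    for (int i = 0; i < n; ++i)
        out[i] = k * out[i];
}
```

Both functions compute the same thing; the difference is only in what the optimizer is allowed to assume.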

---
Note: I like these two quotes from here:

...We tend to take this evil for granted, similar to how we take it for granted that people get killed every day by drunk drivers. Fatal accidents are just a fact of life, and you need to look both ways before you cross the street. Such is the life of a C++ or Java programmer...

...I'm also confident that regardless of which language you favored before reading this article, you still favor it now. Even if it's C++. It's hard to re-open a closed mind...

That site is actually incredibly cool; other good reads from there are this and this.

Framerate does matter

Game programming is a difficult beast. We write performance-critical code, we are in an incredibly competitive field; well, you probably know about it. The problems we face are not unique to games: coding in general poses difficult engineering problems.

That's mainly because software requirements always change, planning much ahead is difficult, and also because programs fail deterministically, because of bugs, not probabilistically, under stress (as in most other engineering disciplines), and there's no redundancy we can add to be safer (actually, redundancy adds problems).

In order to cope with those difficulties, some rules, guidelines and strategies are adopted. The most basic of those is to always have a program that compiles. That's obvious: if we submit a code change that breaks the build, everyone who gets that change will be unable to work. The next obvious step is to always have not only something that compiles, but something that actually works too. So we should always test our code; testing can be automatic (unit tests, stress tests, etc.) or done by QA guys (or both).
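A unit test, in its smallest form, is just code that checks code. A hypothetical sketch (the game function here is invented for illustration):

```cpp
#include <cassert>

// A trivial, hypothetical game function with known expected answers...
int clampHealth(int h) { return h < 0 ? 0 : (h > 100 ? 100 : h); }

// ...and its unit test: run automatically on every build, it fails fast
// the moment a change breaks the expected behaviour.
void testClampHealth() {
    assert(clampHealth(-5) == 0);
    assert(clampHealth(50) == 50);
    assert(clampHealth(250) == 100);
}
```

The point is not this particular function, but that the check runs without a human in the loop, so "it works" stays true continuously, not just at milestones.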

That's both because bugs can hamper work (if someone has to work on a given subsystem, it has to run correctly), but also because smashing bugs usually requires a lot of time, so we don't want them to accumulate at the end of the project.

Those two rules are pretty obvious, and well understood. On the other hand, one thing that I've not always seen enforced is the importance of the framerate. Framerate drops should be considered worse than bugs. Why? Because bugs can usually be solved without changing the data, while performance issues (like any resource overuse) might end up, and usually will if not solved soon, changing art assets. I've seen this happen many times: projects reaching their deadlines with framerate problems, having no time to experiment with different technical solutions, thus ending up removing polygons and scaling textures down.

No surprises, please
The framerate problem is just one example. In general we want to be flexible to change, but to know what to change as soon as possible, while still trying to minimize the changes. We want to iterate on the game, always leaving it in a consistent state, respecting its invariants. More, we want to try not to touch a given feature after it's done. So we iterate in order to add complete features.
Why? Because we don't want surprises. We can't make a long-term plan (only rough estimates, that we will refine while working), because we know that long-term, huge designs fail in computer science, so we like working in small iterations, identifying problems soon and being able to react to them soon. During an iteration, it's possible to fail to deliver a feature in the estimated time, but that's not tragic, if we know it as soon as possible.
The worst error we can make, and I know because I've seen projects doing it, is to code ten incomplete features, confident that they will only need a given amount of time at the end of the project to refine them. In those cases, Murphy's laws come for us, and whatever time we reserved for finishing the features will be too little in the best case; in the worst, the coded features will simply not be useful in achieving the desired result, and in the end they will be completely cut, wasting work.

That's why almost everyone in gaming is using Scrum nowadays. The team is split into "vertical slices", groups with enough competencies in every role to complete a feature; features are split into tasks, tasks that are planned and re-evaluated in short sprints (usually two weeks). Each task and each feature is quality tested, and considered done only if approved by the users of that feature (usually designers or artists). Every company has its own version of this process, but anyway, it's nice, it works.

Slicing a team
What I don't really love about that approach is that every group (rendering, game, artists, etc.) is sliced and people are assigned to a given sprint team. That is great for cross-functional communication, which I think is really the key to a successful game, especially if you're working on rendering. The collaboration between coders and artists is fundamental; it's the single most important thing to achieve great graphics.
On the other hand, rendering is also about cutting-edge technology and algorithms, and Scrum risks tearing your rendering team apart, creating a situation where people don't know what other people are doing, or they just know the tasks, but not the technical details, and so they can't share their ideas. Of course if the owner of a given task needs and asks for help, he will probably get the support he needs. But in many cases, you don't know that you need help, you don't know that another engineer could have a better idea, or that you could have a good one about the problems that other people are going to solve.

Changing attitudes
Unfortunately, this kind of communication is mostly an attitude. If you already have it in your team, even if using Scrum, then it's fine. But what if you don't? You can't force people to speak more, hang out together, talk about their work, and share ideas. But you can show the way, trying to be non-intrusive.
Good example helps. You could also think of having a quick, technical "show and tell" meeting, maybe once every week or two, or by request, when someone is working on something interesting or hard that he wants to talk about, to find ideas. A nice thing is if the lead invites people to speak in those meetings, as the lead should know who is doing what, how he plans to solve a given problem, and whether that problem is worth discussing or not. Another thing that I think is worth experimenting with is "code previews". Code reviews are very useful; they catch bugs, distractions, guideline infringements, etc. But they rarely deal with design: usually it's too late for that, and too difficult, as you're reviewing many lines of changed code, and it can be hard to see the overall design and underlying ideas. In those cases, early reviews might help, and I personally do them when designing big subsystems, asking for reviews but specifying that the code is not complete, and that I just want feedback on the overall structure of the code.

06 June, 2008

Some useful math software

During your daily render coding practice, you might need to do some computations: visualizing functions, simplifying expressions, computing integrals and partial derivatives, displaying debug data and reflected structures.
There's actually one program to rule them all, Mathematica. Geometry Expressions is very nice too. If you don't have and don't want to buy those, you might try a couple of other programs:

GraphCalc - Good graphical calculator for Windows, even if most of the time I use the simpler PowerToy calculator or enter the expressions directly into Launchy...
GraphViz - Graph layout
SciLab - Great free replacement for MatLab
Maxima - Probably the best opensource CAS, this is a nice cheatsheet for it
OpenAxiom - Another CAS with a long history
Sage - Uses Maxima for its CAS functionality, adds some more, and presents it with a different syntax
MayaVi - Powerful 3d graphing system
OctaViz - VTK 3d graphing for the opensource MatLab clone octave
Cinderella - Interactive geometry

Live coding

http://en.wikipedia.org/wiki/Live_coding

I've never seen that in games. Some games use scripting languages, but that's kinda different; they are usually limited to AI or gameplay. And usually you can't change a script while it's running. It's a shame, because it's surely possible to do that for non-interpreted languages as well: Microsoft has edit-and-continue, and similar stuff for Java was implemented by ZeroTurnaround.
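As a minimal stand-in for the idea (real edit-and-continue is far more involved, and everything here is illustrative), the mechanism that dlopen()/LoadLibrary()-based hot-reload builds on is simply calling through pointers that can be re-bound while the game keeps running:

```cpp
#include <cassert>

// The engine calls game logic through a pointer that can be re-bound at
// runtime; a dlopen()/LoadLibrary()-based hot-reload would re-bind it to a
// symbol from a freshly recompiled module instead of a function compiled in.
using UpdateFn = int (*)(int health);

int updateV1(int health) { return health - 10; } // original tuning
int updateV2(int health) { return health - 5; }  // the "recompiled" version

struct AISystem {
    UpdateFn update = updateV1;
    void reload(UpdateFn fresh) { update = fresh; } // swap, no restart
};
```

The hard parts a real system adds on top are preserving live state across the swap and patching data layouts, which is why language-level metadata (as in the C# paper below) helps so much.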

p.s. Actually I'm lying, there are many interesting experiments about livecoding for games. But nothing is publicly available, and that's strange considering that livecoding is nothing new (Lisp... Smalltalk!) and it's used in other fields (to make music?!? weird!).

Update: I've found this paper to be an interesting read... It's an edit-and-continue implementation, in C#, outside of the IDE, that was used in a 3D virtual reality application called Goblin. It uses the .NET framework, so it doesn't deal with native assembly code, but that could be done as well; the key point is not having a JIT to compile the bytecode, but using a language/compiler that has enough metadata to perform the trick.

Update: This is very interesting as well. Well written.

05 June, 2008

Performance does not matter

Well, it's a lie, of course it does, don't flame. But it's not what you should care about in your game/rendering framework. Or rather, it's not the first thing you should care about.
Most of the time when I've seen slow code, it was not due to incompetent programmers (well, MOST of the time), nor to a too high-level or slow framework (again, MOST of the time). It was simply because the programmers did not have enough time in the project to optimize it. It's that simple. They knew how to speed it up, they had it on their "to-do" lists, but those ideas simply were never implemented. The same could be said about rendering features.

That should make us think. We come from eras when programs were small and computers did not have many resources. We had to optimize everything; we did use C and assembly. Nowadays, computers are powerful and our codebases are huge. Huge! We need to shift away from the low-level, hand-optimize-everything mindset. Not because we don't care about performance, but because to care about it we need to be able to iterate faster, to be more productive in our work.
We are writing kinda-optimized bad code. We need to write better code (i.e. have the time to test different algorithms and ideas) and then, eventually, dig into low-level optimizations where needed. And the key to that is productivity.

What does my ideal rendering framework look like? Well, I'm not too sure, but I think it should be...

• Simple. Don't really care about building a huge, high-level system with a lot of rendering features. For me, even something that's based on a scenegraph is kinda bloated. You don't need a graph for most rendering tasks.
• Customizable (code-wise). It should be easy to write custom rendering pipelines. Don't try to make it a generic black box, it won't work. Custom code has to be written. Don't bloat. Don't overengineer. Things do change. Don't go too big.
• Designed for fast iteration. Data-driven? Yes, but without overengineering it. Again, don't think about a huge, generic system driven by parameters. But we do like to have a generic parameter system, to be able to easily drive systems with data, reflect it, save it, etc. Live tweaking (i.e. parameters, data)? Nice. Hot swapping (objects, shaders, textures)? That would be great. Live coding? Well, now maybe I'm dreaming...
• Designed to be optimizABLE. Don't pessimize prematurely (i.e. don't do stupid things that could be avoided without wasting any additional time). Do mature optimizations (i.e. care about performance where it has an impact on the overall design; that means having a design that is not memory-unfriendly, for example, and that is not thread-unfriendly).
• Designed to avoid bugs, designed to catch bugs (runtime checks, automated testing). Bugs are a waste of time.
• Integrated with tools and the artist pipeline. Tweaks done in the engine should be reflected in the artists' DCC programs. Tweaks done in DCC programs should be reflected in the engine. Ideally, data should be unique and external, modifiable by different clients, and managed by tools that version it and convert it.
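A sketch of the kind of minimal, non-overengineered generic parameter system I have in mind (all the names here are hypothetical, invented for this example): systems expose named values, and tools can tweak them live and serialize them without knowing anything about the systems' types.

```cpp
#include <cassert>
#include <map>
#include <string>

// A tiny parameter system: systems register named values, and a tweak UI
// or serializer can enumerate and modify them through this one interface.
class ParameterBlock {
public:
    void expose(const std::string& name, float* value) { params[name] = value; }
    bool set(const std::string& name, float v) { // called by a tweak UI
        auto it = params.find(name);
        if (it == params.end()) return false;
        *it->second = v; // writes straight into the live engine variable
        return true;
    }
    float get(const std::string& name) const { return *params.at(name); }
private:
    std::map<std::string, float*> params; // name -> live engine variable
};

// A rendering system opts in by exposing its tweakables.
struct BloomEffect {
    float threshold = 1.0f;
    void reflect(ParameterBlock& pb) { pb.expose("bloom.threshold", &threshold); }
};
```

A few dozen lines like these already give live tweaking and a natural serialization point, without the huge graph-of-components framework the post argues against.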
Note: this is probably not too far away from worse-is-better; I've also found the futurist programming manifesto interesting.