Search this blog

26 October, 2016

Over-engineering (the root of all evil)

Definitions

Over-engineering: the premature use of tools, abstractions or technical solutions, resulting in wasted effort and unnecessary complexity.


When is a technique used prematurely? When it doesn't solve a concrete, current problem. It is tempting to define good engineering in terms of simplicity or ease of development, but I think it's actually a slippery slope. Simplicity means different things to different people. 


One could see programming as compression (and indeed it is), but we have to realize that compression, or terseness, per se, is not a goal. The shortest program possible is most often not the simplest for people to work with, and the dangers of compression are evident to anybody who has had to go through a scientific education: when I was in university, the exams that were by far the most difficult came with the smallest textbooks...


Simplicity also means different things to different people. To someone in charge of low-level optimizations, working with fewer abstractions can be easier than having to dive through many software layers. To a novice, software written in a more idiomatic way for a given language might be much easier than something adapted to be domain specific.


Problems, on the other hand, measurable, concrete issues, are a better framework. 

They are still a soft, context-dependent and team-dependent metric, but trying to identify problems, solutions, and their costs brings design decisions from an aesthetic (and often egocentric) realm to a concrete one.
Note: This doesn't mean that good code is not or shouldn't be "beautiful" or elegant, but these are not goals, they are just byproducts of solving certain problems the code might have.
Also, "measurable" does not mean we need precise numbers attached to our evaluations; in practice, most things can't be better than fuzzy guesses, and that's perfectly fine.

Costs and benefits


Costs are seldom discussed. If a technique, an abstraction, an engineering solution doesn't come with drawbacks, it's probably because either it's not doing much, or because we've not been looking hard enough. 

  • Are we imposing a run-time cost? What about the debug build? Are we making it less usable?
  • Are we increasing build-times, lowering iteration times? 
  • Are we imposing a human cost, in terms of complexity, obfuscation, the ability to on-board new engineers?
  • Are we making the debugging experience worse? Profiling?
  • Do our tools support the design well? Are we messing with our source control, static analyzer and so on?
  • Are we decreasing code malleability? Introducing more coupling, dependencies? 
  • Reducing the ability to reason locally about code? Making details that matter in our context hidden at call-site? Making semantics less explicit, or less coupled to a given syntax? Violating invariants and assumptions that our code-base generally employs?
  • Does it work well with the team culture? With code reviews or automated testing or any other engineering practice of the team?
We have to be aware of the trade-offs to discuss an investment. But our tendency to showcase the benefits of our ideas and hide the costs is a real issue in education, in research, and in production. It's hardwired in the way we work. It's not (most often) even a matter of malice; it's simply the way we are trained to reason: we seek success and shy away from discussing failure.

I've seen countless times people going on stage, or writing articles and books, honestly trying to describe why given ideas are smart and can work, while totally forgetting the pain they experience every day due to them.


Under-engineering


And that's why over-engineering truly is the root of all evil. Because it's vicious, it's insidious, and we're not trained at all to recognize it. 


It is possible to go out and buy technical books, maybe go to a university, and learn tens or hundreds of engineering techniques and best practices. On the other hand, there is almost nothing, other than experience and practice, that teaches restraint and actual problem solving.


We know what under-engineering is: we can recognize duplicated code, brittle and unsafe code, badly structured code. We have terminology, we have methodologies. Testing, refactoring, coverage analysis...


In most of the cases, on the other hand, we are not trained to understand over-engineering at all.

Note: In fact, over-engineering is often more "pronounced" in good junior candidates, whose curiosity leads them to learn lots of programming techniques, but who have no experience with their pitfalls and can easily stray from concrete problem solving.

This means that most of the time, when over-engineering happens, it tends to persist: we don't go back from big architectures and big abstractions to simpler systems, we tend to build on top of them. Somewhere along the road, we made a bad investment, with the wrong tradeoffs, but now we're committed to it.


Over-engineering tends to look much more reasonable, more innocent than under-engineering. It's not bad code. It's not ugly code. It's just premature and useless: we don't need it, we're paying a high price for it, but we like it. And we like technology, we like reading about it, keeping ourselves up-to-date, adopting the latest techniques and developments. And at a given point we might even start thinking that we did make the right investment, that the benefits are worth it, especially as we seldom have objective measures of our work and we can always find a rationalization for almost any choice.


I'd say that under-engineering leads to evident technical debt, while over-engineering creates hidden technical debt, which is much more dangerous. 


The key question is "why?". If the answer comes back to a concrete problem with a positive ROI, then you're probably doing it right. If it's some vague other quality like "sharing", "elegance", "simplicity", then it's probably wrong, as these are not end goals.

When in doubt, I find it's better to err on the side of under-engineering, as it tends to be more productive than the opposite, even if it is more reviled.


"Premature optimization is the root of all evil" - Hoare, popularized by Knuth.

I think over-engineering is a super-set of premature optimization. In the seventies, when this quote originated, premature optimization was the most common form of this more "fundamental" evil.
Ironically, this lesson has been so effective over the decades that nowadays it actually helps over-engineering, as most engineers read it incorrectly, thinking that performance in general is not a concern early on in a project.


Intermission: some examples


- Let's say we're working on a Windows game made in Visual Studio, and that the Visual Studio solution is done badly: it uses absolute paths and requires the source code, and maybe some libraries, to be in a specific directory tree on the hard drive. Anybody can tell that's a bad design, and the author might be scorned for such an "unprofessional" choice, but in practice the problems it could cause are minimal and can be trivially fixed by any programmer.


On the other hand, let's say we started using, for no good reason, a more complex build system, maybe packages and dependencies based on a fancy new external build tool of the week.

The potential cost of such a choice is huge, because chances are that many of your programmers aren't very familiar with this system: it brings no measurable benefits, but now you've obfuscated an important part of your pipeline. Yet it's very unlikely that such a decision will be derided.

- Sometimes issues are even subtler, because they involve non-obvious trade-offs. A fairly hard-coded system might be painful in terms of malleability; maybe making changes in this subsystem requires editing lots of source files every time, even for trivial operations.


We really don't like that, so we replace this system with a more generic, data-driven one which allows everything to be done live and doesn't even require recompiling code anymore. But say that such a system was fairly "cold", and the changes were actually infrequent. Suppose also that the new system takes a fair amount more code and now our entire build is slower. We ended up optimizing a workflow that was infrequent, but on the downside we slowed down the daily routine of all the programmers on the team...

- Let's say you use a class where you could have used a simple function. Maybe you integrate a library, where you could have written a hundred lines of code. You use a templated container library where you could have used a standard array or ad-hoc solutions. You were careless and now your system is becoming more and more coupled at build-time due to type dependencies. 

It's maybe a bit slower at runtime than it could be, or it makes more dynamic allocations than it should, or it's slow in debug builds, and it makes your build times longer while being quite obscure when you actually have to step into this library code.

This is a very concrete example that happens often, yet chances are that none of this will be recognized as a design problem, and we often see complex tools built on top of over-engineered designs to "help" solve their issues. So now you might use "unity builds" and distributed builds to try to remedy the build time issues. You might start using complex memory allocators and memory debuggers to track down what's causing fragmentation, and so on and so forth.


Over-engineering invites more over-engineering. There is this idea that a complex system can be made simpler by building more on top of it, which is not very realistic.


Specialization and constraints


I don't have a universal methodology for evaluating return on investment once the costs and benefits of a given choice are understood. And I don't think there is one in general, because this metric is very context sensitive. What I like to invite engineers to do is to think about the problem, to be acutely aware of it.


One of the principles I think is useful as a guidance is that we operate with a finite working set: we can't pay attention to many things at the same time, we have to find constraints that help us achieve our objectives. In other words, our objectives guide how we should specialize our project.


For example, in my job I often deal with numerical algorithms, visualization, and data exploration. I might code very similar things in very different environments and very different styles depending on the need. If I'm exploring an idea, I might use Mathematica or Processing. 

In these environments, I really know little about the details of memory allocations and the subtleties of code optimization. And I don't -want- to know. Even just being aware of them would be a distraction, as I would naturally gravitate towards coding efficient algorithms instead of just solving the problem at hand.

Oftentimes my Mathematica code actually leaks memory. I couldn't care less when running an exploratory task overnight on a workstation with 92 GB of RAM. The environment completely shields me from these concerns, and this is perfect: it allows me to focus on what matters in that context. I write some very high-level code, and somehow magic happens.


Sometimes I then have to port these experiments to production C++ code. In that environment, my goals are completely different. Performance is so important to us that I don't want any magic; I want anything that is even remotely expensive to be evident in the location where it happens. If there were some magic that worked decently fast most of the time, you can be sure that the problems it creates would go unnoticed until there are so many locations where it happens that the entire product falls apart.


I don't believe that you can create systems that are extremely wide, where you have both extremely high-level concerns and extremely low-level ones, jack-of-all-trades. Constraints and specialization are key to software engineering (and not only), they allow us to focus on what matters, keeping the important concerns in our working set and to perform local reasoning on code.


All levels


Another aspect of over-engineering is that it doesn't just affect minute code design decisions, or even just coding. In general, we have a tendency to do things without proper awareness, I think, of what problems they solve for us and what problems they create. Instead, we're often guided either by a certain aesthetic or by certain ideals of what's good.


Code sharing, for example, and de-duplication. Standards and libraries. There are certain things that we sometimes consider intrinsically good, even when we have a history of failures from which we should learn.


In engineering, sharing in particular is something that comes with an incredible cost but is almost always considered a virtue per se, even by teams which have experience actually paying the price in terms of integration costs, productivity costs, code bloat and so on; it came to be considered just "natural".


"Don't reinvent the wheel" is very true and sound. But "the wheel" to me means "cold", infrastructural code that is not subject to iteration, that doesn't need specialization for a given project. 

Thinking that sharing and standardization is always a win is like thinking that throwing more people at a problem is always a win, or that making some code multithreaded is always a win, regardless of how much synchronization it requires and how much harder it makes the development process...

In a videogame company, for example, it's certainly silly to have ten different math libraries for ten different projects. But it might very well not be silly to have ten different renderers. Or even twenty, for that matter: rendering is part of the creative process, it's part of what we want to specialize, to craft to a given art direction, a given project scope and so on.


People


Context doesn't matter only on a technical level, but also, or perhaps even more, on a human level. Software engineering is a soft science!


I've been persuaded of this having worked on a few different projects in a few different companies. Sometimes you see a company using the same strategy for similar projects, only to achieve very different results. Other times, similar results are obtained by different products in different companies employing radically different, almost opposite strategies. Why is that?


Because people matter more than technology. And this is perhaps the thing that we, as software engineers, are trained the least to recognize. People matter more than technology.


A team of veterans does not work the same as a team that has or needs a lot of turnover. In the game industry, in some teams innovation is spearheaded by engineers, in some others, it's pushed by artists or technical artists.

A given company might want to focus all their investment in a few, very high profile products, where innovation and quality matters a lot. Another might operate by producing more products and trying to see what works, and in that realm maybe keeping costs down matters more.

Even the mantras of sharing and avoiding duplication are not absolute. In some cases, duplication actually allows for better results, e.g. having an environment for experimentation that is separate from final production. In some cases sharing stifles creativity, and has upkeep costs that overall are higher than the benefits.


It's impossible to talk about engineering without knowing costs, benefits, and context. There is almost never a universally good solution. Problems are specific and local.

Engineering is about solving concrete problems in a specific context, not jumping carelessly on the latest bandwagon.

Our industry, I feel, still has lots to learn.

21 October, 2016

Cinematography, finally!

I made some screen grabs from the recently released Red Dead Redemption 2 announcement trailer.

These were quite hastily and heavily processed (sorry) in an attempt to make the compression artifacts from the video show less.

Basically these are an average of 5-6 frames, a poor man's temporal anti-aliasing, let's say (and the general blurring helps to focus on the overall balance of the image instead of being distracted by small details), plus tons of film grain to hide blocking artifacts.
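For the curious, the processing amounts to something like the rough numpy/PIL sketch below; the file names and the grain amplitude are made up for illustration, this is just the frame-averaging-plus-grain idea, not the exact script I used.

import numpy as np
from PIL import Image

# Hypothetical file names: any handful of consecutive frame grabs would do.
frames = [np.asarray(Image.open(f"grab_{i}.png"), dtype=np.float32) for i in range(6)]
avg = np.mean(frames, axis=0)                      # averaging = poor man's temporal AA
grain = np.random.uniform(-12.0, 12.0, avg.shape)  # heavy grain to mask blocking artifacts
out = np.clip(avg + grain, 0, 255).astype(np.uint8)
Image.fromarray(out).save("processed.png")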

I am in love. Tasteful, amazing frames that don't look like something that was just arbitrarily color-graded to make it seem somewhat cool last minute like a bad Instagram filter.

Enjoy. Ok, let me spend a couple more words...

Some took this to be a technical comparison between titles. In a way it is, but it's not a comparison of resolution and framerate, or the amount of leaves rendered, texture definition and so on.

In all these respects I'm sure there are plenty of engines that do great, even better than RDR2, especially engines made to scale to PC.

Thing is, who cares? This shouldn't matter anymore. We're past the point where merely drawing more pixels or more triangles is where technology is at. We're not writing tri-fillers, we're making images. Or at least, we should be.

Technology should be transparent. Great rendering should not show; great rendering happens when you can't point your finger and name a given feature or effect implementation. We are a means to an end, and the end is what matters. And it is a matter of technology, of research and of optimization. But a different one.

It is absolutely easier to push a million blades of grass on screen than to make a boring, empty room that truly achieves a given visual goal.

It is absolutely easier to make a virtual texturing system that allows for almost one-to-one texel to pixel apparent resolution than to accurately measure and reproduce a complex material.

It is absolutely easier to make a hand-optimized, cycle-tight uber post-effect pipeline than to write artist tools for actual rapid iteration and authoring of color variants in a scene.

I'm not claiming that this particular game does any of these things right. Maybe it's just great art direction. Maybe it's achieved by lots of artists iterating on the worst engine and tools possible. Or maybe it's years of R&D; it's very hard to tell, even more so from a teaser trailer (albeit it totally looks legit real-time on the platform). But that's the point: great rendering doesn't show, it just leaves great images.

And I hope that more and more games (RDR is certainly not the only one) and art departments and engineers do realize that, realize the shift we're seeing. Rendering hasn't "peaked" for sure, but we need to shift our focus more and more towards the "high level".

We solve concrete problems, technology otherwise is useless. And, yes, that means that you can't do good tech without artists because you won't ever have great images without great art. Working in isolation is pointless.

And now... Enjoy!















The first Red Dead Redemption also holds up quite well all things considered... These are respectively from the introduction cinematic and one of the first missions:



And as "bonus images", some screens I've found from the recent Battlefield 1 and The Witcher 3.




01 October, 2016

More (silly) tone-mapping ideas

As a follow-up to my previous post I'll show here two more tone-mapping tricks. They are both "silly" in a way, they were done quickly and mostly while waiting for more important things to build, so they are far from "production quality".

They both are somewhat inspired by photography (but make no effort at all to simulate photographic effects, rather, they take inspiration from observing how certain things work), and I believe they could all be part (including the ideas of the previous post) of a single tone-mapping operator. 
All dynamic range compression algorithms have trade-offs and artifacts, so I believe it's not unwise to layer various methods each doing just a bit of compression, to achieve an overall larger reduction.

"Adaptive ND-Filter"

The idea of doing this test came from a discussion with my Italian friend Manuele Bonanno, as we were chatting about Bart's TM post (which I linked in the previous article).

In landscape photography it's not uncommon to use graduated neutral density filters. An ND filter reduces all wavelengths of light equally, darkening the image without shifting colors, and can be used for particular effects like very long exposures, or to be able to use wide lens apertures in daylight.
A graduated ND filter does the same but using a gradient, and it's used typically to darken the sky (top half) relative to the other elements of the image.


A GND filter - from Tiffen Filters marketing material

The interesting thing to notice is that even if in photography these filters are made with a fixed gradient, in practice the effect is smooth enough that it can make a big difference on the final image without being noticeable. 
Our vision system is not good at detecting very low-frequency gradients in images, so below a given spatial frequency we can "cheat" liberally without visible artifacts.

How to apply this in computer graphics is rather obvious: let's just blur everything a lot and use that as a base for a localized tone-mapping operator (in the test below, I just scale the exposure by the blurred version of the image a bit, then apply global TM as usual).
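In rough numpy terms the whole trick is only a few lines. This is just a sketch of the idea, not production code: the function and parameter names, the blur sigma and the 0.18 "key" are placeholders, and the blur has to be wide enough to stay below the frequencies where halos would show.

import numpy as np
from scipy.ndimage import gaussian_filter

REC709 = np.array([0.2126, 0.7152, 0.0722])

def reinhard(x):
    return x / (1.0 + x)  # simple global tone map on linear radiance

def adaptive_nd(hdr, sigma=64.0, strength=0.5, key=0.18):
    # 'hdr' is a linear HxWx3 radiance image.
    lum = hdr @ REC709                      # Rec.709 luminance
    base = gaussian_filter(lum, sigma)      # very low-frequency "ND gradient"
    nd = (key / (base + 1e-6)) ** strength  # darken where the blurred image is bright
    graded = hdr * nd[..., None]            # neutral (grayscale) filter: same scale on R, G, B
    return reinhard(graded)                 # then global tone mapping as usual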

Doing local tone-mapping with a gaussian blur average usually yields pretty strong haloing artifacts (and a lot of photographic HDR toning software actually is plagued by such halos), and my previous article tried to show an alternative that is both stupidly cheap and that can't result in haloing (as it's not using neighboring operators at all).


Without ND
With ND (save the images and A/B them by alternating the two)

The "insight" here is that you can also prevent haloing with a gaussian blur, as long as you blur enough that you're not at the frequencies where haloing shows. And, of course, if you have a good quality bloom/veil effect, you can re-use the image pyramids from that almost directly. Just remember to apply a -neutral- (grayscale) ND!


The gaussian filter approximated with image pyramids

Used tastefully and in a temporally stable way I think it can work, and indeed my colleague Michal confirmed that he knows of titles doing something similar.
I did a quick test using COD:AW and the effect indeed works really well, often actually even better indoors than outdoors, reducing the amount of "auto-exposure" needed (which we don't push very far typically anyway).

Film Grain

What? Film grain is not about dynamic range, right? It's usually an artistic effect, at best it can be seen as a dithering method that can help avoiding Mach band artifacts. Right?


A detail from a film photo by Trent Parke

Well, not really. If you look at an actual photo you'll notice that grains mostly show in midtones, or rather, that pure whites and pure blacks either saturate grains or have no grains exposed, and thus appear as solid regions.


Thresholded version

This means that if we look at a thresholded version of a good film scan, we can see a spatial dithering around the threshold.
We have some black areas that have black grains but are not fully saturated with black, and then we have "blacker than black" areas where the same black grains appear spatially clustered to create an entirely saturated region (and the same happens in the highlights as well). 

This suggests that we can use dithering patterns not only to get more precision in the midtones, but also to extend a bit the range of highlights and shadows. If a dither pattern is symmetric and equally distributed around zero (as it should be), sometimes it will add luminosity and sometimes it will subtract it. 
So if an area is white, but not whiter-than-white, the times the pattern subtracts will create darker pixels. In areas that stay whiter than white by more than the intensity of the dither, even after subtraction we will still end up with full white, and we will see saturated areas. 
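A minimal sketch of the idea, assuming a generic Reinhard-style curve with black/white levels as the tone map (the exact curve, grain distribution and amplitude here are placeholders):

import numpy as np

def display_with_grain(hdr, black=0.02, white=4.0, grain_amp=0.03, rng=None):
    # 'hdr' is linear radiance, any shape (e.g. HxWx3 for per-channel RGB grain).
    rng = np.random.default_rng() if rng is None else rng
    tm = hdr / (1.0 + hdr)                               # Reinhard-style global curve
    tm = (tm - black) / (white / (1.0 + white) - black)  # black/white levels for contrast
    grain = rng.uniform(-grain_amp, grain_amp, tm.shape) # symmetric, zero-mean grain
    # Pixels just above 1 (or just below 0) occasionally fall back into range when the
    # grain subtracts (or adds), revealing a little detail past the nominal clipping points.
    return np.clip(tm + grain, 0.0, 1.0)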



The image above is a quick test of the theory. On the left, it uses just the simple, good old Reinhard tone-mapping (with black and white levels used to get contrast). The image on the right uses the exact same tone-mapping but also adds some RGB film grain. 
If you look at the shadows and the highlights you can notice that there is a bit more detail visible in the film-grain version (e.g. count the number of visible beams in the glass ceiling).

This is just a quick test; I actually tweaked the film grain to be less intense in the midtones (around 0.5) than near the extremes (0 and 1), to be able to push it a bit further without it being too intense over the entire image. It can certainly be done in much better ways, but as far as silly experiments go, I'm rather happy. 
If you really wanted to cheat, you could even just add grain in the shadows, which would definitely be a hack as you're adding energy, but hey... it's not that we don't routinely do that (e.g. bloom/veil usually doesn't redistribute energy, it just adds it...)

P.s. real film grain is actually not easy to emulate (and I'm not trying to, here). This is the only attempt I know of.

11 September, 2016

Tone mapping & local adaption

My friend Bart recently wrote a series of great articles on tone mapping, showing how localized models can be useful in certain scenarios, and I strongly advise heading over to his blog to read those first; they motivated me to finally dump some of my notes and small experiments on the blog...

Local tone mapping refers to the idea of varying the tone mapping per pixel, instead of applying a single curve globally on the frame. Typically the curve is adapted by using some neighborhood-based filter to derive a localized adaption factor, which can help preserve details in both bright and dark areas of the image.

At the extreme, local operators give the typical "photographic HDR" look, which looks very unnatural as it's trying to preserve as much texture information as possible to the detriment of brightness perception.

HDR photography example from lifehacker

While these techniques help display the kind of detail we are able to perceive in natural scenes, in practice the resulting image only exacerbates the fact that we are displaying our content on a screen that doesn't have the dynamic range that natural scenes have, and the results are very unrealistic.

That's why global tonemapping is not only cheaper but most often appropriate for games, as we aim to preserve the perception of brightness even if it means sacrificing detail.
We like brightness preservation so much in fact, that two of the very rare instances of videogame-specific visual devices in photorealistic rendering are dedicated to it: bloom and adaption.


Veiling glare in COD:AW

Most photorealistic videogames still, and understandably, borrow heavily from movies in terms of their visual language (even more so than from photography), but you will never see in movies the heavy-handed exposure adaption videogames employ, nor will you normally see what we usually call "bloom" (unless for special effect, like in a dreamy or flashback sequence).

Of course, a given amount of glare is unavoidable in real lenses, but that is considered an undesirable characteristic, an aberration to minimize, not a sought-after one! You won't see a "bloom" slider in lightroom, or the effect heavily used in CG movies...


An image used in the marketing of the Zeiss Otus 24mm,
showing very little glare in a challenging situation

Even our eyes are subject to some glare, and they do adapt to the environment brightness, but this biological inspiration is quite a weak justification for these effects: no game really cares to simulate what our vision does, and even if that were done it would not necessarily improve perceived brightness. Simulating a process that happens automatically for us, on a 2D computer screen that is then seen through our eyes, doesn't trigger in our brain the same response as the natural one does!


Trent Parke

So it would seem that all is said and done, and we have a strong motivation for global tonemapping. Except that, in practice, we are still balancing different goals!
We don't want our contrast to be so extreme that no detail can be seen anymore in our pursuit of brightness representation, so typically we tweak our light rigs to add lights where we have too much darkness, and to dim lighting where it would be too harsh (e.g. the sun is rarely at its true intensity...). Following cinematography yet again.

Gregory Crewdson

How does local tone mapping factor into all this? The idea of local tone mapping is to operate the range compression only at low spatial frequencies, leaving high-frequency detail intact. 
On one hand this makes sense because we're more sensitive to changes in contrast at high frequencies than we are at lower ones, so the compression is supposed to be less noticeable done that way.

Of course, we have to be careful not to create halos, which would be very evident, so typically edge-aware frequency separation is used, for example employing bilateral filters or other more advanced techniques (Adobe, for example, employs local Laplacian pyramids in their products, afaik).
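For reference, the basic base/detail separation looks something like the sketch below. A plain gaussian stands in for the edge-aware filter, which is exactly what would produce halos near strong edges in practice; imagine a bilateral/guided filter or a local Laplacian pyramid in its place, and treat the sigma and compression values as placeholders.

import numpy as np
from scipy.ndimage import gaussian_filter

def local_tonemap_luminance(lum, sigma=16.0, compression=0.6, eps=1e-6):
    # Split a low-frequency "base" from high-frequency "detail" in log space,
    # compress only the base, keep the detail intact, then recombine.
    log_l = np.log(lum + eps)
    base = gaussian_filter(log_l, sigma)   # stand-in for an edge-aware filter
    detail = log_l - base
    return np.exp(base * compression + detail)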

But this kind of edge-aware frequency separation can also be interpreted in a different way. Effectively, what it's trying to approximate is an "intrinsic image" decomposition, separating the scene reflectance from the illuminance (e.g. Retinex theory).

So if we accept that light rigs must be tweaked compared to a strict adherence to measured real scenes, why can't we directly change the lighting intensity with tone mapping curves? And if we accept that, and we understand that local tonemapping is just trying to decompose the image lighting from the textures, why bother with filters, when we have computed lighting in our renderers?

This is a thought I've had for a while but didn't really put into practice, and more research is necessary to understand exactly how one should tone-map lighting. But the idea is very simple, and I did do some rough experiments with some images I saved from a testbed.

The decomposition I employ is trivial. I compute an image that is the scene as it would appear if all the surfaces were white (so it's diffuse + specular, without multiplying by the material textures) and call that my illuminance. Then I take the final rendered image and divide it by the illuminance to get an estimate of reflectance.

Note that looking at the scene illuminance would also be a good way to do exposure adaption (it removes the assumption that surfaces are middle gray), even if I don't think adaption will ever be great done in screen-space (and if you do it that way, you should really look at the anchoring theory); it's better to look at more global markers of scene lighting (e.g. light probes...).
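As a tiny sketch of what I mean: the classic log-average exposure estimate, but fed with the illuminance image instead of the final frame (the 0.18 key is the usual placeholder value).

import numpy as np

def exposure_from_illuminance(illum_lum, key=0.18, eps=1e-6):
    # Log-average of the *lighting* luminance (not the final frame), so the
    # exposure estimate no longer assumes surfaces average to middle gray.
    log_avg = np.exp(np.mean(np.log(illum_lum + eps)))
    return key / log_avg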


Once you compute this decomposition, you can then apply a curve to the illuminance, multiply the reflectance back in, and do some global tone mapping as usual (I don't want to do -all- the range compression on the lighting). 
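In rough numpy terms, this is a sketch of the whole pipeline; the power curve and its exponent are placeholders, since picking the right curve and midpoint is exactly the open question.

import numpy as np

def tonemap_via_lighting(final_rgb, illum_rgb, lighting_gamma=0.75, eps=1e-6):
    # 'illum_rgb' is the scene rendered as if every surface were white
    # (diffuse + specular, no material textures); 'final_rgb' is the normal frame.
    reflectance = final_rgb / (illum_rgb + eps)    # estimate of the material response
    illum_c = np.power(illum_rgb, lighting_gamma)  # compress only the lighting
    recombined = reflectance * illum_c             # multiply the reflectance back in
    return recombined / (1.0 + recombined)         # finish with a global curve, so not all
                                                   # of the compression is on the lighting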


Standard Reinhard on the right, lighting compression on the left.
The latter can compress highlights more while still having brighter mids.

I think with some more effort this technique could yield interesting results. It's particularly tricky to understand what's a proper midpoint to use for the compression, what kind of curve to apply to the lighting versus globally on the image, and how to do all this properly without changing color saturation (an issue in tone mapping in general), but it could have its advantages.

In theory, this lies somewhere in between the idea of changing light intensities and injecting more lights (which is still properly "PBR"), and the (bad) practices of changing bounce intensity and other tweaks that mess with the lighting calculations. We know we have to respect the overall ratios of shadow (ambient) to diffuse light, to specular highlights (and that's why tweaking GI is not great). 
But other than that, I don't think we have any ground to say what's better, that is to say, what would be perceptually closer to the real scene as seen by people. For now, these are all tools for our artists to try, to figure out how to generate realistic images.

I think we too often limit ourselves by trying to emulate other media. In part, this is reasonable, because trying to copy established visual tools is easier both for the rendering engineer (less research) and for the artists (more familiarity). But sooner or later we'll have to develop our own tools for our virtual worlds, breaking free of the constraints of photography and understanding what's best for perceptual realism.

"Conclusions"

Adding lights and changing the scenes is a very reasonable first step towards dynamic range compression: effectively we're trying to create scenes that have a more limited DR to begin with, so we work with the constraints of our output devices.

Adding fill lights can also make sense, but we should strive to create our own light rigs; there's very little reason to constrain ourselves to points or spots for these. Also, we should try to understand what could be done to control these rigs. In movies, they adapt per shot. In games we can't do that, so we have to think about how to expose controls to turn these on and off, and how to change lighting dynamically.

Past that, there is tone-mapping and color-grading. Often I think we abuse these because we don't have powerful tools to quickly iterate on lighting. It's easier to change the "mood" of a scene by grading rather than by editing light groups and fill lights, but there's no reason for that to be true! 
We should re-think bloom. If we use it to represent brightness, are the current methods (which are inspired by camera lenses) the best? Should we experiment with different thresholds, or maybe with subtracting light instead of just adding it (e.g. bringing the scene brightness slightly down in a large area around a bright source, to increase contrast...)?
And also, then again, movies are graded per shot, and by masking (rotoscoping) areas. We can easily grade based on distance and masks, but it's seldom done.

Then we have all the tools that we could create beyond things inspired by cinematography. We have a perfect HDR representation of a scene. We have the scene components. We can create techniques that are impossible when dealing with the real world, and there are even things that are impractical in offline CGI which we can do easily on the other hand. 
We should think of what we are trying to achieve in visual and perceptual terms, and then find the best technical tool to do so.

28 August, 2016

A retrospective on Call of Duty rendering

In my last post I did a quick recap of the research Activision published at Siggraph 2016. As I already broke my long-standing tradition of never mentioning the company I work for on my blog, I guess it wouldn't hurt, for completeness sake, to do a short "retrospective" of sorts, recapping some of the research published about the past few Call of Duty titles. 

I'll remark that, as always, the opinions on this blog are my own, and my understanding of COD is very partial as I don't sit in production for any of the games.

I think COD doesn't often do a lot of "marketing" of its technology compared to other games, and I guess it makes sense for a title that sells so many copies to focus its marketing elsewhere and not pander to us hardcore technical nerds. I love the work Activision does on the trailers in general (if you haven't seen the live-action ones, you're missing out), but still, it's a shame that very few people consider what tricks come into play when you have a 60hz first-person shooter on PS3.


Certainly, -I- didn't! But my relationship with COD is also kinda odd: I was fairly unaware of and uninterested in it until someone lent me the DVD (of MW2, I think) for 360 a long time after release, and from there I binge-played MW1 and Black Ops... Strictly single-player, mostly for their -great- cinematic atmosphere (COD MW2 is probably my favorite of the previous generation in that regard, together with Red Dead Redemption), and never really trying to dissect their rendering.
By the way, if you missed out on COD single player, I think this critique of the campaign over the various titles is outstanding, worth your time.

Thing is, most games near the end of the 360/ps3 era went on to adopt new deferred rendering systems, often without, in my opinion, having solid reasons to do so. 

COD instead stayed on a "simple" single-pass forward rendering, mostly with a single analytic light per object. 
Not much to talk about at Siggraph there (with some exceptions), but with a great focus on mastering that rendering system: aggressive mesh splitting per light and material (texture layer groups), an engine -very- optimized to emit lots of drawcalls, and lightmapped GI (which manages to be quite a bit better than most of the no-GI, many-light deferred engines of the time).

Modern Warfare 2

IMHO a lesson on picking the right battles and mastering a craft, before trying to add a lot of kitchen-sink features without much reasoning, which is always very difficult to balance in rendering (we all want to push more "stuff" in our engines, even when it's actually detrimental, as all unneeded complexity is).

Call of Duty: Ghosts

The way I see Ghosts is as a transition title. It's the first COD to ship on next-gen consoles, but it still had to be strong on current-gen, which is to be expected for a franchise that needs a large install base to be able to hit the numbers it hits. 

Developers now had consoles with lots more power, but still had to take care of asset production for the "previous" generation while figuring out what to do with all the newfound computational resources.

For Ghosts, Infinity Ward pushed a lot on the geometrical detail rather than doing much more computation per pixel and that makes sense as it's easier to "scale" geometry than it is to fit expensive rendering systems on previous-gen consoles. This came with two main innovations: hardware displacement mapping and hardware subdivision (Catmull-Clark) surfaces.



Both technologies were a considerable R&D endeavor and were presented at Siggraph, GDC and in GPU Pro 7.

Albeit both are quite well known and researched, neither was widely deployed on console titles, and the current design of the hardware tessellator on GPUs is of limited utility, especially for displacement, as it's impossible to create subdivision patterns that match well the frequency detail of the heightmaps (this is currently, afaik, the state-of-the-art, but doesn't map to tiled and layered displacement for world detail).



Wade Brainerd recently presented, with Tim Foley et al., some quite substantial improvements to hardware Catmull-Clark surfaces and made a proposal for a better hardware tessellation scheme at this year's Open Problems in Computer Graphics.

A level in COD:Ghosts, showcasing displacement mapping

For the rest, Ghosts is still based on mesh-splitting, single-pass forward shading, and non-physically based models (Phong, which the artists were very familiar with).

Personally, I have to say that IW's artists did pull the look together and Ghosts can look very pretty, but, in general, the very "hand-painted" and color-graded look is not the art style I prefer.

Call of Duty: Advanced Warfare

Advanced Warfare is the first COD to have its production completely unrestricted by "previous-gen" consoles, and compared to Ghosts it takes an approach that is almost a complete opposite, spending much more resources per pixel, with a completely new lighting/shading/rendering system.

At its core is still a forward renderer but now capable of doing many lights per surface, with physically based shaders. It does even more "mesh splits" (thus more drawcalls) and generates a huge amount of shader permutations to specialize rendering exactly for what's needed by a given piece of geometry (shadows, lighting, texturing and so on...).

We did quite a bit of research on the fundamental math of PBR for it, and it employs an entirely new lightmap baking pipeline, but where it really makes a huge difference, in my opinion, is not in the rendering engine per se, but in its keen dedication to perceptual realism.



It's a physically based renderer done "right": doing PBR math right is relatively "easy"; teaching PBR authoring to artists is much harder (arguably, not something that can be done over a single product, even), but making sure that everything makes (perceptual) sense is where the real deal is, and Sledgehammer's attitude during the project was just perfect.

Math and technology of course matter, and checking the math against ground-truth simulations is very important, but you can make a perfectly realistic game with empirical math and a proper understanding of how to validate (or fit) your art against real-world reference, and you can, on the other hand, produce a completely "wrong" rendering out of perfectly accurate math...

"Squint-worthy" is how Sledgehammers's rendering lead Danny Chan calls AW's lighting quality.
Things make sense, they are overall in the right brightness ratios

Advanced Warfare also introduced many other innovations: Jorge Jimenez authored a wonderful post-effect pipeline (he really doesn't do anything unless it's better than the state of the art!), and AW also adopted a new version of our perpetually improving performance capture (and face rendering) technology, developed in collaboration with ICT.

But I don't think that any single piece of technology mattered for AW more than the studio's focus on perceptual realism as a rendering goal. And I love it!

Call of Duty: Black Ops 3 

I won't talk much about Black Ops 3, also because I already did a post on Siggraph 2016 where we presented lots of rendering innovations done during the previous few years. 
The only presentation I'd like to add here, which is a bit unrelated to rendering, is this one: showing how to fight latency by tweaking animations.

Personally, I think that if Sledgehammer's COD is great in its laser focus on going very deep on a single objective, Black Ops 3 is just crazy. Treyarch is crazy! Never before have I seen so many rendering improvements pushed on such a large, important franchise. Thus I wouldn't really know where to start... 

BO3 notably switched from forward to deferred, but I'd say that most of what's on screen is new, both in terms of being coded from scratch and often also in terms of being novel research. Even behind the scenes lots of things changed: tools, editors, even the way it bakes GI is completely unique and novel.



If I had to pick, I guess I could narrow BO3's rendering philosophy down to unification and productivity. Everything is lit uniformly, from particles to volumetrics to meshes, there are a lot fewer "rendering paths", and most of the systems are easier to author for.


All this sums up to a very coherent rendering quality across the screen, it's impossible to tell dynamic objects from static ones for example, and there are very few light leaks, even on specular lighting (which is quite hard to occlude).

Stylistically, I'd say, to my eyes, it's somewhere in between AW and Ghosts. It's not quite as arbitrarily painted as Ghosts, and it is a PBR renderer done paying attention to its accuracy, but the data is quite a bit more liberally art-directed, and the final rendered frames lean more towards a filmic depiction than a close adherence to perceptual realism.

Call of Duty: Infinite Warfare

...is not out yet, and I won't talk about its rendering at all, of course!

It's quite impressive though to see how much space each studio has to completely tailor their rendering solutions to each game, year after year. COD is no Dreams, but compared to titles of similar scope, I'd say it's very agile.

So rest assured, it's yet again quite radically different than what was done before, and it packs quite a few cute tricks... I can't imagine that most of them won't appear at a Siggraph or GDC, till then, you can try to guess what it's trying to accomplish, and how, from the trailers!



Three years, three titles, three different rendering systems, crafted for specific visual goals and specific production needs. The Call of Duty engines don't even have a name (not even internally!), but a bunch of very talented, pragmatic people with very few artificial constraints put on what they can change and how.