
28 June, 2014

Stuff that every programmer should know: Data Visualization

If you're a programmer and visualization isn't one of the main tools in your belt, then good news: you just found an easy way to improve your skill set. Really, it should be taught in any programming course.

Note: This post won't get you from zero to visualization expert, but hopefully it can pique your curiosity and will provide plenty of references for further study.

Visualizing data has two main advantages compared to looking at the same data in a tabular form. 

The first is that we can pack more data into a graph than we can get by looking at numbers on a screen, even more so if we make our visualizations interactive, allowing exploration inside a data set. Our visual bandwidth is massive!

This is also useful because it means we can avoid (or rely less on) summarization techniques (statistics) that are by their nature "lossy" and can easily hide important details (Anscombe's quartet is the usual example).

Anscombe's quartet, from Wikipedia. The data sets have the same statistics, but are clearly different when visualized.

The second advantage, which is even more important, is that we can reason about the data much better in a visual form. 

0.2, 0.74, 0.99, 0.87, 0.42, -0.2, -0.74, -0.99, -0.87, -0.42, 0.2

What's that? How long do you have to think to recognize a sine wave in those numbers? You might start reasoning about the symmetries, 0.2, -0.2, 0.74, -0.74, then the slope and so on, if you're very bright. But how long do you think it would take to recognize the sine if you plotted that data on a graph?
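As a minimal sketch of what I mean (the file name and the choice of grapher are arbitrary): dump the numbers to a CSV and throw them at whatever plotting tool is at hand, gnuplot, a spreadsheet, anything.

#include <stdio.h>

/* Dump the sequence above to a CSV so any graphing tool can plot it:
   the sine shape jumps out immediately. */
int main(void)
{
    const float data[] = { 0.2f, 0.74f, 0.99f, 0.87f, 0.42f,
                          -0.2f, -0.74f, -0.99f, -0.87f, -0.42f, 0.2f };
    FILE* f = fopen("data.csv", "w");
    if (!f) return 1;
    for (int i = 0; i < (int)(sizeof(data) / sizeof(data[0])); ++i)
        fprintf(f, "%d,%f\n", i, data[i]);
    fclose(f);
    return 0;
}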

It's a difference of orders of magnitude. Like in a sci-fi B-movie: you've been using only 10% of your brain (not really); imagine if you could access 100%, interesting things would begin to happen.

I think most of us do know that visualization is powerful, because we can appreciate it when we work with it, for example in a live profiler.
Yet I've rarely seen people dump data from programs into graphing software, and I've rarely seen programmers who actually know the science of data visualization.

Visualizing program behaviour is even more important for rendering engineers, or for any code that doesn't just either fail hard or work right.
We can easily implement algorithms that are wrong but don't produce a completely broken output. They might just be slower (e.g. to converge) than they need to be, or noisier, or just not quite "right", causing our artists to try to compensate for our mistakes by authoring fixes in the art (this happens -all- the time) and so on.
And there are even situations where the output is completely broken, but it's just not obvious from looking at a tabular output; a great example of this is the structure of LCG random numbers.

This random number generator doesn't look good, but you can't tell from a table of its numbers...
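A minimal sketch of that idea, using the infamous RANDU LCG as the example generator (my choice, the post's image may use a different one): dump consecutive triples to a CSV and plot them as a 3D point cloud, and the numbers collapse onto a handful of planes, something a table will never show.

#include <stdio.h>

/* RANDU: x(n+1) = 65539 * x(n) mod 2^31, a famously bad LCG.
   Plotted as consecutive triples, the points fall on 15 planes. */
static unsigned int randu(unsigned int* state)
{
    *state = (65539u * *state) & 0x7fffffffu;
    return *state;
}

int main(void)
{
    unsigned int state = 1;
    FILE* f = fopen("randu_triples.csv", "w");
    if (!f) return 1;
    for (int i = 0; i < 10000; ++i)
    {
        double x = randu(&state) / 2147483648.0;
        double y = randu(&state) / 2147483648.0;
        double z = randu(&state) / 2147483648.0;
        fprintf(f, "%f,%f,%f\n", x, y, z);
    }
    fclose(f);
    return 0;
}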


- Good visualizations

The main objective of visualization is to be meaningful. That means choosing the right data to study a problem, and displaying it in the right projection (graph, scale, axes...).

The right data is the data that is interesting, that shows the features of our problem. What questions are we answering (purpose)? What data do we need to display?

The right projection is the one that shows such features in an unbiased, perceptually linear way, and that makes different dimensions comparable and possibly orthogonal. How do we reveal the knowledge the data is hiding? Is it x or 1/x? Log(x)? Should we study the ratio between quantities, or the absolute difference, and so on?

Information about both data and scale comes first of all from domain expertise. A light (or sound) intensity should probably go on a logarithmic scale, maybe a dot product should be displayed as the angle between its vectors; many quantities have a physical, perceptual or geometrical interpretation, and so on.
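A minimal sketch of what picking the right scale can mean in code (the constants and names here are just illustrative):

#include <math.h>

/* Log-remap an intensity so equal steps are closer to equal perceptual
   steps (a decibel-like scale; the reference value is arbitrary). */
float intensity_to_log(float intensity, float reference)
{
    return 10.0f * log10f(intensity / reference);
}

/* Display a dot product between unit vectors as the angle (in degrees)
   between them, instead of the raw cosine. */
float dot_to_angle_degrees(float unit_dot)
{
    return acosf(unit_dot) * 57.29577951f; /* 180/pi */
}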

But even more interestingly, information about the data can come from the data itself, by exploration. In an interactive environment it's easy to just dump a lot of data to observe, notice certain patterns, and refine the graphs and the data acquisition to "zoom in" on particular aspects. Interactivity is the key (as -always- in programming).


- Tools of the trade

When you delve a bit into visualization you'll find that there are two fairly distinct camps.

One camp is the visualization of categorical data, often discrete, with the main goal of finding clusters and relationships.
This is quite popular today because it can drive business analytics, operate on big data and in general make money (or pretty websites). Scatterplot matrices, parallel coordinate plots (very popular) and glyph plots (star plots) are some of the main tools.

Scatterplot, nifty for understanding which dimensions are interesting in a many-dimensional dataset

The other camp is the visualization of continuous data, often in the context of scientific visualization, where we are interested in representing our quantities without distortion, in a way that they are perceptually linear.

This usually employs mostly position as a visual cue, thus 2D or 3D line/surface or point plots.
These become harder as the dimensionality of our data increases, since it's hard to go beyond three dimensions. Color intensity and "widgets" can be used to add a couple more dimensions to points in a 3D space, but it's often easier to add dimensions through interactivity (i.e. slicing through the dataset by intersecting or projecting it on a plane) instead.

CAVE, soon to be replaced by the Oculus Rift
Both kinds of visualization have applications in programming. For deterministic processes, like the output or evolution in time of algorithms and functions, we want to monitor some data and represent it in an objective, undistorted manner. We know what the data means and how it should behave, and we want to check that everything goes according to what we think it should.
But there are also times where we don't care about exact values but seek insight into processes for which we don't have exact mental models. This applies to all non-deterministic issues, networking, threading and so on, but also to many things that are deterministic in nature but have complex behaviour, like memory hierarchy accesses and cache misses.


- Learn about perception caveats

Whatever your visualization is, though, the first thing to be aware of is visual perception: not all visual cues are useful for quantitative analysis.

Perceptual biases are a big problem: precisely because they are perceptual, we tend not to see them; we are just subconsciously drawn to some data points more than others when we should not be.


The Metacritic homepage has horrid bar graphs.
As the numbers are bright and sit below a variable-size image, games with longer images seem to have lower scores...

Beware of color, one of the most abused and misunderstood tools for quantitative data. Color (hue) is extremely hard to get right: it's very subjective and it doesn't express quantities or relationships well (which color is "less" than another?), yet it's used everywhere.
Intensity and saturation are not great either; again, very commonly used but often inferior to other cues like point size or stroke width.


From complexdiagrams


- Visualization of programs

Programs are these incredibly complicated projects we manage to carry forward, and as if that weren't challenging enough, we really love working on them in the most complicated ways possible.

So of course visualization is really limited. The only "mainstream" usage you will probably have encountered is in the form of bad graphs of data from static analysis: dependencies, modules, relationships and so on.

A dependency matrix in NDepend

Certainly if you have to look at your program's execution itself, it -has- to be text. Watch windows, memory views with hex dumps and so on. Visual Studio, which is probably the best debugger IDE we have, is not visual at all, nor does it allow for easy development of visualizations (it's even hard to grab data from memory in it).

We're programmers, so it's not a huge deal to dump data to a file or peek at memory [... my article], then visualize the output of our code with tools that are made for data.
But an even more important technique is to apply visualization directly to the behaviour of code, at runtime. This is really a form of tracing, which is most often limited to what's known as "printf" debugging.

Tracing is immensely powerful as it tells us at a high level what our code is doing, as opposed to the detailed inspection of how the code is running that we get from stepping in a debugger.
Unfortunately there is basically no tool today for the graphical representation of program state over time, so you'll have to roll your own. Working on your own source code it's easy enough to add some instrumentation to export data to a live graph; in my own experiments I don't use any library for this, I just write the simplest possible ad-hoc code to suck the data out.
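As a minimal sketch of that kind of ad-hoc instrumentation (the file name and macro are what I'd improvise on the spot, not a library): append named samples to a file and let an external grapher tail it.

#include <stdio.h>

/* Simplest possible tracing: append "name,frame,value" records to a
   file that an external grapher (or a spreadsheet) can tail and plot. */
static FILE* g_trace = 0;

#define TRACE(name, frame, value) \
    do { \
        if (!g_trace) g_trace = fopen("trace.csv", "w"); \
        if (g_trace) { \
            fprintf(g_trace, "%s,%d,%f\n", (name), (int)(frame), (double)(value)); \
            fflush(g_trace); /* so a live viewer sees data immediately */ \
        } \
    } while (0)

/* usage, somewhere in the code being studied:
   TRACE("shadow_ms", frameIndex, shadowPassMilliseconds); */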

Ideally, though, it would be lovely to be able to instrument compiled code; it's definitely possible, but much more of a hassle without the support of a debugger. Another alternative I sometimes adopt is to have an external application peek at regular intervals into my target process's memory.
It's simple enough, but it captures data at a very low frequency so it's not always applicable; I use it most of the time not on programs running in realtime, but as a live memory visualization while stepping through a debugger.
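On Windows that peeking approach can be sketched roughly like this (the pid and the address are placeholders; in practice they'd come from the target printing them, or from a map file/PDB):

#include <windows.h>
#include <stdio.h>

/* Periodically read a float array out of another process and dump it,
   so it can be graphed live. Assumes we already know pid and address. */
int main(void)
{
    DWORD pid = 1234;                        /* placeholder process id */
    void* remoteAddress = (void*)0x10000000; /* placeholder address    */
    float buffer[256];

    HANDLE process = OpenProcess(PROCESS_VM_READ, FALSE, pid);
    if (!process) return 1;

    for (;;)
    {
        SIZE_T read = 0;
        if (ReadProcessMemory(process, remoteAddress, buffer,
                              sizeof(buffer), &read) && read == sizeof(buffer))
        {
            for (int i = 0; i < 256; ++i)
                printf("%f%c", buffer[i], i == 255 ? '\n' : ',');
        }
        Sleep(100); /* low frequency, as noted: ~10Hz */
    }
    /* not reached in this sketch; CloseHandle(process) would go here */
}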

Apple's recent Swift language seems a step in the right direction, and it looks like it pulled some ideas from Bret Victor and Light Table.
Microsoft had a timid plugin for Visual Studio that did some very basic plotting (it doesn't seem to be actively updated) and another one for in-memory images, but what's really needed is the ability to export data easily and in realtime, as good visualizations usually have to be made ad-hoc for a specific problem.

Cybertune/Tsunami

If you want to delve deeper into program visualization, there is a fair bit written about it by academia, along with a few interesting conferences, but what's even more interesting to me is seeing it applied to one of the hardest coding problems: reverse engineering.
Perhaps it should not be surprising, as reversers and hackers are very smart people, so it's natural for them to use the best tools for the job.
It's quite amazing how much one can understand with very little other information, just by looking at visual fingerprints, data entropy and code execution patterns.
And again, visualization is a process of exploration: it can highlight patterns and anomalies to then delve into further, with more visualizations or with other tools.

Data entropy of an executable, graphed in Hilbert order, reveals the location of signing keys.
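A minimal sketch of the entropy side of that (leaving the Hilbert-order layout out): compute a sliding-window byte entropy over a file and dump it for plotting.

#include <stdio.h>
#include <math.h>

/* Shannon entropy (bits per byte) of 'len' bytes. */
static double entropy(const unsigned char* data, int len)
{
    int counts[256] = { 0 };
    double e = 0.0;
    for (int i = 0; i < len; ++i) counts[data[i]]++;
    for (int i = 0; i < 256; ++i)
    {
        if (!counts[i]) continue;
        double p = (double)counts[i] / len;
        e -= p * log2(p);
    }
    return e;
}

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    FILE* f = fopen(argv[1], "rb");
    if (!f) return 1;
    unsigned char window[4096];
    size_t n, offset = 0;
    while ((n = fread(window, 1, sizeof(window), f)) > 0)
    {
        /* one entropy sample per 4KB block: values near 8 bits/byte mean
           compressed or encrypted data (keys, packed sections...) */
        printf("%zu,%f\n", offset, entropy(window, (int)n));
        offset += n;
    }
    fclose(f);
    return 0;
}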


- Bonus links

Visualization is a huge topic and it would be silly to try to teach everything that's needed in a post, but I wanted to give some pointers, hoping to get some programmers interested. If you are, here are some more links for further study.
Note that most of what you'll find on the topic nowadays is either infovis and data-driven journalism (explaining phenomena via understandable, pretty graphics) or big-data analytics.
These are very interesting and I have included a few good examples below, but they are not usually what we seek; as domain experts we don't need to focus on aesthetics and communication, but on unbiased, clear quantitative data visualization. Be mindful of the difference.

- Addendum: a random sampling of stuff I do for work
All made either in Mathematica or Processing and they are all interactive, realtime.
Shader code performance metrics and deltas across versions 
Debugging an offline baker (raytracer) by exporting float data and visualizing it as point clouds
Approximation versus ground truth of BRDF normalization
Approximation versus ground truth of area lights
BRDF projection on planes (reasoning about environment lighting, card lighting)

25 June, 2014

Oh envmap lighting, how do we get you wrong? Let me count the ways...

Environment map lighting via prefiltered cubemaps is very popular in realtime CG.

The basics are well known:
  1. Generate a cubemap of your environment radiance (a probe, even offline or in realtime).
  2. Blur it with a cosine hemisphere kernel for diffuse lighting (irradiance) and with a number of phong lobes of varying exponent for specular. The various convolutions for phong are stored in the mip chain of the cubemap, with rougher exponents placed in the coarser mips.
  3. At runtime we fetch the diffuse cube using the surface normal and the specular cube using the reflection vector, forcing the latter fetch to happen at the mip corresponding to the material roughness (a minimal sketch of this runtime fetch follows right below the list).
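A minimal sketch of that runtime fetch, written as plain C pseudo-shader code (float3, the linear roughness-to-mip mapping and the texCube names in the closing comment are assumptions for illustration, not a specific engine's API):

/* Sketch of step 3: compute the reflection vector and the mip level for
   the prefiltered specular fetch. */
typedef struct { float x, y, z; } float3;

static float dot3(float3 a, float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* R = 2*(N.V)*N - V, with V pointing from the surface to the eye. */
static float3 reflect_view(float3 N, float3 V)
{
    float k = 2.0f * dot3(N, V);
    float3 R = { k*N.x - V.x, k*N.y - V.y, k*N.z - V.z };
    return R;
}

/* Mip to force in the texCubeLod fetch: rougher material -> coarser mip. */
static float roughness_to_mip(float roughness, int mipCount)
{
    return roughness * (float)(mipCount - 1);
}

/* Shader-side this becomes:
     diffuse  = texCube(irradianceCube, N)
     specular = texCubeLod(prefilteredCube, reflect_view(N, V),
                           roughness_to_mip(roughness, mipCount)) */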
Many engines stop at that, but a few extensions emerged (somewhat) recently, among them warping (localizing) the cubemap onto proxy geometry placed in the scene, and correcting the preintegrated specular to account for the full BRDF rather than just its distribution term.
Especially the last extension allowed a huge leap in quality and applicability; it's so nifty it's worth explaining for a second.

The problem with Cook-Torrance BRDFs is that they depend on three functions: a distribution function that depends on N.H, a shadowing function that depends on N.H, N.L and N.V, and the Fresnel function that depends on N.V.

While we know we can somehow handle functions that depend on N.H by fetching a prefiltered cube in the reflection direction (not really the same, but the same kind of difference there is between the Phong and Blinn specular models), anything that depends on N.V would add another dimension to the preintegrated solution (requiring an array of cubemaps), and we wouldn't know at all what to do with N.L, as we don't have a single light vector in environment lighting.

The cleverness of the solution that was found can be explained by observing the BRDF and how its shape changes when manipulating the Fresnel and shadowing components.
You should notice that the BRDF shape, and thus the filtering kernel on the environment map, is mostly determined by the distribution function, which we know how to tackle. The other two components don't change the shape much, but they scale it and "shift" it away from the H vector.

So we can imagine an approximation that integrates the distribution function with a preconvolved cubemap mip pyramid, and somehow relegates the other components into a scaling factor by preintegrating them against an all-white cubemap, ignoring how the lighting is actually distributed.
And this is the main extension we employ today: we correct the cubemap that has been preintegrated only with the distribution lobe with a (very clever) biasing factor.
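In the spirit of the split-sum / environment-BRDF formulation, that correction is usually applied as a scale and a bias looked up from a preintegrated 2D table; a minimal scalar sketch (the table itself is assumed to be generated offline):

/* Split-sum style correction, per color channel.
   prefiltered : the distribution-only preconvolved cubemap sample
   F0          : specular reflectance at normal incidence
   scale, bias : preintegrated environment-BRDF terms, looked up from a
                 2D table indexed by (N.V, roughness). */
static float corrected_specular(float prefiltered, float F0,
                                float scale, float bias)
{
    return prefiltered * (F0 * scale + bias);
}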

All good, and it works. But now, is all this -right-? Obviously not! I won't offer (just yet) solutions here, but can you count the ways we're wrong?
  1. First and foremost the reflection vector is not the half-vector, obviously.
    • The preconvolved BRDF expresses a radially symmetric lobe around the reflection vector, but a half-vector BRDF is not radially symmetric at grazing angles (when H!=N), it becomes stretched.
    • It's also different from its reflection-vector based counterpart even when R=H=N, but there it can be adjusted with a simple constant roughness modification (just remember to do it!).
  2. As we said, Cook-Torrance is not based only on a half-vector lobe.
    • We have a solution that works well, but it's based only on a bias, and while that accounts for the biggest difference between using only the distribution and using the full CT formulation, it's not the only difference.
    • Fresnel and shadowing also "push" the BRDF lobe so it doesn't reach its peak value in the reflection direction.
  3. If we bake lighting from points close enough that perspective matters, then discarding position dependence is wrong. 
    • It's true that perceptually it's hard for us to judge where lighting comes from when we see a specular highlight (good!), but for reflections of nearby objects the error can be easy to spot.
    • We can employ warping as we mentioned, but then the preconvolution is warped as well.
    • If for example we warp the cubemap by considering it representing light from a box placed in the scene, what we should do is to trace the BRDF against the box and see how it projects onto it. That projection won't be a radially symmetric filtering kernel in most cases.
    • In the "box" localized environment map scenario the problem is closely related to texture card area lights.
  4. We disregard occlusions.
    • Any form of shadowing of the preconvolved environment lighting that just scales it down is wrong, as occlusion should happen before prefiltering.
    • Still -DO- shadow environment map lighting somehow. A good way is to use screen-space (or voxel-traced) computed occlusion by casting a cone emanating from the reflection vector, even if that's done without considering roughness for the cone size, or somehow precomputing and baking some form of directional occlusion information.
    • Really this is still due to the fact that we use the envmap information at a point that is not the one from which it was baked.
    • Another good alternative to try to fix this issue is renormalization as shown by Call of Duty.
  5. We don't clip the specular lobe to the normal-oriented hemisphere.
    • So, even for purely radially symmetric BRDFs around the reflection vector (Phong), in an environment without occlusion, the approximations are not correct.
    • Not clipping is similar to the issues we have integrating area lights (where we should clip the area light when it dips below the surface horizon, but for the most part we do not).
    • This is expected to have a Fresnel-like effect - we are messing up the grazing angles.
    • A possible correction would be to skew the reflection vector away from the edges of the hemisphere, and shrink it (fit it to the clipped lobe).
  6. We disregard surface normal variance.
    • Forcing a given miplevel (texCubeLod) is needed, as mips in our case represent different lobes at different roughnesses, but that means we don't antialias that texture considering how normals change inside the footprint of a pixel (note: some HW gets this wrong even with regular texCube fetches).
    • The solution here is "simple", as it's related to the specular antialiasing we do by pushing normal variance into specular roughness (a sketch of one such roughness modification follows after this list).
    • But that line of thought, no matter the details, is also provably wrong (still -do- it). The problem is closely related to the "roughness modification" solution for spherical area lights and it suffers from the same issue: the proper integral of the BRDF with a normal cone is flatter than what we get at any roughness on the original BRDF.
    • Also, the footprint of the normals won't be a cone with a circular base; even what we get with the finite difference ddx/ddy approximation would be elliptical.
  7. Bonus: compression issues for cubemaps and dx9 hardware.
    • Older hardware couldn't properly do bilinear filtering across cubemap edges, thus leading to visible artifacts, which some corrected by making sure the edge texels were the same across faces.
    • What most don't consider, though, is that if we use a block-compression format on the cubemap (DXT, BCn and so on) there will be discontinuities between blocks, which will make the edge texels different again. Compressors in these cases should be modified so the edge blocks share the same reference colors.
    • Adding borders is better.
    • These techniques are relevant also for hardware that does bilinearly filter across cubemap edges, as that might be slower... Also, avoid using the very bottom mips...
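On point 6, one common roughness-modification is a Toksvig-style remap of the specular exponent from the length of the averaged normal; a minimal sketch (and, per the caveats above, a useful hack rather than the correct integral):

/* Toksvig-style specular antialiasing: shorten the exponent based on
   the length of the averaged (mipmapped) normal. normalLen is |N_avg|
   in [0,1]; specPower is the Blinn-Phong exponent. */
static float toksvig_power(float normalLen, float specPower)
{
    float ft = normalLen / (normalLen + specPower * (1.0f - normalLen));
    return ft * specPower;
}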
I'll close with some links that might inspire further thinking:
#physicallybasedrenderingproblems #naturesucks

16 June, 2014

Bonus round: languages, metaprogramming and terseness

People always surprise me. I didn't expect a lot out of my latest post, but instead it spawned many very interesting comments and discussions over Twitter and a few Reddit threads.

I didn't really want to talk about languages; I did that already a lot in the past. I wanted to make a few silly examples to the point of why we code in C++ and what could be needed to move us out of it (and only us: gamedevs, systems, low-level, AAA console guys, not the programming world in general, which often has already ditched C++ and sometimes is even perfectly ok with OO), but instead I got people being really passionate about languages and touting the merits of D, Rust, Julia (or even Nimrod and Vala).

Also to my surprise I didn't get any C++ related flame, nobody really trying to prove that C++ is the best possible language given the constraints, or arguing the virtues of OO and so on. It really seems most agree today, and we're actually ready and waiting for the switch!

Anyhow, I wanted to write an addendum to that post because it's related to the humanistic point of view I tried to take when talking about language design.

- Beauty and terseness

Some people started talking about meta-programming and in general expressiveness and terseness. I wanted to offer a perspective on how I see some language concepts in terms of what I do.

In theory, beauty in programming is simplicity, expressiveness and terseness. To a degree programming itself can be seen as data compression, so the more powerful our statements are, the more they express, the more they compress, the better.
But obviously this analogy only goes so far, as we wouldn't consider programming in an LZ-compressed code representation to be beautiful, even if it would truly be more terse (and it would be a form of meta-programming, even).

That's obviously because programming languages are not only made to be expressive, but also understandable by the meat bags that type them in, so there are trade-offs. And I think we have to be very cautious with them.

Take meta-programming for example: it allows truly beautiful constructions, the ability to extend your language's semantics and adapt it to your domain, all the way to embedded DSLs and the infamous homoiconicity dear to Lispers.
But as a side effect, the more you dabble in that, the more your language statements lose meaning in isolation (to the Lisp extreme, where there is almost no syntax to carry any meaning), and that's not great.

There might be some contexts where a team can accept to build upon a particular framework of interpretation of statements; they get trained in it and know that in this context a given statement has a given meaning.
To a degree we all need to get used to a codebase before really understanding what's going on, but anything that adds burden to the mental model of what things mean is a very hard trade.

For gamedev in particular it's quite important not only that A = B/C means A = B/C, but also that it is executed in a fixed way. Perhaps at times we overemphasize the need for control because of a given cultural background (for example, we often have to debate to persuade people that GC isn't evil, lack of control over heap allocation is), but undeniably the need does exist.

[Small rant] Certainly I would not be happy if B/C meant something silly like concatenating strings. I could possibly even tolerate "+" for that, because it's so common it has become a natural semantic, maybe stretching it even "|" or "&". But "/" would be completely fucked up. Unless you're a Boost programmer and are really furiously masturbating over the thought of how pretty it is to use "/" to concatenate paths, because directory dividers are slashes and you feel so damn smart.

That's why most teams won't accept metaprogramming in general and will allow C++ templates only as generics, for collections. And will allow operator overloading only for numeric types and basic operations.
...And why they don't like references if they are non-const (the argument is that a mutable reference parameter to a function can change the value of a variable, and that change is not syntactically highlighted at the call site; a better option would be an "out" annotation like C# or HLSL have). ...And why they don't like anything that adds complexity to the resolution rules of C++ calls, or exceptions, or the auto-generated operators of C++ classes, and thus should stay away also from R-value references.

- Meatbags

For humans certain constraints are good, they are actually liberating. Knowing exactly what things will do allows me to reason about them more easily than in languages that require lots of context and mental models to interpret statements. That's why going back to C is often as fun as discovering Python for the first time.

Now of course YMMV, and there are situations where we can truly tolerate more magic. I love dabbling with Mathematica: even if most of the time I don't exactly know how the interpreter will chain rules to compute something, it works, and even when it doesn't I can iterate quickly and try workarounds until it does.
Sometimes I really need to know more, and that's when you open a can of worms, but for that kind of work it's fine: it's prototyping, it's exploratory programming, it's fun and empowering and productive. And I'm fine not knowing and just kicking stuff, clicking buttons until things work, in those contexts; not everybody has to know how things work all the way down to the metal, and definitely not all the time, there are places where we should just take a step back...
But I wouldn't write an AAA console game engine that way. I need to understand, to have a precise, simple mental model that is "standard" and understood by everybody on a project, even new hires. C++ is already too complex for this to happen, and that's one of the big reasons we "subset" it into a manageable variant enforced by standards and linters.

Abstractions are powerful, but we should be aware of their mental cost, and maybe counter it with simple implementations and great tools (e.g. not making templates impossible to debug like they are in C++...), so that when they fail it doesn't feel like you're digging inside a compiler.

Not all language refinements impose a burden, so it's not that there can't be a language more expressive than C that is still similarly easy to understand; but many language decisions come with a trade-off, and I find the ones that loosen the relationship between syntax and semantics particularly taxing.

As the infamous Gang of Four wrote (and I feel dirty citing a book I'm so averse to): "highly parameterized software is harder to understand and build than more static software".

That's why, for increased productivity, these days I advocate seeking interactivity and zero iteration times, live-coding and live-inspection, over most other language features.

And before going to metaprogramming I'd advocate seeking solutions (if needed) in better type systems. C++ functors are just lambdas and first-class functions done wrong, parametric polymorphism should be bounded, auto_ptr is a way to express linear types, and so on... Bringing functionality into the language is often better than having a generic language that can be extended in custom ways. Better for the meatbags and for the machine (tools, compilers and so on).

That said, every tool is just that, a tool. Even when I diss OOP it's not that having OO is evil per se; really, a tool is a tool, the evil starts when people reason in certain ways and code in certain ways.
Sometimes the implementation of a given tool is also particularly, objectively broken. If you only know metaprogramming from C++ templates, which were just an ignorant attempt at generics gone very wrong (and which are still not patched today: concepts were rejected, and I don't trust them anyway to be implemented in a sane way), then you might be overly pessimistic.

But with great power often comes great complexity in really knowing what's going on, and sane constraints are often an undervalued tool; we often assume that fewer constraints will be a productivity win, and that's not true at all.

- Extra marks: a concrete example

When I saw Boost::Geometry I was so enraged I wanted to blog about it, but it's really so hyperbolically idiotic that I decided to take the high road and ignore it - "Non ragioniam di lor, ma guarda e passa" (let us not speak of them, but look, and pass on).

As an example I'll post here a much more reasonable use case someone showed me in a discussion. I actually have no qualms with this code; it's not even metaprogramming (just parametric types and unorthodox operator overloading) and could be appropriate in certain contexts, so it's useful to show some trade-offs.

va << 10, 10, 255, 0, 0, 255;

Can you guess what that is? I can't, so I would go and look at the declaration of va for guidance.

vertex_array < attrib < GLfloat, 2 >, attrib < GLubyte, 4 > > va;

Ok, so now it's clear, right? The code is taking numbers and packing them into an interleaved buffer for rendering. I can also imagine how it's implemented, but not with certainty, I'd have to check. The full code snippet was:

vertex_array < attrib < GLfloat, 2 >, attrib < GLubyte, 4 > > va;

va << 10, 10, 255, 0, 0, 255; // add vertex with attributes (10, 10) and (255, 0, 0, 255)
// ...
va.draw(GL_TRIANGLE_STRIP);

This is quite tricky to implement in a simpler C-style C++, also because it hits certain deficiencies of C: the unsafe variadic functions and the lack of array literals.
Let's try. One possibility is:

VertexType vtype = {GLfloat, 2, GLubyte, 4};
void *va = MakeVertexArray(vtype);

AddVertexData(&va, vtype, 10, 10, 255, 0, 0, 255, END);
Draw(va, vtype);

But that's still quite magical at the call site, not really any better. Can we improve? What about:

VertexType vtype = {GLfloat, 2, GLubyte, 4};
void *va = MakeVertexArray(vtype);

va = AddFloatVertexData(va, 10, 10);
va = AddByteVertexData(va, 255, 0, 0, 255);
Draw(va);

or better (as chances are that you want to pass vertex data as arrays here and there):

VertexType vtype = {GLfloat, 2, GLubyte, 4};
void *va = MakeVertexArray(vtype);

float vertexPos[] = {10, 10};
byte vertexColor[] = {255, 0, 0, 255};
va = AddVertexData(va, vertexPos, array_size(vertexPos));
va = AddVertexData(va, vertexColor, array_size(vertexColor));
Draw(va);

And that's basically plain old ugly C! How does this fare?

The code is obviously more verbose, true. But it also tells us exactly what's going on without the need of -any- global information; in fact we're hardly using types at all. We don't have to look around or add comments, and we can imagine from the call site exactly the logic behind the implementation.
It's not "neat" at all, but it's not "magical" anymore. It's also much more "grep-able", which is a great added bonus.

And now try to imagine the implementation of both options. How much simpler and smaller will the C version be? We gained clarity both at the call site and in the implementation, using a much less "powerful" paradigm!
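Just to make the comparison concrete, a minimal sketch of what the C-style implementation could look like (the names and the growable buffer here are my own simplifications, not the code from the discussion):

#include <string.h>
#include <stdlib.h>

/* A vertex array is just a growable byte buffer we append into;
   MakeVertexArrayBuffer/AppendVertexData are illustrative names. */
typedef struct
{
    unsigned char* data;
    size_t         size;
    size_t         capacity;
} VertexArray;

static VertexArray* MakeVertexArrayBuffer(size_t initialCapacity)
{
    VertexArray* va = (VertexArray*)malloc(sizeof(VertexArray));
    va->data = (unsigned char*)malloc(initialCapacity);
    va->size = 0;
    va->capacity = initialCapacity;
    return va;
}

/* Append raw attribute data; the caller knows types and counts. */
static void AppendVertexData(VertexArray* va, const void* src, size_t bytes)
{
    if (va->size + bytes > va->capacity)
    {
        va->capacity = (va->capacity + bytes) * 2;
        va->data = (unsigned char*)realloc(va->data, va->capacity);
    }
    memcpy(va->data + va->size, src, bytes);
    va->size += bytes;
}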

Other trade-offs could be made: templates without the overloading would already be more explicit, or we could use a fixed array class to pass data safely, for example. But the C-style version scores very well in terms of lines of code versus computation (actual work done, bits changed doing useful stuff) and locality of semantics (versus having to know the environment to understand what the code does).

An objection could be that the templated and overloaded version is faster, because it statically knows the sizes of the arrays and the types and so on, but it's quite moot. The -only- reason the template knows is really because it's inline, and it -must- be. The C-style version offers the option of being inline for performance, or not, if you don't need that and don't want the bloat.

It's true that the fancy typed C++ version is safer, and it is equally true that such safety could be achieved with a better type system. Not -all- language innovations carry a burden on the mental model of program execution.

Already in C99, for example, you could use variable-length arrays and compound literals to somewhat ameliorate the situation, but really a simple solution that would go a long way would be array literals and support for knowing the size of arrays passed to functions (an optional implicit extra parameter).
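For instance, with C99 compound literals the earlier call site could plausibly look like this (continuing the sketch above; AddVertexData and va are from that sketch, and the element count still has to be spelled out by hand, which is exactly the part the language doesn't help with):

va = AddVertexData(va, (float[]){ 10.0f, 10.0f }, 2);
va = AddVertexData(va, (unsigned char[]){ 255, 0, 0, 255 }, 4);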

Note that I wrote this in C++, but it's not about C++. Even in metaprogramming environments that are MUCH better than C++, like Lisp with hygienic macros, metaprogramming should be used with caution.
In Lisp it's easy to introduce all kinds of new constructs that look nifty and shorten the code, but each time you add one it's one more foreign syntax that is local to a context, which people have to learn and recognize. Not to be abused.
Also, this is valid for any abstraction; abstraction is always a matter of trade-offs. We sometimes forget this, and start "loving" the "beauty" of generic code even when it's not actually the best choice for a problem.

14 June, 2014

Where is my C++ replacement?

Nowadays I can safely say the OO fad, at least for the slice of the programming world I deal with, is over.
Not that we're not using classes anymore (and why shouldn't we), but most good studios don't think in OOP terms, and thanks to a few high-profile programmers who spoke up (more amusing reads in the "The rest and the C++ box of chocolate" section here) people are thinking about what programs do (transform data) instead of how to create hierarchies.
I can't remember the last time someone dared to ask about Design Patterns in a coding interview (or anywhere). Good.

Better yet, not only has OOP been under attack, but C++ as well. Metaprogramming via C++ templates? Not cool. Boost? Laughed at. I wouldn't be surprised if even Alexandrescu now thought policies (via C++ templates) are crazy...
And not only do we subset C++ into a manageable, almost-sane language (via coding standards and linters), but more and more people are even going back to a C-like C++ style.

So it begs the question: if we're so unhappy with OO and even recognize many of the faults of C++, where is the replacement? Why are we still all using C++?
I wrote a big, followed post on programming languages back in 2011 and I haven't updated it yet because I don't feel too much has changed...

Addendum: I didn't really mean to discuss language features, just success and adoption in my field and some of the reasons I believe are behind it. But there was something I wanted to add when it comes to languages, and I wrote it here.

- Engineers should know about marketing

And people. And entrepreneurship. Really. I'll be writing some of the same considerations I've expressed in my last post about graphics APIs, but it's not a surprise, because they are universal.

So, let's do it again. How close are "C++ replacements" to being viable for us? What do we want from a new language?
- Solve pain (big returns). Oh, a new multi-paradigm, procedural, object-oriented, functional, generic language with type inference and dependent types? Cool! How does it make me happier? How does it solve my problems?
- Don't create pain (low investment). Legacy is a wall for the adoption of any new language. How easy is your new language to integrate in my workflow? Does it work with my other languages? Tools? IDEs?

Now, armed again with this obvious metric, let's see how some languages fare from the perspective of rendering/AAA videogames...

- D language

D should be the most obvious candidate as a C++ replacement. D is an attempt at a C++ "done right", learning from C++'s mistakes, complexity issues, bad defaults and so on, while keeping the feel of a "systems" language: C-like, compiled.
It's not a "high-performance" language (in the sense of numerical HPC, even if it does at least support 128-bit SIMD as part of the -standard- library, so in that respect it's an evolution) but, like C++, it's relatively low-overhead on top of C.

So why doesn't it fly (at least yet)? Well, in my opinion the problem is that nowadays "fixing" C++ is not quite enough of a reason to switch. We already largely "fixed" C++ by writing sane libraries, by having great compilers and IDEs, by detecting issues with linters and so on.

Yes, it would be great to have a language without so many pitfalls, but we've worked around most of them. What does D offer that our own "fixed" C++ subsets don't?
Garbage collection, which is important for modularity but which "systems" programmers hate (mostly out of prejudice and ignorance, really). Better templates, offered to a community which is quite (rightfully) scared of meta-programming.

It doesn't even make adoption too hard: there are a number of compilers out there, even an LLVM-based one (which guarantees good platform support for the future too), Visual Studio integration, and it can natively call C functions with no overhead (but not C++ in general, even if that's an understandable decision).

It's good. But not a compelling (enough) reason to switch. It quite clearly aims to be used for -any- code that C++ is used for, by being prettier. That's like trying to replace eBay with a new site that has the same business plan as eBay but a better interface (and no marketing)...

It almost seems to be made thinking that you can do something better and then people will flock to it because, well, it's better. But things almost never go that way. Successful languages solve a need for some people; they often start with a focused niche of adopters and then, if they're lucky, they expand.
Java, JavaScript, Perl, Python all started that way. Some languages did arguably succeed at being "just better" (or anyhow started from scratch to replace others), but those had huge groups behind them pushing them, like Microsoft did with C#.

- Rust

Rust departs from C++ more than D does, and many people are looking at it with some hope that it could be the systems language of the future. It's still in its early stages of development (v0.10 as of today), but it starts well by having a big, bold target: concurrency and safety, with low overhead, via an ingenious type system.

The latter has attracted the interest of gamedevs (even if today, in its early implementation, Rust is not super fast), as while most type-safe languages have to rely on garbage collection, Rust does without, employing a more complex static type system instead.

It's very interesting, but for the time being and the foreseeable future, Rust's aim is not so enticing for us (game/rendering programmers).

We solved concurrency with a bunch of big parallel_fors over large data arrays, plus some dependencies between the jobs carrying such loops.
We don't share data, we process arrays with very explicit flows, and we know how to do this quite well already. Also, this organization is quite important for performance: a bunch of incoherent jobs would not use resources nearly as well.
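A minimal sketch of the pattern I mean (pthreads and a fixed worker count are just for illustration; a real engine would use its own job system and dependencies between jobs):

#include <pthread.h>
#include <stddef.h>

/* Chunked parallel_for: split [0, count) into per-thread ranges, run
   the same function over each range, then join. */
typedef void (*RangeFn)(size_t begin, size_t end, void* userData);

typedef struct { RangeFn fn; size_t begin, end; void* userData; } Range;

static void* range_thread(void* arg)
{
    Range* r = (Range*)arg;
    r->fn(r->begin, r->end, r->userData);
    return NULL;
}

static void parallel_for(size_t count, RangeFn fn, void* userData)
{
    enum { WORKERS = 4 }; /* fixed worker count, for the sketch */
    pthread_t threads[WORKERS];
    Range ranges[WORKERS];
    size_t chunk = (count + WORKERS - 1) / WORKERS;
    for (int i = 0; i < WORKERS; ++i)
    {
        ranges[i].fn = fn;
        ranges[i].begin = (size_t)i * chunk;
        ranges[i].end = ranges[i].begin + chunk < count ?
                        ranges[i].begin + chunk : count;
        ranges[i].userData = userData;
        pthread_create(&threads[i], NULL, range_thread, &ranges[i]);
    }
    for (int i = 0; i < WORKERS; ++i)
        pthread_join(threads[i], NULL);
}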

If we needed something "more" for less predictable computations (AI... gameplay...) we could employ messages (actors), but that kind of async computing is much slower. C++ doesn't make any of this trivial (of course!), but once it's up and running we don't have much to fear (that's also why fancy models like transactional shared memory are, I think, completely irrelevant to us).

Safety could be a bit more interesting, as a safer type system could save us some time, if it doesn't end up increasing complexity. But even if it's true that sometimes we have to chase horrific bugs, considering that we're working in the least safe language in the world, I'd say we're not doing badly.
Or maybe we are; but just think about all the times you considered a big refactoring to make the code safer, and didn't manage to justify it well enough in terms of returns... And that's a much less ambitious thing than changing language!

I'd like to maintain a database of bugs (time spent, bug category and so on) in our industry to data-mine. Many people are "scared" of allocation and memory-related ones, but to be honest I wonder how much impact they really have, armed with a good debugging allocator (logging, guard pages, pattern and canary checking and so on).

Maybe certain games do care more about safety (e.g. online servers), and maybe I'm biased being a rendering engineer: our code has (should have) simple data flows, and the really hard bugs are usually related to hardware (e.g. synchronization with the GPU).
Not that we would not love to have Rust's benefits; I simply don't think they are important enough to pay the price of a new language.

Nonetheless, it's a very interesting language to follow, and it's still in its early stages, so I might change my mind.

- Golang

Go is somewhat similar to Rust, at least insofar as they are both C++ replacements born "out of the web" (even if Go was designed mostly for server-side stuff, while Rust's first application aims to be a browser), but it could be a bit more interesting because of one of its objectives.

In many ways it's not a great language (especially right now) but it is promising.

On one hand it's quite a bit simpler, with a much more familiar type system (also due to the fact that it doesn't try to enforce memory safety without a GC), so it requires a smaller investment, not quite as ground-breaking, but very practical.

On the other hand it has at least one very enticing core design feature for us: it's built for fast iteration, explicitly, and that is, finally, something we really, strongly care about in our day-to-day work!

We go to great lengths to avoid long iteration times, and C++ is so terrible in that respect that we even sacrifice performance for scripting, or worse, for "data-driven" logic (not data-driven programming, but logic: data that doesn't express a Turing-complete language yet expresses some of the logic that we need, usually requiring some very badly written interpreter of sorts).

It's also backed by a huge corporation, so it solved the "early adopters" issue easily.

Yet, as it stands now, there is still too much friction for us to consider it: it doesn't quite work in our environments, it has slow C interop, and moreover most of its language features are not relevant enough for us, to the degree that just using C would not be much different in terms of expressiveness.

It's a nifty, simple language that has a strong backing and will probably succeed, but hardly for us, even if in principle it starts going somewhere we really need languages to go...

- Irrelevance...

That's a big problem, and a substantial reason why I think we haven't found a C++ replacement.

It's not that all new languages fail to understand what's needed for success; it's that most of the languages that do understand it are just interested in other fields.

Web really won. Python, Javascript (and the many languages built on top of it), Go, Rust, Ruby, Java (and the many languages built on top of the JVM).

If you look around, the key is not to find a C++ replacement, which already widely happened in many performance-critical fields. It's to find our C++ replacement, for our field, which doesn't see much language activity anymore.

Application languages also left us behind. C# is great as a language: clean, advanced, fast iteration, modern support for tools (reflection, code generation, annotations...), and the one that flirted with games most closely...
But it just seems that nobody is -really- concerned with making a static compiler for (most of) it that has the performance guarantees (contracts on stack, value-passing, inlining...) and the (zero-cost) interoperability we'd need for it to really fly.

High-performance computing does many of the same things we do, going wide with parallel instructions (SIMD), threads, GPUs. But they are not concerned with meshing with C/C++ much at all; they are not low-overhead systems languages.
When you have to process arrays of thousands of elements, even the cost of interpreting each operation that will then be executed wide is not important, so HPC languages tend to be much higher-level than we'd like.

Also, even when they are well integrated with C (e.g. C++ AMP and OpenMP, or the excellent ISPC; Julia is also worth a look), HPC takes care of small computational kernels, which we already know how to code well, all the way down to assembly, so we're not too concerned about that.
Maybe in the future this will shift if we see an actual need to target heterogeneous architectures with a single code base, but right now that doesn't seem too important.

Maybe mobile app development will save us, the irony. Not that I'm advocating Swift right now but it's certainly interesting that we see much more language dynamism there.

- In a perfect world...

How could a language really please us? What should the next C++ even look like to make us happy? C++ was a small set of macros on top of C that added a feature that people at the time wanted, OO. What's the killer feature for us, today?

Nice is not enough. D is nice. Rust has lots of nice features and we can debate a lot about nice language features we'd like to have, and things that should be fixed, and I do enjoy that and I do love languages.

But I don't think that's how change happens. It doesn't happen because something is simply better. Not even if it's much better, not in big fields with lots of legacy (and not if "better" doesn't also translate into making lots more money, or spending lots less).

As engineers we sometimes tend to underestimate just how convenient something has to be in order to be adopted. It's not only the technical plane (not at all). It's not only the tools, the code legacy, the documentation.
When all these are done there is still the community to take care of, the education, what your programmers know and what the programmers you want to hire know... And when you have all these in line you still need to overcome people's laziness, biases, irrationality (all defects I partake in myself).
And even if all that is there, you simply might not have the resources to pay the cost, even if the investment is positive in the long run, or, which is actually harder, be able to prove that such an investment will make more money!

It's a mountain. That's why C++ survives for us.

Back to the beginning, cost/return: how can we find a disruptive change in that equation? I think a new language can succeed for us only if it fulfills two requirements.

One is to be very low-cost, preferably "free", like C++ was (C with Classes). Compiling down to C++ is a good option to have; it makes us feel safe. That's why C++ supersets and subsets are already very popular today: we lint, we parse, we code-generate... reflection, static checking, enforcement of language subsets, extensions...

The other is to be so compelling for our use cases that we can't do without it. And in our industry that means, I think, something that saves orders of magnitude in effort, time and money.
We're good with performance already, even if we have to sweat for it and we don't have standard vectors or good standard libraries and so on.
We don't care (IMHO) enough about safety, which we are becoming better at achieving with tools and static checkers anyway. Not concurrency, which we solved. Not even simplicity, because we can already "simplify" our work by ignoring complex stuff... But productivity, that is my bet.

- Speed of light

If I have to point at what is most needed for productivity, I'd say interactivity. Interactive visualization, manipulation, REPLs, exploratory programming, live-coding.

That's so badly needed in our industry that we often just pay the cost of integrating Lua (or crafting other scripts), but that can work only in certain parts of the codebase...

Why did Lua succeed? It's a scripting language! Why aren't we hot-swapping D instead? We sacrificed runtime performance, for what? For both productivity and cost!
Lua is easy(-ish... with some modifications...) to integrate; maybe other languages could be as easy, but crucially, Lua being a portable interpreter guarantees it will work on any platform that supports C (or we can fix it to work, easily). And Lua is productive: it allows interactive coding, it's even better than hot-reloading C++ in terms of iteration.

Among the languages that are "safe", guaranteed to work with all our platforms (even future ones), that interop with C easily and that allow live-coding, Lua is the fastest, so we picked it. Not for any language feature (actually the language itself is not really ideal, and it heap-allocates a lot). It could have been GW-BASIC for all we cared about the syntax, I think...
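For reference, a minimal sketch of the kind of integration and hot-reloading I mean, using the standard Lua C API (the script name and the "update" function are placeholders):

#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* (Re)load a script; calling this again is already a crude hot-reload. */
static void load_script(lua_State* L, const char* path)
{
    if (luaL_dofile(L, path) != 0)
        fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
}

int main(void)
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    load_script(L, "gameplay.lua"); /* placeholder script */

    /* In the frame loop: call a global "update(dt)" defined in Lua;
       watch the file's timestamp and call load_script again on change. */
    lua_getglobal(L, "update");
    lua_pushnumber(L, 0.016);
    if (lua_pcall(L, 1, 0, 0) != 0)
        fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}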

A language that meshes well with C/C++ codebases, that we can trust to be available on all our platforms (the option of a C/C++ codegen is one way to ensure that), and that offers fast iteration will succeed in our field.
In fact I would gladly give up any of the C++11 features (even the few decent ones) for modules (preferably dynamic, but even static ones would increase code malleability), but of course the committee is a sad joke today, so they'd rather just keep adding complexity to one of the most arcane languages out there.

I really think iteration time is the key, and approaching interactivity is a game changer. I would take any language, regardless of the details, if it's interactive. In fact I do: as a rendering engineer, I love shader programming even if shader languages are not great and their tools are not great, just because shaders are trivial to hot-swap.
It's such a disruptive advantage, and it's really the only thing I can think of that is compelling enough for us to pay the price of a new language.

My best hope nowadays is LLVM, which seems more and more poised to be the common substrate for systems programming across platforms (Windows is still not the best target, but work is in progress).
That could enable low-cost adoption of new languages, well integrated with C/C++ compilers and libraries, the same way JS is now the web's common substrate for a lot of languages (or the JVM is for server stuff).