
13 August, 2016

Activision @ Siggraph 2016

Siggraph 2016 recently ended; as always, it was inspiring and lots of great discussions were had. Activision's presence was quite strong this year, after we shipped Black Ops 3.

If you missed any of our presentations, this post might help. I'll try to keep it updated with links to our presentations as soon as they come online.

Note: for convenience, I've written a summary of the techniques in the text below, but keep in mind that, as always, this blog is truly made of personal opinions and observations - which come from my limited, R&D-centric point of view.


Call of Duty has always been a lightmapped title, but lightmaps come with quite a large set of issues: long baking times, complexity in representing runtime materials and effects, lighting discontinuities with dynamic objects, and the inability to "kit-bash" geometric detail (interpenetrating meshes).

Static and dynamic objects, particles, volumetrics:
all lit with the same illumination data!

It's incredible that after all this time, there still isn't much research in real-time rendering on ways of solving the problem of baked irradiance. Naive lightmaps are not enough to generate an image that looks coherent, without leaks and discontinuities.

This new development at Treyarch tackles all these issues at once with a new runtime representation of baked irradiance that works seamlessly with deferred shading. It is tightly coupled with prefiltered irradiance cubemaps, new state-of-the-art heuristics for parallax-corrected reflections, and a baking system that cuts iteration times substantially for artists without having to resort to render farms.

Baking via prefiltered cubemaps

Now all our art production happens in a fully WYSIWYG editor, and all the lighting is unified, regardless of the object type (dynamic, static, skinned, particles, volumetrics...).
This is achieved by employing hardware-filtered volumetric textures as the only representation of baked irradiance. 

WYSIWYG editor with tools to quickly place volumes in the scene
Artists quickly place irradiance volumes in a level and author convex clipping volumes to avoid light leaks (note that these have to be authored anyway for reflection probes, so it's not really extra work, and our editor supports very fast workflows for placing volumes and planes). The authored volumes are usually robust to iterations on the level geometry.
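
To make the idea concrete, here is a rough CPU-side sketch of what "a hardware-filtered volume texture as the only irradiance representation" boils down to. The encoding is a placeholder assumption (a single RGB irradiance value per voxel, with a manual trilinear filter); the shipping system uses its own basis and lets the texture unit do the filtering, so treat this as an illustration of the lookup, not of the actual data.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Float3 { float x, y, z; };

// Placeholder irradiance volume: one RGB irradiance value per voxel.
// A real implementation would store a directional basis (e.g. SH or an
// ambient cube) and let the hardware do the trilinear filtering.
struct IrradianceVolume {
    int nx, ny, nz;               // grid resolution
    Float3 boundsMin, boundsMax;  // world-space box covered by the volume
    std::vector<Float3> voxels;   // nx*ny*nz RGB irradiance samples

    Float3 fetch(int x, int y, int z) const {
        x = std::clamp(x, 0, nx - 1);
        y = std::clamp(y, 0, ny - 1);
        z = std::clamp(z, 0, nz - 1);
        return voxels[(z * ny + y) * nx + x];
    }

    // Trilinearly filtered lookup at a world-space position: this is the
    // one code path shared by static meshes, dynamic objects, particles...
    Float3 sample(Float3 p) const {
        auto toGrid = [](float v, float lo, float hi, int n) {
            float u = (v - lo) / (hi - lo);   // normalize to [0,1]
            return u * n - 0.5f;              // voxel-center convention
        };
        float gx = toGrid(p.x, boundsMin.x, boundsMax.x, nx);
        float gy = toGrid(p.y, boundsMin.y, boundsMax.y, ny);
        float gz = toGrid(p.z, boundsMin.z, boundsMax.z, nz);
        int x0 = (int)std::floor(gx), y0 = (int)std::floor(gy), z0 = (int)std::floor(gz);
        float fx = gx - x0, fy = gy - y0, fz = gz - z0;

        auto lerp = [](Float3 a, Float3 b, float t) {
            return Float3{ a.x + (b.x - a.x) * t,
                           a.y + (b.y - a.y) * t,
                           a.z + (b.z - a.z) * t };
        };
        Float3 c00 = lerp(fetch(x0, y0,     z0),     fetch(x0 + 1, y0,     z0),     fx);
        Float3 c10 = lerp(fetch(x0, y0 + 1, z0),     fetch(x0 + 1, y0 + 1, z0),     fx);
        Float3 c01 = lerp(fetch(x0, y0,     z0 + 1), fetch(x0 + 1, y0,     z0 + 1), fx);
        Float3 c11 = lerp(fetch(x0, y0 + 1, z0 + 1), fetch(x0 + 1, y0 + 1, z0 + 1), fx);
        return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
    }
};
```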


In the "dark hides the light" session, Kevin Myers presented our new compression technology for baked shadowmaps.
Call of Duty has had a system for caching shadowmaps since Ghosts; as the new generation of consoles saw a bigger increase in memory than in computation, caching techniques have seen a resurgence.



This work is a quite radical improvement of our shadowmap caching technology, and allows us to fully pre-bake shadowmaps for entire levels, achieving thousand-fold compression ratios.

In comparison with precomputed voxelized shadows, this technique allows for easily recovering the shadowmap depth, which makes it easier to integrate with other effects (i.e. volumetric lighting). It's also fast to traverse. The closest relative to SSTs is the Compressed Multiresolution Hierarchies work of Scandolo et al., but our solution was developed independently in parallel, so the data encoding employed is not the same.

To my knowledge, this is the first use of compressed shadowmaps in game production. It also enabled savings in the g-buffer, as we can use the SST to quickly shadow far objects instead of having to bake a lightmap-space occlusion map.
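
The SST encoding itself isn't described in this summary, so the snippet below is only a toy stand-in: a sparse quadtree over a baked shadowmap, collapsing uniform regions into single-depth leaves. It is meant to illustrate the property called out above - the lookup returns an actual depth value, so the same data can drive both regular shadow tests and effects like volumetric lighting.

```cpp
#include <cstdint>
#include <vector>

// NOT the actual SST encoding (which isn't described here): just a toy
// sparse quadtree over a baked shadowmap, to illustrate the key property
// mentioned above -- the *depth* is recoverable at lookup time.
struct QuadTreeNode {
    bool     leaf;
    float    depth;       // valid for leaves: the whole region has this depth
    uint32_t child[4];    // valid for inner nodes: indices into the node pool
};

struct BakedShadowTree {
    std::vector<QuadTreeNode> nodes;   // nodes[0] is the root
    uint32_t resolution;               // shadowmap resolution (power of two)

    // Returns the stored light-space depth for texel (x, y).
    float lookupDepth(uint32_t x, uint32_t y) const {
        uint32_t nodeIndex = 0;
        uint32_t size = resolution;
        while (!nodes[nodeIndex].leaf) {
            size /= 2;
            uint32_t quadrant = (x >= size ? 1u : 0u) + (y >= size ? 2u : 0u);
            if (x >= size) x -= size;
            if (y >= size) y -= size;
            nodeIndex = nodes[nodeIndex].child[quadrant];
        }
        return nodes[nodeIndex].depth;
    }

    // A regular shadow test is then just a comparison against that depth.
    bool isLit(uint32_t x, uint32_t y, float receiverDepth, float bias) const {
        return receiverDepth <= lookupDepth(x, y) + bias;
    }
};
```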



This is the continuation of the subdivision surface research started by Wade on Call of Duty: Ghosts (which, to my knowledge, is the first videogame to make extensive use of Catmull-Clark subdivision surfaces in real-time).

It's quite a neat technique, which sidesteps a limitation in current hardware tessellation pipelines by passing a variable number of control points to the hull shader via L2 reads/writes.

During the development of this technique, Wade also made a nifty thread tracer for NVidia GPUs, which helped debug issues with work being (erroneously) serialized on the GPU.

This is the only presentation at Siggraph 2016 that is not directly linked with a shipped Call of Duty title.


Natasha and Wade gave a compelling presentation at the open problems in computer graphics course, surveying artists at many companies, even going outside the circle of videogame companies we work in.

This is one of my favorite courses at Siggraph, and the only presentation from Activision that I hadn't reviewed beforehand, so it was truly interesting for me to watch it live.


Jorge Jimenez presented a new version of his SMAA antialiasing technique, greatly improving both performance and quality (sharpness and stability) with a plethora of tweaks and innovations. The slides also summarize relevant previously published techniques and their image quality tradeoffs.

One of MANY improvements presented

In my opinion, with Jorge's Filmic SMAA, the quest for antialiasing techniques is largely over, and we can say that temporal reprojection has "won".

Not only does temporal reprojection often achieve better quality than MSAA with comparable performance budgets, but it's also easier to integrate into deferred renderers, and nowadays it's becoming unavoidable anyway because so many other effects can benefit from the ability to perform temporal supersampling (e.g. shadows, reflections, ambient occlusion...).
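
As a reference point for readers unfamiliar with the idea, here is a minimal sketch of generic temporal reprojection (history reprojection, neighborhood clamping, exponential blend). It is emphatically not Filmic SMAA, which layers morphological antialiasing, flicker handling and many other refinements on top of this basic scheme; the matrix convention and blend factor below are arbitrary assumptions.

```cpp
#include <algorithm>
#include <vector>

// Generic temporal reprojection / accumulation, CPU-side for illustration.
struct Float3 { float x, y, z; };
struct Float4x4 { float m[4][4]; };   // row-major, row-vector convention assumed

struct Image {
    int width, height;
    std::vector<Float3> pixels;
    Float3 load(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return pixels[y * width + x];
    }
};

static Float3 clampToNeighborhood(const Image& current, int x, int y, Float3 history) {
    // Clamp the history sample to the min/max of the current 3x3 neighborhood,
    // which rejects most stale/disoccluded history without an explicit mask.
    Float3 lo = current.load(x, y), hi = lo;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            Float3 c = current.load(x + dx, y + dy);
            lo = { std::min(lo.x, c.x), std::min(lo.y, c.y), std::min(lo.z, c.z) };
            hi = { std::max(hi.x, c.x), std::max(hi.y, c.y), std::max(hi.z, c.z) };
        }
    return { std::clamp(history.x, lo.x, hi.x),
             std::clamp(history.y, lo.y, hi.y),
             std::clamp(history.z, lo.z, hi.z) };
}

// worldPos is the reconstructed world-space position of the current pixel
// (from depth); prevViewProj is last frame's view-projection matrix.
Float3 temporalResolve(const Image& current, const Image& history,
                       int x, int y, Float3 worldPos, const Float4x4& prevViewProj,
                       float blend = 0.1f) {
    // Reproject the pixel into last frame's clip space.
    float cx = worldPos.x * prevViewProj.m[0][0] + worldPos.y * prevViewProj.m[1][0] + worldPos.z * prevViewProj.m[2][0] + prevViewProj.m[3][0];
    float cy = worldPos.x * prevViewProj.m[0][1] + worldPos.y * prevViewProj.m[1][1] + worldPos.z * prevViewProj.m[2][1] + prevViewProj.m[3][1];
    float cw = worldPos.x * prevViewProj.m[0][3] + worldPos.y * prevViewProj.m[1][3] + worldPos.z * prevViewProj.m[2][3] + prevViewProj.m[3][3];
    float u = (cx / cw) * 0.5f + 0.5f;
    float v = (cy / cw) * 0.5f + 0.5f;

    Float3 cur = current.load(x, y);
    if (u < 0.f || u > 1.f || v < 0.f || v > 1.f)
        return cur;   // history is off-screen: fall back to the current sample

    Float3 hist = history.load(int(u * history.width), int(v * history.height));
    hist = clampToNeighborhood(current, x, y, hist);

    // Exponential moving average: the accumulated history dominates.
    return { hist.x + (cur.x - hist.x) * blend,
             hist.y + (cur.y - hist.y) * blend,
             hist.z + (cur.z - hist.z) * blend };
}
```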

Hybrid techniques are still very interesting, and there are probably still improvements possible (and Jorge will probably come on stage yet again in the future to show something amazing in that space), but in general the interest in MSAA reconstruction techniques for edge antialiasing has diminished: from being a must-have and a pain point for deferred rendering, to a much less important solution.

MSAA is still a great feature to have in hardware though, as it allows us to subsample or supersample certain effects and render passes (e.g. particles), but it should be thought of more as a way to do mixed-resolution rendering than strictly as edge antialiasing.


Jorge, Xian, Adrian and myself worked on extending the state of the art in ambient occlusion rendering, by crafting techniques that are based on modeling the (ray-traced) ground-truth solution for diffuse and specular occlusion.

Fast, accurate ambient occlusion...

We derived closed-form analytic solutions when possible, and when such models could not be found for broader generalizations of the problems at hand, we extended the analytic solutions by fitting "residual" functions to the ground-truth data, or by employing look-up tables.

...and specular occlusion.

GTAO has already been used in production as a drop-in replacement for the previous technique we were employing (HemiAO), yielding better image quality (actually, even better than HBAO, which is a very popular high-quality solution) within the same performance budget (0.5ms).
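
The actual closed forms and fits are in the presentation; the sketch below only illustrates the generic "bake a look-up table from ground truth, interpolate at runtime" half of the methodology, with a placeholder ground-truth function supplied by the caller.

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

// Toy illustration of the "when no closed form exists, tabulate the ground
// truth" part of the methodology. The groundTruth callback is a placeholder:
// in the real work it is a costly ray-traced or numerically integrated
// reference, evaluated offline.
struct LookupTable2D {
    int nx, ny;
    std::vector<float> data;   // nx * ny samples over [0,1] x [0,1]

    static LookupTable2D bake(int nx, int ny,
                              const std::function<float(float, float)>& groundTruth) {
        LookupTable2D lut{ nx, ny, std::vector<float>(size_t(nx) * ny) };
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i) {
                float u = (i + 0.5f) / nx;
                float v = (j + 0.5f) / ny;
                lut.data[size_t(j) * nx + i] = groundTruth(u, v);
            }
        return lut;
    }

    // Bilinear lookup -- at runtime this would just be a filtered texture fetch.
    float sample(float u, float v) const {
        float x = u * nx - 0.5f, y = v * ny - 0.5f;
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
        float fx = x - x0, fy = y - y0;
        auto at = [&](int i, int j) {
            i = std::min(std::max(i, 0), nx - 1);
            j = std::min(std::max(j, 0), ny - 1);
            return data[size_t(j) * nx + i];
        };
        float a = at(x0, y0)     + (at(x0 + 1, y0)     - at(x0, y0))     * fx;
        float b = at(x0, y0 + 1) + (at(x0 + 1, y0 + 1) - at(x0, y0 + 1)) * fx;
        return a + (b - a) * fy;
    }
};
```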

06 August, 2016

The real-time rendering continuum: a taxonomy

What is forward? What is deferred? Deferred shading? Lighting? Inferred? Texture-space? Forward "+"? When to use what? The taxonomy of real-time rendering pipelines is becoming quite complex, and understanding what can be an "optimal" choice is increasingly hard.

- Forward

So, let's start simple. What do we need to do, in a contemporary real-time rendering system, to draw a mesh? Let's say, something along these lines:


This diagram illustrates schematically what could be going on in a "forward" rendering shader. "Forward" here really just means that most of the computation that goes from geometry to final pixel color happens in a single vertex/pixel shader pair.
We might update some resources the shader uses in separate steps, like shadow maps, reflection maps and so on, but the main steps, from attribute interpolation to texturing to shading with analytic lights, happen in a single shader.

From there on, the various flavors of forward rendering only deal with different ways of culling and specializing computation, but the shading pipeline remains the same!
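
Since the diagram doesn't translate to text, here is the same idea as a deliberately schematic C++ sketch: one function going from interpolated attributes to the final color, with the texture fetch and BRDF stubbed out as placeholders.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Schematic single-pass "forward" shading: everything from interpolated
// attributes to the final pixel color happens in one function (one VS/PS
// pair on the GPU). The texture fetch and BRDF below are trivial placeholders.
struct Float3 { float x, y, z; };

struct Interpolants { Float3 worldPos, normal; float u, v; };  // from the rasterizer
struct Light        { Float3 position, color; };

// Placeholder material fetch -- stands in for the texture compositing step.
static Float3 sampleAlbedo(float /*u*/, float /*v*/) { return { 0.8f, 0.8f, 0.8f }; }

// Placeholder BRDF: plain Lambert. A real shader would evaluate GGX etc.
static Float3 evaluateBRDF(const Interpolants& s, const Light& l, Float3 albedo) {
    Float3 d = { l.position.x - s.worldPos.x,
                 l.position.y - s.worldPos.y,
                 l.position.z - s.worldPos.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    float nDotL = (s.normal.x * d.x + s.normal.y * d.y + s.normal.z * d.z) / std::max(len, 1e-6f);
    nDotL = std::max(nDotL, 0.f);
    return { albedo.x * l.color.x * nDotL,
             albedo.y * l.color.y * nDotL,
             albedo.z * l.color.z * nDotL };
}

Float3 forwardPixelShader(const Interpolants& s, const std::vector<Light>& lightsBoundToThisDraw) {
    // 1. Material / texture compositing.
    Float3 albedo = sampleAlbedo(s.u, s.v);

    // 2. Analytic lights: in classic forward the list is bound per draw,
    //    via multiple passes or a single (uber / permutation) shader.
    Float3 radiance = { 0, 0, 0 };
    for (const Light& light : lightsBoundToThisDraw) {
        Float3 c = evaluateBRDF(s, light, albedo);
        radiance.x += c.x; radiance.y += c.y; radiance.z += c.z;
    }
    // 3. Baked/ambient lighting, fog, shadowing etc. would be folded in here too.
    return radiance;
}
```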

- Culling

Classical multi-pass forward binds lights to meshes one at a time, drawing a mesh multiple times to accumulate pixel radiance on the screen. Lights are bound to a pass as shader constants, and as you typically have only a few light types, you can generate ad-hoc shaders that efficiently deal with each. Specialization is easy, but you pay a price for the multiple passes, especially if you have a lot of overlapping lights and decals.

Single-pass forward is an improvement that foregoes the waste of multi-pass shading (bandwidth, repeated computations between passes and multiple draws) by either using a dynamic branching "uber-shader" capable of handling all the possible lights assigned to an object, or by generating static shader permutations to handle exactly what a given object needs.

The latter can easily lead to an explosion in the number of shaders needed, as now we don't need just one per light type, but one per permutation of light types and counts.
The advantage is that it can be much more efficient, especially if one is willing to split a mesh to exactly divide the triangles which need a specific technique (e.g. separating triangles lit by one light from ones that need two or more, or triangles that need to blend texture layers or perform other special effects).

This is Advanced Warfare: ~20k shaders per level and
aggressive mesh splitting generating tons of draw calls
Forward+ is nothing more than a change in the way some of the data is passed to a dynamic branching style single-pass forward renderer: instead of binding lights per mesh (draw) as shader constants, they are stored in some kind of spatial subdivision structure that the shader can easily access. Typically, screen tiles or frustum voxels ("clustered"), but other structures can be employed as well.

At first, it might sound like a terrible idea. It has all the drawbacks of a dynamic branching uber shader (lots of complexity, no ability to specialize the shaders, register usage bound by the most expensive path in the shader) but with the added penalty of divergent branches (as the lights are not constant in the shader). So, why would you do it?

Light culling in a conventional forward pipeline can be quite effective for static lights, or lights that follow prescribed paths, as we can carve out the geometry influenced by each and specialize. But what if we have lots of dynamic lights? Or lots of small lights?
At a given point, carving geometry becomes either inefficient (too many small draws) or impossible. In these situations, Forward+ starts to become attractive, especially if one is able to avoid branch divergence by processing lights one at a time.
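
A minimal sketch of the light-binning side of Forward+, under simplifying assumptions: lights are assigned to fixed-size screen tiles via a precomputed screen-space bounding rectangle, on the CPU. Real implementations do this in compute and test against per-tile frusta and min/max depth (or 3D "clustered" bins), but the data structure the pixel shader consumes is essentially this.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Deliberately simplified Forward+/tiled light binning.
struct ScreenRect { int minX, minY, maxX, maxY; };   // in pixels, inclusive

struct LightGrid {
    int tileSize = 16;
    int tilesX = 0, tilesY = 0;
    std::vector<std::vector<uint32_t>> tileLights;   // light indices per tile

    void build(int screenW, int screenH, const std::vector<ScreenRect>& lightBounds) {
        tilesX = (screenW + tileSize - 1) / tileSize;
        tilesY = (screenH + tileSize - 1) / tileSize;
        tileLights.assign(size_t(tilesX) * tilesY, {});
        for (uint32_t i = 0; i < lightBounds.size(); ++i) {
            const ScreenRect& r = lightBounds[i];
            int tx0 = std::max(r.minX / tileSize, 0);
            int ty0 = std::max(r.minY / tileSize, 0);
            int tx1 = std::min(r.maxX / tileSize, tilesX - 1);
            int ty1 = std::min(r.maxY / tileSize, tilesY - 1);
            for (int ty = ty0; ty <= ty1; ++ty)
                for (int tx = tx0; tx <= tx1; ++tx)
                    tileLights[size_t(ty) * tilesX + tx].push_back(i);
        }
    }

    // The pixel shader then just walks the list for its own tile.
    const std::vector<uint32_t>& lightsForPixel(int x, int y) const {
        return tileLights[size_t(y / tileSize) * tilesX + (x / tileSize)];
    }
};
```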

In the end, though, it's all just culling and specialization: how to assign lights to rendering entities, and how to avoid generic, dynamically branching shaders that create inefficiencies.

Once one thinks in these terms, it's easy to see that other configurations could be possible. For example, one might think of assigning lights to mesh chunks and dynamically grouping them into draws, following the ideas of Ubisoft's and Graham Wihlidal's mesh processing pipelines. Or one could assign lights to a per-object grid, or a world-space BSP, and so on.

- Splitting the pipeline

Let's look again at the diagram I drew:


Quite literally, we can take this "forward shading" pipeline and cut it at an arbitrary point, creating two shader passes from it. This is a "deferred" rendering system: some of the computation is deferred to a second pass, and although the most commonly employed split (deferred shading) separates material data from lighting/BRDF evaluation, today we have a deferred technique for almost any reasonable choice of splitting point.

Of course, after we do the split, we'll need the two resulting passes to communicate. The pass that is attached to the geometry (object) needs to communicate some data to the pass that is attached to the pixel output. This data is stored by the first pass in a geometry buffer (g-buffer!) and read in the second. 
Typically, we store g-buffers in screen-space, but other choices are possible.
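
For concreteness, here is a toy g-buffer layout and its encode/decode, just to show what "communicating through memory" means. The channel assignment and the crude normal encoding are made up for illustration; real layouts are far more carefully tuned.

```cpp
#include <algorithm>
#include <cstdint>

// A toy g-buffer: two RGBA8 render targets plus the depth buffer.
struct Float3 { float x, y, z; };

struct GBufferTexel {
    uint8_t rt0[4];   // albedo.rgb, roughness
    uint8_t rt1[4];   // normal.xyz remapped to [0,255], metalness
};

static uint8_t toUnorm8(float v)     { return (uint8_t)(std::clamp(v, 0.f, 1.f) * 255.f + 0.5f); }
static float   fromUnorm8(uint8_t v) { return v / 255.f; }

// Written by the geometry ("g-buffer") pass.
GBufferTexel encodeGBuffer(Float3 albedo, float roughness, Float3 normal, float metalness) {
    GBufferTexel t;
    t.rt0[0] = toUnorm8(albedo.x); t.rt0[1] = toUnorm8(albedo.y);
    t.rt0[2] = toUnorm8(albedo.z); t.rt0[3] = toUnorm8(roughness);
    t.rt1[0] = toUnorm8(normal.x * 0.5f + 0.5f);
    t.rt1[1] = toUnorm8(normal.y * 0.5f + 0.5f);
    t.rt1[2] = toUnorm8(normal.z * 0.5f + 0.5f);
    t.rt1[3] = toUnorm8(metalness);
    return t;
}

// Read back by the lighting/shading pass (the "second half" of the pipeline).
void decodeGBuffer(const GBufferTexel& t, Float3& albedo, float& roughness,
                   Float3& normal, float& metalness) {
    albedo    = { fromUnorm8(t.rt0[0]), fromUnorm8(t.rt0[1]), fromUnorm8(t.rt0[2]) };
    roughness = fromUnorm8(t.rt0[3]);
    normal    = { fromUnorm8(t.rt1[0]) * 2.f - 1.f,
                  fromUnorm8(t.rt1[1]) * 2.f - 1.f,
                  fromUnorm8(t.rt1[2]) * 2.f - 1.f };
    metalness = fromUnorm8(t.rt1[3]);
    // Position is not stored: it is reconstructed from the depth buffer.
}
```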

So, why would we want to do such a split? At first, it seems very odd. Instead of having a single pass that does all computation in registers, locally and fast, we force some of the data to be written all the way out to GPU memory, uncompressed, and then read again from memory in the second pass. Why?

Well, the reasons are exactly the same as every time we have to decide whether or not to split any GPU computation, be it a post-effect, a linear algebra routine or, in our case, mesh rendering. The potential advantages are always the same:
  1. Specialization. We might be able to avoid a dynamic branching uber-shader by stopping the computation at a point and launching a number of specialized routines for the second part.
  2. Inter-thread data access. We might need to reuse the data we're writing out, or access it in patterns that are not possible with the very limited inter-thread communication the GPU allows (and pixel shaders don't give control over what gets packed in a wave, nor have the concept of thread groups! *)
  3. Modifying data. We might want to inject other computation that changes some of the data before launching the second pass.
  4. Re-packing computation. We might want to launch the second pass using a different topology for our waves.
* Note: it would be interesting to think about how a "deferred" system could take advantage of hardware tile-based rendering architectures if one could program passes to operate on each tile. Ironically, today on tile-based deferred GPUs, deferred shading is usually not great, because tile architectures are made to avoid reads/writes to a slow main memory, and by design don't have problems with overshading in forward rendering...

- Decision tree

Adding a split point to our pipeline choices makes things incredibly complex, I'd say out of the reach of rendering engineers just manually making optimal choices.
We're not dealing anymore just with dynamic versus static lights, or culling granularity, but with how to balance a GPU between ALU, memory, shader resources and different organizations of computation.

It's very hard to evaluate all these choices in parallel, also because prototypes typically won't really be as optimized as possible for any given one, and optimization can change the performance landscape radically.
Also, these choices are not local: they can change how you pack and access data in the entire rendering system. What effects you can easily support, how much material variation you can easily support, how to bake precomputed data, what space you have to inject async computation, and so on.

Since we started working on "next-gen" consoles, with a heavy emphasis on compute, I've been interested in automatic tuning, something that is quite common in scientific computing, but not at all yet for real-time rendering.

But even autotuning can realistically be applied only when the problem specification is quite rigid, and it's unlikely to be successful when we can change the way we structure all the data and effects in a rendering system to fit a given choice of pipeline (which doesn't mean we can't do better in terms of our ability to explore pipeline choices...).

- Deferred versus Forward?

So how can we decide what to use, and when? Well, some rules of thumb are possible to devise, looking at the data and the computation we wish to perform, and making sure we don't do anything too unreasonable for a given GPU architecture.

The first bound to consider is just the data bandwidth. How much can I read and write, without being bound by reading and writing? Or to be more precise, how much computation do I have to have in order for the memory operations to not be a big bottleneck? For the latency to be well-hidden? 

As an example, right now, on PS4, it's entirely reasonable to do a deferred shading system writing the typical attributes for GGX shading, at 1080p **, with a typical texture-layer compositing system, and have the g-buffer pass be mostly ALU-bound.
The same might not be true for a different system at a different resolution, but right now it works, and some titles have shipped with fairly crazy "fat" g-buffers without problems.
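As a rough back-of-the-envelope (my numbers, not a shipped configuration): four RGBA8 targets plus a 32-bit depth buffer is 20 bytes per pixel, so at 1080p (about 2.07 million pixels) the g-buffer pass writes roughly 41 MB, and the lighting pass reads a comparable amount back. Against PS4-class bandwidth of roughly 176 GB/s, that's in the ballpark of a quarter of a millisecond each way, before caches and compression, which is why such a pass can indeed end up ALU-bound.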

Black Ops 3 is a tiled deferred renderer

** Note: without MSAA. In my view, MSAA for geometry antialiasing is not fundamental anymore; it's still a great technique for supersampling/subsampling, but we need temporal antialiasing (Filmic SMAA is great, and ideally you could do both), not only because it can be faster for comparable quality, but because we want to temporally filter all kinds of shading effects!
I'm also not addressing the problem of transparencies for a deferred renderer here, because it's easy to deal with them in F+, sharing the same light lists and most of the shader (just by "connecting" the ends that were cut in the deferred version).

The second thing to consider is data access. Do you -need to- access lots of data that is parametrized on the surface (especially on vertices)? E.g. The Order's "fat" lightmaps? Then probably decompressing it and pushing it through screen-space buffers is not the best idea.
Black Ops 3, for example, bakes lighting in volume textures and static occlusion in a compressed shadowmap, while Advanced Warfare uses classic uv-mapped lightmaps and occlusion maps.

On the other hand, do you need to access surface data in screen-space effects? Ambient and specular occlusion, reflections (note for example that The Order doesn't do any of these screen-space effects)... Or modify surface data in screen-space, e.g. via mesh-based decals ***? Then you have to write a g-buffer anyways, the only question is when!

*** Note: Nowadays projected or "volumetric" decals are quite popular, and these can be culled in tiles/clusters just like lights, so they work in -any- rendering pipeline. They have their drawbacks though, as they can't precisely follow a surface. Maybe an idea could be to use small volume textures to map projected decal UVs and to mask their area of influence?

The Order 1886 uses F+ and very advanced lightmapping,
foregoing any screen-space shading technique

- Deferred splits and computation

Often, either memory bandwidth makes the choice "easy" for a given platform, or the preference for certain rendering features do (complex lightmaps, mesh decals, screen space effects...). But if they don't then we're left with performance: how to best structure computation.

One big advantage of deferred shading is just in the ability to dispatch specialized shaders per screen region.

The choice of what to specialize and how many passes to do for a tile is entirely non-trivial, but at least it is possible, and it does not result in an incredible number of permutations like in single-pass forward, both because we resolved all the material layering in the g-buffer pass (thus we don't need to specialize over both lights and material features), and because doing multiple passes over a tile is cheaper than doing them over a mesh.

Note that in F+ we can trivially specialize over the material features of a given draw, but not at all over lights, and it's even best to make the various lighting paths very uniform (e.g. use the same filtering for shadows) to avoid dynamic branching issues.
In deferred shading, on the other hand, we can specialize over lights, over texture-layer combiners (in the g-buffer pass) and over materials (albeit with worse culling than forward).

It is true that typically we're more constrained in the material model, as the input data is mostly fixed via the g-buffer encoding, but one can use bit flags to specify what is stored in the MRTs, and with PBR rendering we've seen a sharp decrease in the number of material models needed anyway.
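
A sketch of what per-tile specialization can look like in practice, with made-up feature bits: classify each tile by OR-ing flags gathered from the g-buffer and the culled light list, then dispatch the cheapest shader permutation that covers the tile, falling back to an uber-shader for rare combinations.

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>

// Per-tile shader specialization in a tiled deferred renderer.
// The specific bits and permutations are invented for illustration.
enum TileFeatureBits : uint32_t {
    kFeatureStandardBRDF = 1u << 0,
    kFeatureClearCoat    = 1u << 1,
    kFeatureSpotLights   = 1u << 2,
    kFeatureShadowed     = 1u << 3,
};

using TileShader = std::function<void(int tileX, int tileY)>;

struct TileDispatcher {
    // Permutations are generated (or hand-written) offline for the feature
    // combinations that actually occur; a generic fallback covers the rest.
    std::unordered_map<uint32_t, TileShader> permutations;
    TileShader fallbackUberShader;

    void shadeTile(int tileX, int tileY, uint32_t tileFeatureMask) const {
        auto it = permutations.find(tileFeatureMask);
        const TileShader& shader = (it != permutations.end()) ? it->second
                                                              : fallbackUberShader;
        shader(tileX, tileY);   // on the GPU: one indirect dispatch per bucket
    }
};
```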

The other advantage is wave efficiency. In a deferred system, only the g-buffer pass uses the rasterizer, and thus is subject to rasterizer inefficiencies: partial quads on triangle edges, overdraw, partial waves due to small draws.
This is, though, very hard to quantify in practice, as there are lots of ways to balance computation on a GPU.

For example, a forward system with very heavy shaders might suffer a lot from overdraw and require spending time in a full depth pre-pass to avoid having any, but the pre-pass might overlap with some async compute, making it virtually free.

- Cutting the pipeline "early"

Recently there have been lots of deferred systems that cut the pipeline "high", near the geometry, before texturing, by writing only the data that the vertices carry, or even just enough to be able to fetch the vertex data manually (e.g. triangle index and barycentric coordinates; the latter can even be reconstructed from the vertices and the world position). These approaches create so-called "visibility buffers" instead of g-buffers.
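
A minimal sketch of the visibility-buffer idea, under the "triangle index plus barycentrics" variant mentioned above: the per-pixel payload is tiny, and the shading pass re-fetches the three vertices and interpolates manually. Names and packing are illustrative; derivatives for texture LOD (a real problem, as noted below) are conveniently ignored.

```cpp
#include <cstdint>
#include <vector>

// A "visibility buffer" texel in its simplest form: which draw and which
// triangle covered this pixel, plus barycentrics. All the actual surface data
// stays in the original vertex/index buffers and is fetched in the shading pass.
struct Float2 { float x, y; };
struct Float3 { float x, y, z; };

struct VisibilityTexel {
    uint32_t drawAndTriangle;   // e.g. drawID in the high bits, triangleID low
    Float2   barycentrics;      // third coordinate is 1 - x - y
};

struct Vertex { Float3 position, normal; Float2 uv; };

struct DrawData {
    const Vertex*   vertices;
    const uint32_t* indices;
};

// Shading pass: rebuild the interpolated attributes the rasterizer would have
// produced, by fetching the three vertices manually.
Vertex reconstructAttributes(const VisibilityTexel& vis, const std::vector<DrawData>& draws) {
    uint32_t drawID = vis.drawAndTriangle >> 16;
    uint32_t triID  = vis.drawAndTriangle & 0xFFFFu;
    const DrawData& draw = draws[drawID];

    const Vertex& v0 = draw.vertices[draw.indices[triID * 3 + 0]];
    const Vertex& v1 = draw.vertices[draw.indices[triID * 3 + 1]];
    const Vertex& v2 = draw.vertices[draw.indices[triID * 3 + 2]];

    float b0 = vis.barycentrics.x, b1 = vis.barycentrics.y, b2 = 1.f - b0 - b1;
    auto lerp3 = [&](Float3 a, Float3 b, Float3 c) {
        return Float3{ a.x * b0 + b.x * b1 + c.x * b2,
                       a.y * b0 + b.y * b1 + c.y * b2,
                       a.z * b0 + b.z * b1 + c.z * b2 };
    };
    Vertex out;
    out.position = lerp3(v0.position, v1.position, v2.position);
    out.normal   = lerp3(v0.normal,   v1.normal,   v2.normal);
    out.uv = { v0.uv.x * b0 + v1.uv.x * b1 + v2.uv.x * b2,
               v0.uv.y * b0 + v1.uv.y * b1 + v2.uv.y * b2 };
    return out;
}
```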


Eidos R&D tested a g-buffer that is used only to improve wave occupancy
and avoid overdraw, not to implement deferred rendering features


These techniques are not aimed at implementing rendering features that are different from what maps well to forward, as they still do most of the computation in a single pass.
What they try to do instead is to minimize the work done in pixel shading, restructuring the computation so that most of the work is done without the constraints imposed by the rasterizer.

The aim for most of these techniques is:
  1. To write thin g-buffers while still supporting arbitrary material data
  2. To avoid partial quad, partial wave and overdraw penalties
  3. Some also focus on analyzing the geometric data to perform shading at sub-sampled rates
In theory, nobody prevents these techniques from working with more than one split: after the geometry pass, a material g-buffer could be created, replacing the tile data with the data after texturing.

Compared to forward methods, the main difference is that we reorder computation in a "screen-space"-centric way: all the shading is done in CS tiles instead of PS waves of quads.

It avoids partial waves, but at the cost of worse "culling": you have to shade considering all the features needed in a tile, regardless of how many pixels a feature uses, and you can't specialize shaders over materials (unless you store some extra bits in the visibility buffer and summarize them per tile).
You also "get rid" of a lot of fixed-function hardware: you can't rely on optimized paths to load and interpolate vertex data, to compute derivatives/differentials (which become a real, hard problem!) or on the post-transform cache (albeit it would be possible to write from the VS back into the vertex buffer, if really needed).

Vertex and object data access becomes less coherent (as we now access it based on screen-space patterns instead of over surfaces), supporting multiple vertex formats also becomes a bit harder (which might not matter), and tessellation might or might not be possible (depending on what data you store).

Compared to deferred shading, we have trade-offs similar to those of standard forward or forward+ versus deferred: we don't have screen-space material data for effects that need it, and we do all the shading in a single pass, thus statically specializing a shader needs to take care of more permutations, but we save on g-buffer space.

Note though that how "thin" the g-buffer is can be misleading in terms of bandwidth, because the shading pass uses the g-buffer only as an indirection: the real data is per-vertex and per-draw, those fetches still need to happen, and they might be less coherent than in other methods.
And we still have a bit of bandwidth "waste" in the method (similar to how g-buffers waste bandwidth reading/writing data that the PS already had), as the index buffers and vertex position data are read twice (through an indirection!), and depending on the triangle-to-pixel ratio, that might not even be insignificant.

- Beyond screen-space...

And last, to complete our taxonomy, there have recently been some renderers that decided to split computation by storing information in uv-space textures instead of screen space.

These ideas are similar to the early "surface caching" idea employed by Quake, and might follow quite "naturally" if one already has a unique parametrization everywhere in the world.

These systems are very attractive for subsampling computation, both spatially and temporally, as the texture data is not linked to a specific frame and rasterized samples.

If the texture layering is cached, then the scheme is similar to a g-buffer deferred system, just storing the g-buffer in texture space instead of screen space, and it can be coupled with F+ or other deferred schemes that "split early" to reduce the complexity of the shading pass (as the texture layering has already been done in specialized shaders).

If the final shaded results are stored, the decoupled shading rate can also be used as a means of improving shading stability: even without supersampling, aliasing doesn't produce shimmering, as the samples never move and texture sampling naturally "blurs" the results a bit.
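
A toy sketch of a texture-space shading cache, to make the trade-offs below tangible: shading lives in a uv-space texture and is refreshed only when a texel is considered stale. The age-based invalidation is a placeholder policy; real systems track visibility, lighting changes and mip selection.

```cpp
#include <cstdint>
#include <vector>

// Texture-space shading cache: decouples shading rate from rasterized samples.
struct Float3 { float x, y, z; };

struct TexelShadingCache {
    int width, height;
    std::vector<Float3>   shadedColor;       // cached shading, in uv space
    std::vector<uint32_t> lastShadedFrame;

    TexelShadingCache(int w, int h)
        : width(w), height(h),
          shadedColor(size_t(w) * h, Float3{0, 0, 0}),
          lastShadedFrame(size_t(w) * h, 0) {}

    template <typename ShadeFn>  // ShadeFn: Float3(int u, int v)
    Float3 fetch(int u, int v, uint32_t currentFrame, uint32_t maxAgeInFrames,
                 ShadeFn&& shade) {
        size_t i = size_t(v) * width + u;
        if (currentFrame - lastShadedFrame[i] >= maxAgeInFrames) {
            shadedColor[i] = shade(u, v);          // re-shade only stale texels
            lastShadedFrame[i] = currentFrame;
        }
        return shadedColor[i];   // final pass just samples the cache (filtered on GPU)
    }
};
```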


Decoupling visibility from shading rate. A good idea.

Caching computation is always very attractive, so these techniques are certainly promising, and the tradeoffs are easy to understand (even if they might not be easy to quantify!). How much of the cache is invalidated at any given time? At which granularity does it need to be computed (and how much waste there is due to it)? How much memory does the cache need?

Fight Night Champion computed diffuse lighting in texture space,
all the fine skin details come only from the specular layer.

- Conclusions???

As I said, it's hard to make predictions and it's hard to say that one method is absolutely better than another, even in quite specific scenarios.

But if I had to go out on a limb, I'd say that right now, for this generation of consoles the following applies:
  • "Vanilla" deferred shading works fine and supports lots of nice rendering features. 
    • In theory, it's not the most efficient rendering technique, simply from the standpoint that it spends lots of energy pushing data in and out of memory...
    • But for now it works, and it will likely scale well to 2k and probably even 4k or near 4k resolutions, using reasonably thin buffers.
  • Deferred shading executes well enough in the following important aspects, that probably need to be addressed by any shading technique:
    • The ability to specialize shaders, even if we have architectures with good dynamic branching capabilities (and moving data from vgpr to sgpr on GCN), is quite important and saves a lot of headaches (of trying to fit every feature needed in a single, fast ubershader).
    • Separating and possibly caching or precomputing (most of) texture compositing is important. Very high frequency tiled detail layers will still need to be composited in screen-space.
    • Ameliorating issues with overdraw and small triangles/draws.
  • On top of these, deferred supports well a number of screen-space rendering features that are popular nowadays.
  • Forward+ can be made fast and it works best when lots of surface data (vertex & texture...) is needed.
    • Different material models are probably not a huge concern (LODs might be more attractive, actually), and deferred shading can be specialized over materials as well, with some effort (and worse culling).
    • Forward undeniably will scale better with resolution, but might have a slower "baseline" (e.g. 1080p)
    • Mapping data to surfaces (e.g. lightmaps, occlusion cones...) allows for cheap and high-quality bakes, but it doesn't work on moving objects, particles and so on, so it's usually a compromise: it has better quality for static meshes, but it lacks the uniformity of volumetric bakes.
  • Single-pass forward, when done properly, can still be very, very fast!
    • Especially in games that don't have too many small triangles and don't have many small or moving lights.
    • That's still a fairly large proportion of games! Lots of games are set in daylight, or anyhow in settings where there aren't many overlapping lights! It is not simple to optimize, though.
  • Volumetric data structures are here to stay; we'll probably see them evolve into something more adaptive than the simple voxel grids that we use today.
  • Caching is certainly interesting, especially when it comes to flattening texture layers (which is quite common, especially for terrain). 
    • Caching shading is a "natural" extension; the tradeoffs there are still unproven, but once one has the option of working in texture space, it's hard to imagine that there isn't anything in the shading computation that could be meaningfully cached there...
  • Visibility buffers
    • If g-buffer passes are not bandwidth or ROP/export bound (writing the data), the benefit of "earlier" splits is questionable. But these techniques are -very- interesting, and might even be used in hybrid g-buffer/attribute-buffer renderers.
    • The general idea of using deferred methods to cluster pixels via similarity and subsample shading is very interesting... 
    • The same applies to trying to pack waves without resorting to predetermined screen-space tiles (e.g. via stream compaction, which the "old" stencil volume deferred methods did automatically via the early-stencil hardware). None of these have been proven in production so far.
  • It would be great to see more research on hybrid renderers in general
    • Shaders can be written in a "unified" fashion, the splits can be largely automatic
    • Deferred shading and F+ share the same lighting representation!
    • A rendering engine could draw using different techniques based on heuristics
  • On the other hand, there has been recently lots of work on "GPU driven pipelines", where most of the draw dispatch work (and draw culling) is done on the GPU.
    • These pipelines favor very uniform draws (no per-draw shader specialization)
    • This might, though, be entirely a limitation of current APIs...

10 July, 2016

SIGGRAPH 2015: Notes for Approximate Models For Physically Based Rendering

This is a (hopefully temporary) hosting location for the course notes Michal Iwanicki and I drafted for our presentation at the Physically Based Shading course last year.

I'm publishing them here because they were mentioned a lot in our on-stage presentation (in fact, we meant the presentation mostly as a "teaser" for the notes), but we are still not able to bring them out of the "draft" stage, despite the efforts of everyone involved (us and the course organizers, to whom goes our gratitude for the hard work of making such a great event happen).

It also doesn't help that in an effort to show an overall methodology, we decided to collate more than a year of various research efforts (which happened independently, for different purposes) into this big document. I still have to work more on my summarization skills.

06 July, 2016

How to spot potentially risky Kickstarters. Mighty No9 & PGS Lab

This is really off-topic for the blog, but I've had so many discussions about different gaming related Kickstarters that I feel the need to write a small guide. Even if this is probably the wrong place with the wrong audience...

Let's be clear, this is NOT going to be about how to make a successful Kickstarter campaign; actually, I'm going to use two examples (one of a past KS, and one of a campaign that is still open as I write) that are VERY successful. It's NOT even going to be about how to spot scams, and I can't say that either example is one.

But I do want to show how to evaluate risks, and when it's best to use a good dose of skepticism, because it seems that there are a lot of people who get caught up in the "hype" for given products and end up regretting their choices.

The two examples I'm mostly going to use are the two campaigns in the title: Mighty No.9 and the PGS Lab handheld.
I could have picked others, but these came to mind. It's not a specific critique of these two though, and I know there are lots of people enjoying Mighty No.9, and I wish the best to PGS Labs; I hope they'll start by addressing the points below and proving my doubts unfounded.

The Team

This is absolutely the most important aspect, and it's clear why. On Kickstarter you are asked to give money to strangers, to believe in them, their skills and their product. 
Would you, in real life, give away a substantial amount of money to people, for an investment, without knowing anything about them? I doubt it.

So when you see a project this successful...


...first thought must be, these guys must be AMAZING, right?


I kid you not, that's the ONLY information on the PGS Lab team. They have a website, but there is ZERO information on them there as well.


From their (over-filtered and out-of-sync) promo video, we learn the name of one guy...


"We have brought together incredible Japanese engineers and wonderful industrial designers". A straight quote from the video, the only other mention of the team. No names, no past projects, no CVs. But they are "wonderful", "incredible" and "Japanese", right?

This might be the team. Might be buddies of the guy in the middle...
For me, this is already a non-starter. But it seems mine is not a popular point of view...

The team?

So what about Mighty No.9 then? Certainly, Inafune has enough of a CV... And he even had a real team, right? He even did the bare minimum and put the key people on the Kickstarter page...



Or did he? Not so fast...


This is the first thing I noticed in the original campaign. Inafune has a development team (Comcept), but it seems that for this game he intended to outsource the work.

Unfortunately, this is not an unusual practice; it seems that certain big names in the industry are using their celebrity to easily raise money for projects they then outsource to third-party developers.



Igarashi, for Bloodstained, did even "worse". Not only is the game itself outsourced, but the campaign, including the rewards and merchandise, is too. In fact, if you look at the KS page, you'll notice some quite clashing art styles...


...I suspect this was due to the fact that different outsourcers worked on different parts of the campaign (concept art vs rewards/tiers).

Let's be clear, per se this is not a terrible thing: both Igarashi and Inafune used Inti Creates as the outsourcing partner, which has plenty of experience with 2D scrollers, so the end product might turn out great (in fact, the E3 demo of Bloodstained looks at least competent, if not exceptional)... But it shows, to me, a certain lack of commitment.

People are thinking that these "celebrity" designers will put their careers on the line, against the "evil" publishers that are not funding their daring titles (facepalm), while they are really just running a marketing campaign.

This became extremely evident for Inafune in particular, as he rushed to launch a (luckily disastrous... apparently you can't fool people twice) second campaign in the middle of Mighty No.9's production, revealing his hand and how little commitment he had to the title.

The demo: demonstrating skills and commitment

Now, once you have the team down, you want to evaluate their skills. Past projects surely help, but what helps even more is showing a demo, a work-in-progress version of the product.

It's hard enough to deliver a new product even when you are perfectly competent. I've worked on games made by experienced professionals that just didn't end up making it, and I've backed Kickstarters that failed to deliver even when they were just "sequels" of products a given company was already selling... So you really shouldn't settle for anything less than concrete proof.

How do our Kickstarters fare in terms of demos?


PGS Labs shows a prototype. GREAT! But wait...


Oh. So, the prototype is nothing more than existing hardware, disassembled and reassembled in a marginally different shape. In fact, you can see the PCBs of the controller they used, a joypad for tablets which they just opened, desoldered some buttons from, and moved into a 3D-printed shell.

Well, this would be great if we were talking about modding, but it proves exactly NOTHING about their ability to actually -make- the hardware (my guess - but it's just a guess - is that in the best scenario they are raising money to look for a Chinese ODM that already has similar products in their catalog, and they won't really do any engineering).

Of course, when it comes to the marketing campaigns of "celebrity designers", all you get is what is cheapest to make; they know they'll get millions anyway, so they just get some outsourcers to paint some concept art.


It's really depressing to me how, just by creating a video with their faces, certain people can raise enormous amounts of money. And I know that there are lots of success stories, from acclaimed developers as well, but if you look at them, the pattern is clear: success comes from real teams of people deeply involved with the products, and with actual, proven, up-to-date skills in the craft.

So far, I'd say all the projects of older, lone "celebrities" have -all- resulted in games that are -at best- okay. Have we ever seen a masterpiece coming out of any of these? Dino Dini? Lord British?

Personally, as a rule of thumb, I'd rather give money to a "real" indie developer, who in lots of cases really can't just go to a publisher, or even self-fund by borrowing from a bank, and who often makes MUCH, MUCH better games out of real passion, sacrifice, and eating lots of instant noodles, I assume...

The "gaming press"

What irks me a lot is that these campaigns are very successful because they feed on the laziness of news sites, where hype spreads thanks to underpaid human copy-and-paste bots who just repeat the same stuff over and over again. It's really a depressing job.

And even good websites, websites where I often go for game critique and intelligent insights, seem to be woefully unequipped to discuss anything about production, money, or how the industry works.

I'm not sure if it's because gaming journalists are less knowledgeable about production (but I really doubt it) or if it's because they prefer to keep a low profile (but... these topics do bring "clicks", right?).

Anyhow. I hope at least this can help a tiny bit :)