Sprite Lamp release plans

This is the post where I talk about the near future of Sprite Lamp and releasing on various platforms. Yesterday I posted about Spine integration, and I’m also working on the tools for editing depth maps. There are a couple of other smaller features I’m hoping to have added at some point, such as loading of custom shaders, but when the depth editing is done, Sprite Lamp (the standalone application, for Windows) is getting startlingly close to finished. Given that, I’m going to try to make the next release a Steam release. Fingers crossed, I can make this happen in something like the next couple of weeks.

Questions of Steam early access

I’ve given a lot of thought to early access, because I have slightly mixed feelings on it in practice. I don’t want to be ‘one of the bad ones’ – I feel like there’s a certain amount of frustration going around with some early access projects for various reasons, and I don’t want Sprite Lamp to be one of those. On the other hand, betas are useful for various reasons, and a platform like Steam is a good way of carrying them out. There can also be hiccups in the implementation of being on Steam itself – it’d be nice to have a chance to make sure Steam-specific things work (for instance, upgrading from hobbyist to pro will be done using the DLC system, and I need to make sure that works smoothly) before it goes to a bigger audience.

So I’m pretty open to suggestion on this strategy, but currently I’m thinking of trying to get the Windows build out as a closed beta (which will involve me trying to get Steam keys to everyone who has already got Sprite Lamp by Kickstarter or through PayPal). Once that’s sorted, we’ll go into early access – at that point, it’ll be at the current (slightly) discounted Kickstarter/early price of $35/$90. When it releases fully, it’ll be for $40/$100. I’m going to try to make this early access period as quick as possible, though. During the early access period, I’ll also try to get a maximally polished Linux build onto Steam. The current Mac build isn’t really Steam-ready, though – a fact probably known to those who have tried using it. That brings us to the next section.

Sprite Lamp for MacOS

For a while I’ve been in contact with a fellow I know by the name of Rob Caporetto who was potentially going to help me out with the native UI version of Sprite Lamp for MacOS. He’s got some experience with MacOS and C#, which isn’t the most common combination of skills, so it seemed pretty ideal. Unfortunately he wasn’t free to work on stuff until fairly recently, but I’ve been in contact with him and he’s had some time to look over the source code. We’ve talked about what needs to be done and how to do it, and come to some conclusions:

  • It’s probably not worth holding up the other releases to wait for this one. I didn’t make this decision lightly, because I had always wanted a simultaneous release, but at the same time, it seems pointless to keep a working build out of Windows and Linux users’ hands just for the sake of that (especially when those two groups are likely well over half the users).
  • There’s a small amount of work for me to do on the codebase to get it ready for porting, which I’m going to jump into as soon as the depth editing stuff is done.
  • It’s probably somewhere in the ballpark of a month of work. As such, I’m going to prioritise doing as much as possible of the port stuff myself, especially the grunt work, so Rob can spend all his time on the hard bits. That way it’ll be in Mac users’ hands as soon as possible.
  • More than was the case with the rest of the development, we’ll be able to prioritise features so that the most commonly used ones go in first, and as soon as we have something pretty usable, we’ll put it out there (on Steam, if we’re at that point by then, or otherwise in a regular update).
  • More as it develops.

So, that’s that. I’m pretty damn excited to see the first build working on MacOS, but not half as much as Halley is (being the artist behind Sprite Lamp’s sample art, and a Mac user).

More engine integration

The other part missing from this story is the future of engine integration. More than any other part of the development of Sprite Lamp, engine integration has taken me off guard scheduling-wise, because it’s hard not to get tangled up in small details when you’re working with unfamiliar tech. However, once this next release is sorted, I’ll be able to get into doing some of the integrations closer to full-time. First priority will be Unity since so many people use that, then I’ll look at rejiggering some of the Game Maker stuff, and then basically I’m going to go through the history of people requesting engine integrations and try to get as many sorted as I reasonably can. So far I’ve been including engine integration stuff with Sprite Lamp, but when the integrations are a bit more complete I’ll be making them all available for free download here.

Sprite Lamp and Spine

As I’ve mentioned recently, I’ve been playing with Spine and Sprite Lamp. It’s time to talk a bit about how that’s going and what it means.

Spine files in Sprite Lamp

The main thing I’ve been working on lately is loading up Spine files in Sprite Lamp, so you can view them, animated and dynamically lit, in Sprite Lamp’s preview window. And, it’s working! It looks something like this:

Sprite Lamp with an example Spineboy animation loaded
Spineboy artwork and animation is from Esoteric Software, with our own lighting profiles/normal maps applied.

As you can sort of see from the screenshot, the interface is not terribly complex – you load up a .json file and you have your character. You’ll also need to (draw and) load up lighting profiles in atlas form (if the files are named properly Sprite Lamp will automatically grab them), and then off you go. You can select any animation in the Spine file and also select different skins. You can also control the speed and direction of the animation playing.

The walk is a bit slower than I’d like, perhaps, but here’s an illuminated Spineboy, with a static light to make it easier to see what’s happening:

Spineboy walking with dynamic lighting
There’s a certain amount of dodginess regarding the bending of the knees, though for artwork that was made without dynamic lighting in mind, I think he came out pretty well. The fact that this is truly dynamically lit, rather than just faked, is most evident on the parts that are rotating the most, particularly the right hand.

The intricacies of rotating normals

It might be helpful, particularly for programmers, to have a bit of a grip on the maths behind this kind of thing. If that’s not your bag, you might want to skip this section.

It’s tempting to think that all you need to render a normal-mapped Spine character is the ability to render a normal mapped sprite combined with the ability to render a conventional Spine character. Unfortunately, the rendering side is slightly more complex than that (NB: I’m just talking about the shader maths here – these complications are not the artist’s problem).

Naturally, rendering a Spine character involves moving and rotating various images about. This is nothing special – rotating a textured quad is something computers have been able to do happily for decades, after all. You paint your character’s leg image or whatever, then you rotate it, and you’re all good. However, when normal maps become involved, things get slightly more complex. If a normal map has a pixel that encodes a vector facing to the right, and you rotate the whole image 90 degrees clockwise, that pixel should now be facing down. But its colour hasn’t changed – it still encodes ‘faces to the right’. If you rotate it another 90 degrees, so it should be facing to the left, it still appears to be ‘facing to the right’. This problem gets worse the more the character rotates. Some Spine enthusiasts reading might remember ages ago, when Sprite Lamp was in its Kickstarter phase, we had a few shots at Spineboy walking around with dynamic lighting. If I recall correctly, this problem was present in those demos. It was very subtle, because no part of Spineboy rotates a huge amount during his walk cycle, but it was there. Fortunately, the solution is not terribly complex – you simply have to tell the shader how much each body part has rotated from its default position, and rotate the normal by that amount in the pixel shader. Done!
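To make that concrete, here’s a rough sketch of what the fix might look like in a GLSL fragment shader. This is illustrative only – the uniform and function names (‘attachmentRotation’ and so on) are made up for the example, not taken from Sprite Lamp’s actual shaders:

// Hypothetical GLSL fragment snippet: rotate the decoded normal by the amount
// this attachment has rotated from its setup pose, so the encoded directions
// stay correct as the body part turns.
uniform sampler2D normalMap;
uniform float attachmentRotation; // rotation from the setup pose, in radians

varying vec2 texCoords;

vec3 rotatedNormal()
{
   // Decode from [0,1] colour values to a [-1,1] direction vector.
   vec3 n = texture2D(normalMap, texCoords).rgb * 2.0 - 1.0;

   // Rotate the in-plane (x/y) components; z is unchanged because the
   // image only rotates in the plane of the screen.
   float c = cos(attachmentRotation);
   float s = sin(attachmentRotation);
   n.xy = mat2(c, s, -s, c) * n.xy;

   return normalize(n);
}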

But, you might justifiably be wondering: what of soft skinning? After all, the fine folks at Esoteric Software have been hard at work following their recent Kickstarter adding soft skinning and free form deformation (FFD) to Spine. It’s not obvious what it means to refer to ‘rotation’ when the rotation of a point is going to vary throughout a mesh. And, indeed, this does get a bit more complicated. For Sprite Lamp, I’ve decided to go with a fragment-shader solution to this problem. It involves using the derivative functions in GLSL to compare the world position and UV coordinate of a fragment with those of its neighbouring fragments, which enables you to calculate a thing called a TBN matrix (tangent, bitangent, normal). When this release comes along, I’ll talk a bit more about how this is done in the shader. The takeaway is that in the next release, Sprite Lamp should smoothly handle all the different types of animation Spine can throw at it – textured quads, but also soft skinning and FFD. As a demonstration/stress test, Halley has cooked up a slithering snake animation. She wanted me to make it clear that this is not her best work and she’s not very experienced with Spine animation. As a clear demonstration of variable rotation on a soft-skinned mesh, though, it does nicely:

Snake slithering to the left
You can see here that with the light source on the left, the coils of the snake are picking up the lighting correctly as they wave. It’s not obvious for all animations when the normals aren’t being computed correctly, but in the case of this snake it’s something of a stress test.
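For graphics programmers who want a head start before that write-up, here’s a rough sketch of the derivative-based idea mentioned above. It’s not the final Sprite Lamp shader – the names are illustrative, and sign/handedness conventions may need adjusting for your setup – but it shows how GLSL’s dFdx/dFdy can build a per-fragment tangent basis without precomputed tangents:

// Hypothetical GLSL sketch: build a per-fragment TBN matrix from screen-space
// derivatives of the world position and UVs, then rotate the decoded normal
// into world space with it.
vec3 perturbNormal(vec3 worldPos, vec2 uv, vec3 baseNormal, sampler2D normalMap)
{
   // How world position and UVs change between neighbouring fragments.
   vec3 dpdx = dFdx(worldPos);
   vec3 dpdy = dFdy(worldPos);
   vec2 duvdx = dFdx(uv);
   vec2 duvdy = dFdy(uv);

   // Solve for the directions in which U and V increase across the surface.
   vec3 tangent   = normalize(dpdx * duvdy.y - dpdy * duvdx.y);
   vec3 bitangent = normalize(dpdy * duvdx.x - dpdx * duvdy.x);

   // Decode the tangent-space normal and bring it into world space.
   vec3 n = texture2D(normalMap, uv).rgb * 2.0 - 1.0;
   return normalize(mat3(tangent, bitangent, baseNormal) * n);
}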

Spine and Sprite Lamp in your game engine

So far, the work has been on getting Spine animations displaying in Sprite Lamp. You might be wondering when you can actually put this in your game.

Unfortunately, the answer to that depends on a bunch of factors. I’m only one programmer, and the crossover between Spine and Sprite Lamp is only one of the many parts of Sprite Lamp. Once Sprite Lamp is out (like, out out, not in alpha like it is now) I’ll be able to give a much more thorough look into engine integration. My policy with Spine will be similar to my policy with the rest of Sprite Lamp’s engine integration: I’ll do as many as I reasonably can myself. Since there are a lot of engines in the world, and I can only cover a few of them, I’ll also do my best to document everything you need to know as a programmer to get things working yourself, be it in an existing game engine that I haven’t been able to cover, or in your own hand-coded system.

That all being said, somewhat predictably, my priority on the most widely-used engines will remain, meaning Unity will be first on the list. I’ll post more about that as I know it.

Unity Palette Shader first pass

Hi all,

You know how sometimes, after you’ve been working on some important thing, you get sick after you finish it? Like your body was holding off on getting sick until it could get away with it? Maybe it’s just me, but either way, I got sick right after putting the new version of Sprite Lamp out there. Anyway, sorry it took me a few days, but here’s my first crack at the palette shader from Sprite Lamp, in Unity form.

ZombiePaletteUnity
Come for the palette zombie, stay for the flawless framing of the screengrab.

 

A couple of important things to note:

  • It basically works as you’d expect, I think. Put the textures from Sprite Lamp in the appropriate texture slots on the material and you should be good.
  • Currently it makes use of the diffuse map, just to get the opacity value from it. The obvious thing to do is put the opacity in the index map’s alpha channel, but since Sprite Lamp doesn’t output them that way automatically yet, I made the shader so it works readily with what Sprite Lamp exports.
  • You can get some very crappy results if you don’t load your textures in as ‘uncompressed’. Compressing the palette map in particular will pretty much make everything awful (or at least, that’s what happened when I tested it).
  • The palette system is designed to work without coloured lighting. I might hack something in later that just multiplies the output by the light colour, but that kind of misses the point of having close artist control over what colours are rendered to the screen. Ultimately, if you’re making use of the palette system, the key is to set the lighting mood via the palettes.
  • Since this is a straightforward shader-based implementation, it makes use of simple additive lighting to handle multiple lights. Technically, this isn’t quite correct, but I think for most cases it should look fine. I hope to work on a more complete and correct approach to this in the future, but it might get a little hairy (might require something resembling a deferred rendering pass). I’ll go into some detail later as to how this might work.

Anyway, that’s it! As you can probably guess, this shader is a work in progress, and will change in the future – both in terms of how it works under the hood and how you use it as a developer. Let me know if it’s giving you trouble, failing to compile, or if it’s not clear how to make use of it, and I’ll try to straighten things out.

Sprite Lamp’s palette system

Yesterday I released the latest update to Sprite Lamp, and with it, a properly implemented and documented palette system. Still pending are shaders for various engines to make it usable in your games, but in the meantime, I’m going to give a quick rundown of what it is, how it works, and how the (really pretty simple) shader works.

Why palettes?

This is something that came up due to repeated requests from artists. As a graphics programmer, my first thought was to calculate the light based on the angle between the light and the surface normal using standard Lambertian shading and basically call it a day. However, as several artists pointed out to me, this means that when a part of an image gets darker it approaches black, and artists very rarely use real black in their paintings. Shadows are more likely to have some blue to them, and highlights are likely to be slightly yellow rather than white. There are various ways to get around this – tweaking the ambient and direct light colour, for instance – and while those options remain, I thought it would be good to give artists more direct control. Hence the palette system.

This works by taking the diffuse map (that the artist creates), taking all the unique colours from it, and creating a sort of template palette image. This palette is then saved out and modified by the artist to get the effects they want. A palette image has a vertical column of pixels for every unique colour in the diffuse map. The vertical position in this column represents the colours that those pixels will be at different lighting levels. The midpoint of each column is the colour when that pixel is lit with full diffuse lighting but no specular lighting. Below that, it fades as the diffuse lighting drops away to nothing, and above that, it represents the colour when an increasingly strong specular highlight is added. Sprite Lamp can be used to generate either an ’empty’ palette that will create flat lighting (the columns are the same colour all the way up), or a palette that will produce simple calculated lighting, fading to black at the bottom and white at the top.

As an example, here is the diffuse channel of the Sprite Lamp zombie:

Diffuse image of a zombie

And here are the autogenerated palettes for said zombie (with and without default lighting built in):

EmptyAndDefaultPalettes

From here, you save one of these images out (whichever would work best for you as a guide) and then change the colours as you see fit. The only really important thing is that you generate the index map (more on that later) and then make sure you keep the horizontal positions of the pixel columns in the same place/order in the palette.

Here are some examples of effects that are possible using this system, with the zombie’s palette image visible on the inset.

ZombiePalette2
This is a simple case of adding some blue to the shadows and some yellow to the highlights to add some depth to the result.
ZombiePalette3
In this image, we’ve done something like a traditional palette swap, to make a different colour scheme entirely.
ZombiePalette4
This one makes use of the Dawnbringer palette – it uses fewer colours and gives a more retro look.

The Shader

So, as promised, I’m going to give a quick outline of how the shader works. It’s actually not very complicated, but it hinges around something that Sprite Lamp will generate for you, called an index map. You’ll need a normal map, as usual, and then a diffuse map, and from the diffuse map Sprite Lamp will generate the palette map template (which you then modify) and the index map (which you don’t). The index map for the zombie looks like this:

zombie_Index_Bigger

The grey values in this image are actually values from 0 to 1, representing the horizontal position in the palette map where that column of colours resides. Black represents the leftmost column of the palette map, the second darkest grey (the hair) is the second column across, and so forth. This is why it’s important to keep the columns of the palette map in the same place.

As the shader programmer, all you need to do is calculate some lighting value between zero and one (the way Sprite Lamp’s palette shader does this is by averaging the diffuse and specular components and then clamping to the correct range). Then, a lookup into the index map will get you a greyscale value between zero and one – the horizontal position of this pixel’s colour column in the palette map. From there, you do a lookup into the palette map, using the value read from the index map as your U coordinate and the calculated level of illumination from your lighting algorithm of choice as the V coordinate, and you have your final colour. Because the whole point of the palette system is to give the artist precise control over the colours that end up on the screen, nothing more is done – the colour is output straight to the screen. The pseudocode for the shader might look something like this:

float colourLevel = (diffuseLevel + specularLevel) * 0.5;
float indexPosition = tex2D(indexMap, textureCoords.uv).r;
//Note that we use 1.0 - colourLevel because usually positive V
//goes down the texture map.
vec3 finalColour = 
         tex2D(paletteMap, vec2(indexPosition, 1.0 - colourLevel)).rgb;

The generation of the index map

For the most part this isn’t something you’ll be worried about, but there are certain circumstances where it’s important to know the details. Sprite Lamp generates a palette map from the diffuse map to use as a template, but then it generates the index map from the diffuse and the palette maps together. It does this by going through every pixel of the diffuse map, and searching for that pixel’s colour in the middle row of the palette map (that is, it looks through the row of pixels halfway down the palette map). When it finds the colour it’s looking for, it uses the horizontal position of that match to determine the greyness of that pixel in the index map.
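Sprite Lamp does this on the CPU, but if it helps to see the logic spelled out, here’s a rough sketch of the same per-pixel search expressed as a GLSL fragment pass over the diffuse map. The names are illustrative and this is not how Sprite Lamp is actually implemented:

// Hypothetical GLSL sketch: for each diffuse pixel, find the closest colour in
// the middle row of the palette and write its horizontal position as a grey value.
uniform sampler2D diffuseMap;
uniform sampler2D paletteMap;
uniform float paletteWidth; // width of the palette map, in pixels

varying vec2 texCoords;

void main()
{
   vec3 target = texture2D(diffuseMap, texCoords).rgb;
   float index = 0.0;
   float bestDistance = 1000.0;

   // Scan the row of pixels halfway down the palette map.
   for (float i = 0.0; i < 256.0; i += 1.0)
   {
      if (i >= paletteWidth) break;
      float u = (i + 0.5) / paletteWidth;
      vec3 candidate = texture2D(paletteMap, vec2(u, 0.5)).rgb;
      float d = distance(candidate, target);
      if (d < bestDistance)
      {
         bestDistance = d;
         index = u;
      }
   }

   // The index map stores this horizontal position as a greyscale value.
   gl_FragColor = vec4(vec3(index), 1.0);
}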

This probably isn’t terribly relevant to most use cases, but it’s possible that you will want to use a palette map for more than one piece of art in your game (to render a whole scene coherently, for instance). In that case, you’ll want to make your own palette map from scratch, making sure the important colours are present all the way along the centre. Having done that, paint all your diffuse maps using only colours from that centre row of pixels. Then, load up a given diffuse map, and rather than generating a palette from it, open up the palette you made into the palette map, then click the ‘generate index’ button. If the diffuse colours are all present in the palette’s central row, Sprite Lamp should generate a fully functional index map without issue – it doesn’t matter that there are colours in the palette not present in the diffuse.

Conclusion

I would imagine that this problem has been tackled by other developers at various points through gaming history – this approach is just what came to mind to answer artists’ complaints about black shadows. Given that, this explanation is straight from my brain to the page, as it were. Please let me know if there’s anything I’ve been unclear on, so I can fix my explanation.

More importantly, if you do anything cool and unexpected with this system, I’d be keen to see your work. I’ve already been surprised/impressed with Halley’s implementation of the Dawnbringer palette from above, and I’m confident that various cel shading and other techniques are possible using this system too.

Sprite Lamp: A minor update and apology for lack of communication

Hi all,

I’m coming at you a bit humbly today, because a comment someone made to me recently called me out on being behind schedule and generally not telling people what’s up.

I’ll start by saying that this isn’t the dreaded “Sorry, this project isn’t going to happen” post. Sprite Lamp has been and continues to be in development, and all the promised features are on their way.

What this post is, is me apologising for being less productive than I would have liked over the last two months, and not being very communicative about that fact over that time. I have been in the US recently – for personal reasons I won’t go into, it would have been awkward for me not to go on the trip, so I convinced myself that I could just take my laptop and keep up work on Sprite Lamp while I was there. As it turns out, this was kind of foolish on my part, and in retrospect I should have just stayed home. I did keep working while I was there, but productivity has been lower than I hoped, and I haven’t been very good at keeping you all up to date on this. The current state of play is that I’m working on a few things for the next alpha release – a UI overhaul, a functional/usable palette system, fixes for some issues with engine shaders, and updated documentation are all coming.

The other thing I haven’t communicated about well is release dates. Initially, I stated that Sprite Lamp would be out approximately now. This was when the Kickstarter was initially written, and it was a small project without any stretch goals. It’s grown beyond that, and of course the stretch goals were things that I hadn’t done any development on, some of which have turned out to be slightly rabbit-hole-ish. At this point I’m hoping for one last alpha release in a week or so, and the first beta release in about a month.

So having said all that, I’m now back home and in a much better state to work in an undistracted fashion. I’ve always been a bit shy about social media and the like, but I’m going to put in an extra effort to be forthcoming on that front, too. For now, I figure the best way to make amends is to get right back into coding, so that’s what I’m going to do.

~ Finn

Per-texel lighting

So, I’ve been pretty negligent about documenting this one last feature in the Sprite Lamp shader. It’s rather obscure and difficult to explain, but I’m going to try to overcome that with pictures.

Pixels, texels, and fragments

These three things can get pretty confusing during the coming article, so I wanted to clear a few things up.

You’ve probably heard of pixel and fragment shaders, and observed that they’re basically the same thing. Well, they are. Technically, I believe, the correct term is ‘fragment shader’. A pixel (‘picture element’) is a single tiny rectangle of colour on a computer screen, whereas a fragment also carries depth information and a few other things. A fragment might never become a pixel, because it might fail a depth test or whatever. The wikipedia page on fragments is a bit helpful. Either way, it’s good to know the difference, but this isn’t the most important distinction.

The distinction between a pixel and a texel is more important, especially if you’re a pixel artist. In a sense, I suppose, you could be described as a texel artist – texel means ‘texture element’ – basically, it’s one of the coloured rectangles that make up a texture. This is relevant because sometimes you draw a texture to the screen at a size other than its native size – in fact, in 3D games this is almost always what happens. That’s what filtering is for – bilinear filtering, for example, is a magnification filter, a way of zooming in on a texture without it appearing blocky.

Per texel lighting

This is probably nothing new to artists who are used to working with computer graphics. The point is, when you’re viewing pixel art such that it is deliberately blocky (using nearest neighbour filtering, aka point filtering), each texel is made up of many pixels. That is, each pixel in the source art is blown up to cover many pixels on the monitor. This is the important thing to understand, because when a fragment shader operates on this artwork, it’s doing a lighting calculation once per pixel, NOT once per texel.

The result of this is that, even with the diffuse/normal/everything maps set to nearest neighbour filtering, you end up with colour variation within a texel. Almost always, this is imperceptible and doesn’t matter. However, when combined with cel shading, you can run into trouble. Here’s an example of what I’m talking about:

TexelVersusPixel
Because I have been staring at this kind of thing for a while now, it’s hard for me to tell how obvious the difference between these two images is. If it’s not immediately apparent to you, the point is that the image on the right has diagonal lines – discontinuities in the lighting level – running across texels. The image on the left could just be pixel art with static lighting – the image on the right definitely isn’t. This problem becomes less noticeable if the lighting is changing rapidly, and more noticeable if the light source is moving slowly. To me, and I daresay to various pixel purists out there, this is a problem worth fixing.

As you can see from the screenshot above, this has been solved in the Sprite Lamp shader. However, Sprite Lamp’s preview window makes this problem easier to solve than it is in the general case. I’m still working out the details of solving the general case, although I’ll document them carefully in the Unity shader so it can be reproduced elsewhere.
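For anyone who wants to experiment before then, a minimal sketch of one common approach (not necessarily exactly what Sprite Lamp’s preview shader does) is to snap the texture coordinates used by the lighting calculations to the centre of the current texel, so that every fragment inside a texel computes the same lighting result:

// Hypothetical GLSL snippet: quantise the UVs used for lighting so that all
// fragments within one texel get identical lighting.
uniform vec2 textureResolution; // e.g. vec2(64.0, 64.0) for a 64x64 sprite

vec2 texelSnappedCoords(vec2 uv)
{
   // Snap to the centre of whichever texel this fragment falls inside.
   return (floor(uv * textureResolution) + 0.5) / textureResolution;
}

// Sample the normal map (and derive light direction, attenuation, shadows,
// etc.) using texelSnappedCoords(texCoords), while still sampling the
// diffuse map with the original, unsnapped coordinates.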

Until then, though, I have a question for anyone reading – would this feature be important to you? As mentioned, to me the visual problems are obvious, but I can understand that for many they wouldn’t be. Of course, it’s not relevant for high res art, and it’s mostly not relevant for low res art either unless there’s cel shading involved. Even then, arguably the problem being solved is fairly subtle. I’m curious to hear your thoughts.

 

Engine integration so far

Hi folks! I’ve been spending some time lately investigating a few of the most commonly requested engines for Sprite Lamp integration – namely Unity, Game Maker, and Construct2. The results so far have been pretty positive – I’m hoping to get at least some support for these three engines made public before I head to GDC. With this article, hopefully people who have been wondering about these three engines will come away with a better idea of what will be possible.

Unity

So, Unity takes the lead as the most commonly requested (even though I think most people assume it goes without saying). Fortunately, despite being used for 2D games very often, it is a 3D engine, and that means it has a lighting engine built in – which means this will be both a smooth integration and a feature-complete one.

As some may have seen, this article by Steve Karolewics of Indreams Studios details an initial attempt at getting Sprite Lamp lighting working in Unity. Steve (along with everyone else who has ever used Unity for anything) knows more about Unity than I do, and I’ve taken his work as a base for the full shader (currently a work in progress). So far, I’ve been working on the hardest part – self-shadowing. Here’s progress so far (hint: it’s working).

The light on the left is closer to the stone wall than the one on the right, which affects shadow length and attenuation.

The big part that remains to be done here is that the Sprite Lamp shader has to know the resolution of the textures (for per-texel lighting and for shadowing). At present (for the above screenshot) it’s just hardcoded, but obviously that’s not a long term solution – I’m going to write a little script that runs at load time that sets up the relevant shader variables automatically, so the user doesn’t have to worry about it.

The important thing to note with the Unity integration is that, as mentioned, it already has a lighting system. Light entities already exist in the game, and they will work with Sprite Lamp as you’d expect. Multiple lights work as expected, and Unity will automatically handle stuff like calculating which lights affect a given object. You can also place lights in 3D space (that is, vary their depth position) and have everything work as expected, because as mentioned, Unity is at its heart a 3D engine.

Basically, everything that can be done in the Sprite Lamp preview window is going to be achievable in Unity. Other features that I will most likely add at some point include support for tessellation (making use of the depth map), which might be cool for the sake of parallax or 3D, and (pending further investigation) support for the deferred rendering pipeline.

Game Maker

I often say that I don’t have a clue about Unity, and that’s pretty much true, but I have at least used it before. That is not the case with Game Maker – I literally opened it for the first time the other day to investigate stuff. It’s therefore possible that I’m missing obvious things – if so, please let me know!

That said, the results of this investigation have been positive. The short answer is, lighting is go:
GameMakerSL

This might not look like much, and it’s not as far along as the Unity integration, but the important thing is that I can load my own custom shader – from here on in, being able to get cel shading and self shadowing working is kind of just details.

At present, there are some boilerplate scripts that go along with this too – one that you’ll attach to the Create event of your dynamically lit objects and one that goes with the Draw event – which set appropriate shader variables and that kind of stuff. My understanding is that Game Maker is actually usable without any custom scripts at all, so for the benefit of users who aren’t scripters, I’ll keep the interface with those scripts as small as possible and document thoroughly any interactions you have to do with them. They’ll almost certainly be very simple – stuff like setting light levels, etc.

That brings me to the next part of this. Unlike Unity, Game Maker doesn’t have a lighting engine built in – this means that there aren’t, for instance, native entity types for stuff like light sources. At this point, I don’t understand the engine well enough to know what the best solution to this is, but it should at least be possible to have a dummy object that acts as a light source, and then tell the scripts on the lit objects to update their shader variables based on the position of the dummy object.

Slightly more complicated is the question of multiple light sources. At the very least, it is simple to have a hemispheric ambient light and a single point light at any given time. I can also add a few more lights by just making the shader more complex (basically, have multiple shader variables for multiple light positions, and add the results together in the shader). However, having an arbitrary number of light sources becomes more complicated – this will involve building a system that automatically tracks lights and which objects they’re close enough to have an effect on. For the moment, my first release of the integration with Game Maker will be the shader and code snippets necessary to get basic lighting going. I’d rather get a basic implementation going for multiple engines, then return to more in-depth work like that later if it’s in high demand.
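To give a rough idea of what ‘making the shader more complex’ looks like, here’s a sketch of a fixed handful of point lights with their contributions added together. The names are illustrative and this isn’t the actual Game Maker integration code:

// Hypothetical GLSL fragment snippet: a small, fixed number of point lights,
// with their diffuse contributions simply summed.
#define MAX_LIGHTS 4

uniform vec3 lightPositions[MAX_LIGHTS]; // light positions in world space
uniform vec3 lightColours[MAX_LIGHTS];   // set unused lights to black

vec3 accumulateDiffuse(vec3 fragPos, vec3 normal)
{
   vec3 total = vec3(0.0);
   for (int i = 0; i < MAX_LIGHTS; i++)
   {
      vec3 lightVec = normalize(lightPositions[i] - fragPos);
      float diffuseLevel = clamp(dot(normal, lightVec), 0.0, 1.0);
      total += lightColours[i] * diffuseLevel;
   }
   return total;
}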

Construct2

This is the least advanced and most recent investigation I’ve done, but I know a few important things so far, and I wanted to pass them on. I’ve also been in touch with one of the developers of the engine, which helped a lot.

Basically, the way Construct2 works doesn’t naturally lend itself to this kind of thing, because of course a 2D engine designed for ease of use isn’t designed with 3D lighting in mind. But, it can be done, with a few caveats.

The rendering engine of Construct2 works, simply enough, by drawing sprites. In addition to regular sprites, you can draw effect sprites – these have shaders attached to them, and are often used for things like colour shifts and distortion effects. They also have a texture attached to them, and can sample the colour of what they’re drawn over. We can leverage this to make some lighting effects.

My first attempt is based on a shader by a developer named Pode. Here goes:
ConstructLighting

So far so good. The folks from Scirra tell me there’s a built in bump map shader that would be a good basis for me to work from too, which works in a slightly different way (the above example multiplies the effect result with the underlying colour, but you can also directly sample the underlying colour and render the result over the top).

From here, given access to the shader, I can get a lot of Sprite Lamp’s lighting effects going. The things I said in the last two paragraphs on Game Maker apply to Construct2 as well, regarding multiple lights and dummy objects and whatnot.

However, there’s another issue with Construct2 that is a limiting factor – namely, an effect sprite can only have one texture attached to it. Now, the Sprite Lamp shader makes use of a lot of textures, some more necessary than others. The diffuse texture is taken care of – the diffuse channel will be contained in the layer underneath, rendered before the effect sprite. Obviously normal maps are required. If shadows are required, the depth map can probably be squished into the alpha channel of the normal map, except insofar as you will also need that alpha channel for opacity (since it’s a separate sprite) – possibly this can be worked around, however. Emissive maps can simply be rendered as another additive layer for objects that have them – you won’t even need a special shader for that. However, ambient occlusion and specularity/gloss maps might have to be sacrificed.

There is a possible workaround here that involves putting multiple textures into one (that is, literally just the diffuse and normal maps next to each other in one image, for instance) and then extracting them with uv maths in the shader. If this turns out to be worthwhile, I’ll add an option to Sprite Lamp to automate exporting maps in this form. However, I foresee visual issues if this is used on tiling textures if your game has a camera that zooms in and out (bilinear filtering and mipmaps will not be kind in this situation).
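As a rough illustration of the UV maths involved (assuming, purely for the example, that the diffuse map sits in the left half of the packed image and the normal map in the right half):

// Hypothetical GLSL snippet: sample two maps packed side by side in one texture.
uniform sampler2D packedMap; // left half: diffuse, right half: normal map

void samplePackedMaps(vec2 uv, out vec4 diffuse, out vec3 normal)
{
   // Squash the U coordinate into the left half for the diffuse map...
   vec2 diffuseUV = vec2(uv.x * 0.5, uv.y);
   // ...and shift it into the right half for the normal map.
   vec2 normalUV = vec2(uv.x * 0.5 + 0.5, uv.y);

   diffuse = texture2D(packedMap, diffuseUV);
   normal = normalize(texture2D(packedMap, normalUV).rgb * 2.0 - 1.0);
}

This is also where the filtering issues I just mentioned come from – samples near the seam between the two halves can bleed into each other, especially once mipmapping is involved.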

The other possible issue is performance. Shader complexity notwithstanding, this system requires you to draw at least two sprites for every game entity that needs dynamic lighting. I don’t have much of a feel for how performance gets limited in Construct2, but as an educated guess, if your game already has (or almost has) performance issues relating to the number of sprites on screen at a time, they will probably get worse if you add dynamic lighting to them all – my guess is that this will be a load on both the CPU and the GPU.

One last thing to note about Construct2 that the devs warned me about is the renderer. Everything I’ve said here is completely dependent on shader effects, which require a WebGL renderer. Unfortunately, this means that environments that use canvas2d instead won’t have access to any dynamic lighting effects (notably, Internet Explorer and Safari, apparently). Sorry about that – that’s out of my (and Scirra’s) hands.

Other engines

If you’re reading all this wondering when I’ll get to your engine – don’t worry, this isn’t a complete list. It just made the most sense to approach the most commonly requested engines first, but a minimal implementation doesn’t take a terribly long time for a given engine (assuming it’s possible). Even if it’s an obscure engine that I’m not going to do an official integration for, I’m entirely happy to help you out with getting it up and running yourself. Feel free to give a shout in the comments regarding the engine you want looked at (besides these three).

Sprite Lamp’s basic shader

So, now that I’m back from Christmas/New Year’s festivities, the first order of the day is to do a proper write up of the basic shader from Sprite Lamp’s preview window. I thought it would be good to get this out there early on, because people who are keen to get cracking with a version of it for an engine of their choice can do so. I’ll eventually be doing an implementation for major engines, but of course I can’t cover all engines myself, and I’m betting some people won’t want to wait for me anyway. With the exception of the self-shadowing, none of this is very complicated – if you’re advanced enough to get a normal mapping shader going, this should be well within reach.

Someone suggested that I make a page to collect links to people who have implemented these shaders in various game engines, and I think that’s a pretty great idea, so if you’re working on such an implementation, get in touch! I’ll be putting a page up here soon with that information, and eventually I’ll add my own integrations there when they get done.

First off, if you backed Sprite Lamp (which you can still do via PayPal, by the way), you already have the shaders in plain text. They might have some dodgy commented out code, and they’re not well documented, but they’re there in the ‘Shaders’ directory. The one I’m going to talk about for the moment is StandardPhong.fsd. As the name suggests, fundamentally this is based on the Phong illumination model (not to be confused with Phong shading).

I won’t talk too much about Phong illumination because it’s pretty common and well-documented elsewhere (including at that wikipedia link). Fundamentally, it consists of ambient lighting (pretty much just a constant value), diffuse lighting (which is calculated as the dot product of the light direction and the surface normal), and specular lighting (which is slightly too complicated to sum up in a sentence, but there’s more detail at the wikipedia page). I’m going to go over the modifications I’ve made in the Sprite Lamp version.

Cel shading

This is a fairly simple trick – the more complex palette-driven cel shading will be the subject of another blog post, once the tech is finalised. Simply put, this applies a step pattern to the illumination level of the pixel. The relevant piece of code is this:

diffuseLevel *= cellShadingLevel;
diffuseLevel = floor(diffuseLevel);
diffuseLevel /= cellShadingLevel - 0.5;

By this point in the shader, ‘diffuseLevel’ should have been calculated to contain a value between zero and one representing the diffuse illumination level. ‘cellShadingLevel’ contains an integer value representing the number of levels of illumination (at least two). After this calculation, ‘diffuseLevel’ should be multiplied with the value in the diffuse map. Note that other things that affect the diffuse level, such as shadows (discussed below) or attenuation, should be calculated before this stage.

CelLevels

Wraparound lighting

I’m sure I’m not the first person to think of this feature, because it’s very simple, but I haven’t heard it given a name (perhaps for the same reason). Normal dot lighting is of this form:

float diffuseLevel = clamp(dot(normal, lightVec), 0.0, 1.0);

In this scenario, the surface normal and the unit vector pointing in the direction of the light source are dotted together. This gives a result of 1.0 if the surface is facing directly at the light source, and a result of 0.0 if the surface normal is at right angles to the light rays. However, it also gives a result of -1.0 if the surface is facing directly away from the light source. Physically speaking, surfaces facing away from the light source should all be equally dark, and as they start facing the light source more and more, they become lighter – this is why the value is clamped to be between zero and one.

I don’t have any software handy for generating pretty graphs, but to visualise this, draw a graph of ‘y = cos (x)’ with x between 0 and 180 (degrees). If X is the angle between the surface normal and the light ray, Y is the light level. You’ll notice that from x=90 all the way up to x=180, the lighting value is negative (and thus gets clamped to zero).

However, if you don’t care about your lighting being a bit physically unsound, you can make use of the entire range of angles between 0 and 180 degrees. This makes the lighting wrap around the sides of the object, which can be useful for faking larger light sources (such as the sky while the sun is behind a cloud). For Sprite Lamp, I’ve created a special value called ‘lightWrap’ that varies between zero and one. Zero means normal diffuse lighting, and one means that every surface gets at least some light unless it is facing in the exact opposite direction to the light source. To visualise that second scenario, draw a graph of ‘y = (cos (x) + 1) / 2’. The implementation of this in the Sprite Lamp shader looks like this:

float diffuseLevel = 
     clamp(dot(normal, lightVec) + lightWrap, 0.0, lightWrap + 1.0)
            / (lightWrap + 1.0);

I find that light wrapping can look pretty weird if it gets higher than about 0.5, but of course the easiest way to play with the value is with the slider labelled ‘Wrap-around lighting’ in the Sprite Lamp preview window.

LightWrapping

Hemispheric ambience

This is a ludicrously simple technique that can get you much better value out of your ambient lighting levels. Perhaps everyone out there in industry is doing this already, but for those unfamiliar, I’ll go through this quickly. Instead of specifying a single ambient light colour, with hemispheric ambience you specify two – an above light colour and a below light colour. Usually the above light colour would be a bit lighter, and perhaps a slightly different colour, but it depends on the environment. Then, you simply mix between them based on the y component of your world space normal. In Sprite Lamp’s shader code it looks like this:

float upFactor = normal.y * 0.5 + 0.5;
vec3 ambientResult = ambientLightColour * upFactor + 
                     ambientLightColour2 * (1.0 - upFactor);

In this scenario ‘ambientLightColour’ is the above light colour, and ‘ambientLightColour2’ is the below light colour. ‘upFactor’ is basically the extent to which the normal is facing up. Note that if you aren’t rotating the object in question, ‘upFactor’ can simply be replaced with the green channel of your normal map, which makes things easier still.

Incidentally, if you’re also making use of ambient occlusion maps, you should get the colour of the AO map, mix it with some amount of white (depending on how intense you want the effect), and multiply the result with ‘ambientResult’ for the final ambient light colour.
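In shader terms, that might look something like the following sketch (‘aoMap’ and ‘aoStrength’ are illustrative names, not necessarily what the Sprite Lamp shader uses):

// Hypothetical GLSL snippet: fold an ambient occlusion map into the
// hemispheric ambient result, with adjustable intensity.
uniform sampler2D aoMap;
uniform float aoStrength; // 0.0 = ignore the AO map, 1.0 = full strength

vec3 applyAmbientOcclusion(vec3 ambientResult, vec2 uv)
{
   vec3 occlusion = texture2D(aoMap, uv).rgb;
   // Mix the occlusion value towards white to soften the effect, then
   // darken the ambient light with it.
   return ambientResult * mix(vec3(1.0), occlusion, aoStrength);
}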

Self-shadowing

This is the only effect here that is at all hard on the graphics card, because there’s a loop and a lot of texture lookups involved. It’s also mildly hacky in the way it softens the shadows, and in the current version of the shader included with Sprite Lamp, it includes a few magic numbers. Hopefully I can explain it in a way that makes sense.

float thisHeight = fragPos.z;
vec3 tapPos = vec3(centredTexCoords, fragPos.z + 0.01);
vec3 moveVec = lightVec.xyz * vec3(1.0, -1.0, 1.0) * 0.006;
moveVec.xy *= rotationMatrix;
moveVec.x *= textureResolution.y / textureResolution.x;
for (int i = 0; i < 20; i++)
{
   tapPos += moveVec;
   float tapDepth = 
             texture2D(depthMap, tapPos.xy).x * amplifyDepth;
   if (tapDepth > tapPos.z)
   {
      shadowMult -= 0.125;
   }
}
shadowMult = clamp(shadowMult, 0.0, 1.0);

Essentially what this is doing is tracing a dotted line from the fragment position towards the light source, and for each dot, checking if that point is inside the object (that is, beneath the depth map). Traditionally, if any point is inside the depth map that would mean the ray has hit an object and thus this fragment is in shadow. However, to fake soft shadows, I darken the fragment more as more points are inside the objects. This is completely nonphysical, but I found that it looks pretty alright. At the end, you have a value called ‘shadowMult’ that you multiply against other lighting values to darken them appropriately.

I’ll try to break this up so it makes a bit more sense. Note that fragPos has already been initialised to have a value in it – it represents the world position of this fragment. The x and y values here are calculated from the actual position, but the z value is taken from the depth map. Likewise, ‘shadowMult’ starts out at 1.0 (fully lit) before the loop begins.

We initialise the value ‘thisHeight’ to the z value of fragPos – the height of the fragment we’re shading. ‘tapPos’ refers to the position in the texture (plus height) of the current tap (the dots along the dotted line) that we’re up to, and ‘moveVec’ is the vector that we increment ‘tapPos’ by to get to the next dot in the line. We need to rotate the x and y values of ‘moveVec’, and scale the x value by the texture’s aspect ratio too. Finally, I’ll note that ‘moveVec’ is multiplied by a magic number, in this case 0.006, to determine the step size of the ray trace. Setting this to a higher value will make it possible to cast longer shadows, but at the cost of sometimes missing shadows cast by narrower obstacles.

We then go through the loop an arbitrary number of times (in this case, 20). The more iterations, the more expensive the shader, but also the longer the cast shadows can be.

Each step through the loop starts by moving ‘tapPos’ along the ray by adding ‘moveVec’ to it. We then obtain a value, ‘tapDepth’, by looking up into the depth texture at this position. Finally, we compare ‘tapDepth’ against the z component of ‘tapPos’ – this is checking whether this tap is inside the object or not. If it is, we subtract a value from ‘shadowMult’. The value that gets subtracted is another magic number – in this case 0.125. This number controls the softness of the shadows. At 0.125, it takes 8 or more taps inside the object to completely darken a fragment. If we set this value to 1.0, it would instead make the shadows completely hard (black or white). Lowering this value would make the shadows fuzzier still. At some point I’ll make these magic numbers tweakable from the Sprite Lamp UI.

Conclusion

Hopefully this will serve as a launching point for some of the more enthusiastic and experienced Sprite Lamp users who are wanting to implement the Sprite Lamp shaders in the engine they’re using. I’m not quite satisfied that the self-shadowing system here makes a lot of sense – revisiting it now has reminded me that it was somewhat hacked together when I first made it, and I never got around to cleaning it up. However, it’s hard to know which parts of an article like this need more detail and which parts don’t, so please feel free to get in touch and ask for clarification, and I’ll do my best to update the article.

Tech demo: Complete hand drawn scene with shadows

Today I’m going to talk a bit about a tech demo I was working on recently. It’s Sprite Lamp-related (at time of writing, Sprite Lamp has three days left to reach its next stretch goal, by the way). This is part of a series of blog posts about more advanced techniques that fall into the category of “hand drawing and dynamic lighting” – some make heavy use of Sprite Lamp, others are merely suggestions for things to use in conjunction with Sprite Lamp. Just to be clear, the following examples make use of Sprite Lamp, but they are not possible using only Sprite Lamp. This is an example of something you can achieve with a dedicated graphics programmer on board. So yeah, basically, don’t support/buy Sprite Lamp because you want this effect in your game, unless you’re willing to do a lot of the programming tasks described here.

With that out of the way, the project was Project Lonsdale, and it was an attempt to create a full hand drawn scene with full dynamic lighting and shadowing. The plan was to put it to use in a point-and-click adventure game, and perhaps one day that’s what will happen. I’m showing this off now as a demonstration of some of the deeper uses of Sprite Lamp – this isn’t currently planned as a product launch, it was just a thing I played around with a few months ago.

[youtube=http://youtu.be/uV0wGBylkXs]

So there you have it. Explaining how this was all done will take ages and I don’t want to get too distracted from working on Sprite Lamp, so I’m going to do it in parts. Today I’m going to give an overview of what’s involved in creating the scene, both in terms of art and in terms of processing. Next time, I’ll give an overview of what goes into rendering a frame. Then over time, I’ll write more posts that flesh out bits of these overviews. Hopefully in the end, I’ll have a detailed technical write up.

Note that this tech is currently abandoned (though I may one day pick it up again), and that’s largely because the pipeline was just too involved for Halley and me to realistically create a whole game with. It’s really convoluted, seriously. Fully developing Sprite Lamp has made me realise a few ways in which it could be improved – what I’m describing here is the process, as far as we got.

Creation of a scene

I mentioned in passing that this involves not just Sprite Lamp, but also a bit of 3D modelling and quite a lot of programming. Here’s a quick rundown of what was involved.

1. Make a 3D model of the scene. This was actually a really easy step, because the 3D model didn’t have to be super sophisticated. In fact, for the first scene in the video, I (Finn, the programmer, and not a skilled artist at all) did the modelling in about a half hour. We really just couldn’t think of a good way to do proper shadow casting without some kind of mesh. Alas, the dream of not having to use modelling software at all dies. Here is what our mesh looked like at that point:

Viewed from a completely different angle from the rest of the scenes, of course.

2. Load the mesh into the Lonsdale tool, and manually position the camera where you want it. This is for framing the image. Then, you export a bunch of stuff. Lonsdale at this point rendered out a 16-bit depth map, as well as a very basic preview render to be used as a guide for the artist.

The red channel is the first eight bits of a sixteen bit integer representing depth – the green channel is the other eight bits.

3. Now it’s the artist’s turn. Halley would draw a whole bunch of stuff at this point. She’d draw the lighting images, as per Sprite Lamp’s requirements – the whole scene, approximately matched up to the render of the mesh, lit from all directions. Details that don’t need to cast shadows (such as textures on a surface, embossed lettering, etc) get added here. These would go into (the early version of the thing that eventually became) Sprite Lamp to create a normal map. Meanwhile, she would also paint a diffuse and a specularity map. Finally, she would draw something called a silhouette map. This is important, because it is simply a greyscale image where distinct edges define the outlines of the objects she’s drawn in the scene. There is always going to be a disparity between the rendered depth map and the hand-drawn everything else maps – the silhouette map helps resolve this disparity. More on that later.

The normal map of the scene – the most important part, probably.
Different shades represent different objects – designed to be easy to do an edge detection on.
As though the scene was lit perfectly evenly.

4. And, back to the Lonsdale tool. This would do an edge detection on the silhouette map to figure out the shapes of Halley’s hand-drawn objects, and then jigger around with the rendered depth map to create a ‘fixed’ depth map. It would then combine the fixed depth map (red and green channels) with the silhouette map (blue channel), because the silhouette map is used in the shadowing algorithm. Now we have the depth/silhouette map, the normal map, the diffuse map, and the specularity/glossiness map. Not to mention the mesh. Everything we need to start rendering.

The final result.

Coming up next time… rendering the scene

This is almost as involved as preparing the assets, but it does contain a cool screen-space shadow blur that I think might have some applications in more traditional rendering, too. Stay tuned.

Extended depth map features – goal met!

Over at Sprite Lamp’s Kickstarter campaign, we have just hit our next stretch goal at 20k! This is pretty awesome. This means extended depth map features, so it’s time to give a big rundown of how this will work. I mentioned two aspects of this – depth map editing from within Sprite Lamp, and exporting meshes that are based around the depth map.

Depth map editing

Currently, Sprite Lamp is able to generate a depth map from a normal map. This feature is most visible in the examples I’ve posted in the form of self-shadowing, particularly this image of a stone wall, which I’m going to post again because I love it so much:

Stonewall_TwoProfiles

If you look closely, Sprite Lamp has caught the depth quite accurately, right down to the little notches in the stone, and the bits where there are weird mortar shapes left on. Feel free to inspect the detail:

stonewall2_DepthNew

While it’s important to remember that there’s no such thing as a ‘correct’ depth map based on a drawing, I think the above image is pretty good. In general, I’m pretty pleased with Sprite Lamp’s algorithm/system for generating a depth map from a normal map. This is a pretty difficult problem, made worse by the fact that you get a physically imperfect normal map when it’s drawn by hand. Textures like the one above are, fortunately, a pretty good case for this application. Alas, not everything can be a best case scenario. I won’t go into too much detail here, but the worst case scenarios for Sprite Lamp’s depth generation are all about discontinuities in the depth map. They happen when what you draw has one object passing in front of another – it’s hard for Sprite Lamp to guess how far in front. This doesn’t happen much with textures (like the stone wall, above) but it does happen with character art. Sprite Lamp gets pretty good results here too – good enough for some nice self-shadowing effects, as I think is evidenced by the sample art on the main page and in the Kickstarter.

However! With algorithms like these that involve guessing what the artist intended, there are always times where you guess wrong. For that reason, I’d like to give the user the ability to mess with Sprite Lamp’s ‘interpretation’ manually a little bit. Now, naturally, I’m not going to offload all the dirty work onto the user – if I was going to do that you might as well just draw it by hand. The goal is small and intelligent tweaks.

With that in mind, I have a few features that I’m going to experiment with to make working with depth maps better. I’m not saying these will all make it into the final program – these are what I’ll try, but I don’t yet know what will be worth including and what will end up being a waste of time.

  • Edge detection: Sprite Lamp will go through the lighting profiles and try to figure out where depth discontinuities might be.
  • Silhouette maps: The plan here is to have the artist draw a simple map that is deliberately easy for Sprite Lamp to detect the edges of. This may or may not end up being necessary, but it will mostly be trivial to create because it comes from mask layers that come out of the drawing process anyway.
  • Edge-aware soft selection: The first two points here are ways of figuring out where depth discontinuities are – this is useful information to have, because it allows the user to easily select areas of the depth map without going ‘outside the lines’. If you have a foreground object and a background object, this will allow you to grab the depth values of the foreground object and move it back and forth without interfering with adjacent pixels (which are the depth values of another object entirely).
  • Spring systems representing depth values: This will allow the user to drag a single depth pixel forward and back, and have adjacent pixels come with to a greater or lesser extent. There won’t be springs connecting pixels along discontinuity edges.
  • Curve editor to do an image-wide readjustment of values: This is something similar to how colour readjustment works in programs like Photoshop, and it could indeed be done in those programs – however, I think it might be worth including it in Sprite Lamp because getting immediate feedback on how it looks will make things a lot easier to deal with.

So, that’s that. I’m looking forward to playing around with these things, and of course I’ll keep you posted as to how it all goes. As for the other part of this stretch goal…

Mesh Exporting

The other part of this stretch goal is to add the ability to export meshes. This doesn’t take quite as much explaining as the other feature, though it will take a bit of work on my part. The goal here is to export an intelligently-created mesh that takes the shape of the object according to the depth map. Naturally, the easy approach to this is to simply subdivide a quad into many quads, then displace the vertices according to the depth map. I’m hoping I can do quite a bit better than that, by placing vertices in important places along ridges and points, and making the geometry more sparse in flatter areas. The user will have a slider that allows them to adjust the target vert count of the generated mesh. This has a couple of concrete uses:

  • Stereographic rendering without relying on complex shaders: Naturally, I quite enjoy complex shaders, but there are times when it’s easier to just use geometry. This can save you from various annoying consequences of displacement shaders, such as correctly handling extreme angles.
  • Correct rendering into depth buffers for ‘true’ shadowing: I’m going to be writing a lot about the various sneaky tricks you can use to fake shadows using depth maps – however, there are certain shadowing algorithms where it’s just nicer to do things with geometry. Having geometry at your disposal gives you more options for plugging directly into existing ‘general’ shadowing options, such as shadow mapping and CSMs, which are well-documented already and amply implemented in existing engines.
  • As a base for further tessellation/displacement: The user will be able to determine how strong the vertex displacement will be, and that will include zero – meaning you can use the flat mesh as a basis for more effective tessellation on the GPU, and displace the generated verts based on the depth map in a shader (see the sketch after this list).
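Here’s a rough sketch of that last idea – displacing the flat mesh in a vertex shader using the depth map. It’s illustrative only (the names are made up, and it assumes hardware that supports texture reads in the vertex shader); a tessellation evaluation shader would treat the generated verts the same way:

// Hypothetical GLSL vertex shader snippet: push each vertex of the flat
// generated mesh out along +z, based on the depth map.
uniform sampler2D depthMap;
uniform float displacementScale; // how strongly the depth map displaces
uniform mat4 modelViewProjection;

attribute vec3 position;
attribute vec2 texCoord;

varying vec2 vTexCoord;

void main()
{
   float depth = texture2D(depthMap, texCoord).r;

   // The flat mesh faces the camera, so displace along the z axis.
   vec3 displaced = position + vec3(0.0, 0.0, depth * displacementScale);

   vTexCoord = texCoord;
   gl_Position = modelViewProjection * vec4(displaced, 1.0);
}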

In addition to these, though, I’m looking forward to seeing what else people come up with using this feature. I’m certainly planning on making some 3D prints of artwork once I get it done!

One last thing I’ll say on this subject is that I haven’t settled on formats for exporting of meshes. I’ll probably make use of the dubiously-named AssImp (short for ‘Asset Importer’, obviously), which can export to a handful of formats, in accordance with this list. However, if anyone knows of any straightforward mesh exporting libraries for C# you think would be better, I’m all ears.