
Sprite Lamp’s basic shader

So, now that I’m back from Christmas/New Year’s festivities, the first order of the day is to do a proper write up of the basic shader from Sprite Lamp’s preview window. I thought it would be good to get this out there early on, because people who are keen to get cracking with a version of it for an engine of their choice can do so. I’ll eventually be doing an implementation for major engines, but of course I can’t cover all engines myself, and I’m betting some people won’t want to wait for me anyway. With the exception of the self-shadowing, none of this is very complicated – if you’re advanced enough to get a normal mapping shader going, this should be well within reach.

Someone suggested that I make a page to collect links to people who have implemented these shaders in various game engines, and I think that’s a pretty great idea, so if you’re working on such an implementation, get in touch! I’ll be putting a page up here soon with that information, and eventually I’ll add my own integrations there when they get done.

First off, if you backed Sprite Lamp (which you can still do via PayPal, by the way), you already have the shaders in plain text. They might have some dodgy commented-out code, and they’re not well documented, but they’re there in the ‘Shaders’ directory. The one I’m going to talk about for the moment is StandardPhong.fsd. As the name suggests, fundamentally this is based on the Phong illumination model (not to be confused with Phong shading).

I won’t talk too much about Phong illumination because it’s pretty common and well-documented elsewhere (including at that wikipedia link). Fundamentally, it consists of ambient lighting (pretty much just a constant value), diffuse lighting (which is calculated as the dot product of the light direction and the surface normal), and specular lighting (which is slightly too complicated to sum up in a sentence, but there’s more detail at the wikipedia page). I’m going to go over the modifications I’ve made in the Sprite Lamp version.
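For readers who want the textbook version in front of them before we start modifying it, here’s a minimal Phong sketch in GLSL. To be clear, the variable names here (‘lightColour’, ‘viewVec’, ‘glossiness’ and so on) are placeholders I’ve picked for the example, not Sprite Lamp’s actual uniforms:

// A bare-bones Phong illumination calculation, for reference only.
float diffuseLevel = max(dot(normal, lightVec), 0.0);
vec3 reflectVec = reflect(-lightVec, normal);
float specLevel = pow(max(dot(reflectVec, viewVec), 0.0), glossiness);
vec3 result = ambientColour
            + lightColour * (diffuseLevel * diffuseMapColour
                           + specLevel * specMapColour);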

Cel shading

This is a fairly simple trick – the more complex palette-driven cel shading will be a subject of another blog, once the tech is finalised. Simply put, this applies a step pattern to the illumination level of the pixel. The relevant piece of code is this:

diffuseLevel *= cellShadingLevel;        // scale [0, 1] up to [0, levels]
diffuseLevel = floor(diffuseLevel);      // snap down to a whole level
diffuseLevel /= cellShadingLevel - 0.5;  // rescale back to roughly [0, 1]

By this point in the shader, ‘diffuseLevel’ should have been calculated to contain a value between zero and one representing the diffuse illumination level. ‘cellShadingLevel’ contains an integer value representing the number of levels of illumination (at least two). After this calculation, ‘diffuseLevel’ should be multiplied with the value in the diffuse map. Note that other things that affect the diffuse level, such as shadows (discussed below) or attenuation, should be calculated before this stage.
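To make the ordering concrete, the diffuse side of things might be assembled like this – just a sketch, with ‘shadowMult’, ‘attenuation’ and ‘diffuseMapColour’ standing in for whatever values you’ve computed for shadows, attenuation, and the diffuse map sample:

// A sketch of the ordering described above; names are placeholders.
float diffuseLevel = clamp(dot(normal, lightVec), 0.0, 1.0);
diffuseLevel *= shadowMult * attenuation;   // apply these BEFORE quantising

diffuseLevel *= cellShadingLevel;           // the cel shading step from above
diffuseLevel = floor(diffuseLevel);
diffuseLevel /= cellShadingLevel - 0.5;

vec3 diffuseResult = diffuseMapColour * diffuseLevel;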

[Image: CelLevels]

Wraparound lighting

I’m sure I’m not the first person to think of this feature, because it’s very simple, but I haven’t heard it given a name (perhaps for the same reason). Normal dot lighting is of this form:

float diffuseLevel = clamp(dot(normal, lightVec), 0.0, 1.0);

In this scenario, the surface normal and the unit vector pointing in the direction of the light source are dotted together. This gives a result of 1.0 if the surface is facing directly at the light source, and a result of 0.0 if the surface normal is at right angles to the light rays. However, it also gives a result of -1.0 if the surface is facing directly away from the light source. Physically speaking, surfaces facing away from the light source should all be equally dark, and as they start facing the light source more and more, they become lighter – this is why the value is clamped to be between zero and one.

I don’t have any software handy for generating pretty graphs, but to visualise this, draw a graph of ‘y = cos(x)’ with x between 0 and 180 (degrees). If x is the angle between the surface normal and the light ray, y is the light level. You’ll notice that from x=90 all the way up to x=180, the lighting value is negative (and thus gets clamped to zero).

However, if you don’t care about your lighting being a bit physically unsound, you can make use of the entire range of angles between 0 and 180 degrees. This makes the lighting wrap around the sides of the object, which can be useful for faking larger light sources (such as the sky while the sun is behind a cloud). For Sprite Lamp, I’ve created a special value called ‘lightWrap’ that varies between zero and one. Zero means normal diffuse lighting, and one means that every surface gets at least some light unless it is facing in the exact opposite direction to the light source. To visualise that second scenario, draw a graph of ‘y = (cos (x) + 1) / 2’. The implementation of this in the Sprite Lamp shader looks like this:

float diffuseLevel =
    clamp(dot(normal, lightVec) + lightWrap, 0.0, lightWrap + 1.0)
        / (lightWrap + 1.0);

I find that light wrapping can look pretty weird if it gets higher than about 0.5, but of course the easiest way to play with the value is with the slider labelled ‘Wrap-around lighting’ in the Sprite Lamp preview window.

[Image: LightWrapping]

Hemispheric ambience

This is a ludicrously simple technique that can get you much better value out of your ambient lighting levels. Perhaps everyone out there in industry is doing this already, but for those unfamiliar, I’ll go through this quickly. Instead of specifying a single ambient light colour, with hemispheric ambience you specify two – an above light colour and a below light colour. Usually the above light colour would be a bit lighter, and perhaps a slightly different colour, but it depends on the environment. Then, you simply mix between them based on the y component of your world space normal. In Sprite Lamp’s shader code it looks like this:

float upFactor = normal.y * 0.5 + 0.5;
vec3 ambientResult = ambientLightColour * upFactor + 
                     ambientLightColour2 * (1.0 - upFactor);

In this scenario ‘ambientLightColour’ is the above light colour, and ‘ambientLightColour2’ is the below light colour. ‘upFactor’ is basically the extent to which the normal is facing up. Note that if you aren’t rotating the object in question, ‘upFactor’ can simply be replaced with the green channel of your normal map, which makes things easier still.
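To spell that shortcut out (assuming a normal map stored in the usual zero-to-one encoding, with ‘normalMap’ and ‘texCoords’ as stand-in names):

// For an unrotated sprite, the green channel of the normal map already
// stores normal.y * 0.5 + 0.5, so the remapping can be skipped entirely.
float upFactor = texture2D(normalMap, texCoords).y;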

Incidentally, if you’re also making use of ambient occlusion maps, you should get the colour of the AO map, mix it with some amount of white (depending on how intense you want the effect), and multiply the result with ‘ambientResult’ for the final ambient light colour.
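In code, that suggestion might look something like this sketch – ‘aoMap’ and ‘aoStrength’ are names I’ve made up for the example:

// Fade the AO value towards white, then darken the ambient result with it.
float ao = texture2D(aoMap, texCoords).x;
ao = mix(1.0, ao, aoStrength);   // aoStrength of 0.0 disables the effect
vec3 finalAmbient = ambientResult * ao;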

Self-shadowing

This is the only effect here that is at all hard on the graphics card, because there’s a loop and a lot of texture lookups involved. It’s also mildly hacky in the way it softens the shadows, and in the current version of the shader included with Sprite Lamp, it includes a few magic numbers. Hopefully I can explain it in a way that makes sense.

float shadowMult = 1.0;   // assumed initialisation: start fully lit
float thisHeight = fragPos.z;
vec3 tapPos = vec3(centredTexCoords, fragPos.z + 0.01);   // small bias stops self-shadowing
vec3 moveVec = lightVec.xyz * vec3(1.0, -1.0, 1.0) * 0.006;   // 0.006 = ray step size
moveVec.xy *= rotationMatrix;
moveVec.x *= textureResolution.y / textureResolution.x;   // correct for texel aspect ratio
for (int i = 0; i < 20; i++)
{
   tapPos += moveVec;
   float tapDepth =
             texture2D(depthMap, tapPos.xy).x * amplifyDepth;
   if (tapDepth > tapPos.z)   // is this tap inside the object?
   {
      shadowMult -= 0.125;    // 0.125 controls the shadow softness
   }
}
shadowMult = clamp(shadowMult, 0.0, 1.0);

Essentially what this is doing is tracing a dotted line from the fragment position towards the light source, and for each dot, checking if that point is inside the object (that is, beneath the depth map). Traditionally, if any point is inside the depth map that would mean the ray has hit an object and thus this fragment is in shadow. However, to fake soft shadows, I darken the fragment more as more points fall inside the object. This is completely nonphysical, but I found that it looks pretty alright. At the end, you have a value called ‘shadowMult’ that you multiply against other lighting values to darken them appropriately.

I’ll try to break this up so it makes a bit more sense. Note that fragPos has already been initialised to have a value in it – it represents the world position of this fragment. The x and y values here are calculated from the actual position, but the z value is taken from the depth map.

We initialise the value ‘thisHeight’ to the z value of fragPos, though in the code above, the height of the current tap (the dots along the dotted line) is actually carried in the z component of ‘tapPos’, which starts with a small bias of 0.01 to stop the surface shadowing itself. ‘tapPos’ refers to the position in the texture that we’re looking at, and ‘moveVec’ is the vector that we increment ‘tapPos’ by to get to the next dot in the line. We need to rotate the x and y values of ‘moveVec’, and multiply the x value by the texel aspect ratio too. Finally, I’ll note that ‘moveVec’ is multiplied by a magic number, in this case 0.006, to determine the step size of the ray trace. Setting this to a higher value will make it possible to cast longer shadows, but at the cost of sometimes missing shadows cast by narrower obstacles.

We then go through the loop an arbitrary number of times (in this case, 20). The more iterations, the more expensive the shader, but also the longer the cast shadows can be.

Each step through the loop starts by moving ‘tapPos’ along the ray by adding ‘moveVec’ to it. We then obtain a value, ‘tapDepth’, by looking up into the depth texture at this position. Finally, we compare ‘tapDepth’ against the z component of ‘tapPos’ – this is checking whether this tap is inside the object or not. If it is, we subtract a value from ‘shadowMult’. The value that gets subtracted is another magic number – in this case 0.125. This number controls the softness of the shadows. At 0.125, it takes 8 or more taps inside the object to completely darken a fragment. If we set this value to 1.0, it would instead make the shadows completely hard (black or white). Lowering this value would make the shadows fuzzier still. At some point I’ll make these magic numbers tweakable from the Sprite Lamp UI.
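If you’re adapting this shader yourself and don’t want to wait for those numbers to become tweakable, the obvious move is to hoist them into uniforms. Here’s a sketch of the trace refactored that way – the uniform and function names are mine, not Sprite Lamp’s:

// The shadow trace with the magic numbers exposed as uniforms.
uniform sampler2D depthMap;
uniform float amplifyDepth;
uniform float shadowStepSize;   // was the magic 0.006; apply when building moveVec
uniform float shadowHardness;   // was the magic 0.125

float computeShadowMult(vec3 startPos, vec3 moveVec)
{
   float shadowMult = 1.0;      // start fully lit
   vec3 tapPos = startPos;
   for (int i = 0; i < 20; i++)
   {
      tapPos += moveVec;        // moveVec should already include shadowStepSize
      float tapDepth = texture2D(depthMap, tapPos.xy).x * amplifyDepth;
      if (tapDepth > tapPos.z)
      {
         shadowMult -= shadowHardness;
      }
   }
   return clamp(shadowMult, 0.0, 1.0);
}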

Conclusion

Hopefully this will serve as a launching point for some of the more enthusiastic and experienced Sprite Lamp users who are wanting to implement the Sprite Lamp shaders in the engine they’re using. I’m not quite satisfied that the self-shadowing system here makes a lot of sense – revisiting it now has reminded me that it was somewhat hacked together when I first made it, and I never got around to cleaning it up. However, it’s hard to know which parts of an article like this need more detail and which parts don’t, so please feel free to get in touch and ask for clarification, and I’ll do my best to update the article.

Tech demo: Complete hand drawn scene with shadows

Today I’m going to talk a bit about a tech demo I was working on recently. It’s Sprite Lamp-related (at time of writing, Sprite Lamp has three days left to reach its next stretch goal, by the way). This is part of a series of blog posts about more advanced techniques that fall into the category of “hand drawing and dynamic lighting” – some make heavy use of Sprite Lamp, others are merely suggestions for things to use in conjunction with Sprite Lamp. Just to be clear, the following examples make use of Sprite Lamp, but they are not possible using only Sprite Lamp. This is an example of something you can achieve with a dedicated graphics programmer on board. So yeah, basically, don’t support/buy Sprite Lamp because you want this effect in your game, unless you’re willing to do a lot of the programming tasks described here.

With that out of the way, the project was Project Lonsdale, and it was an attempt to create a full hand drawn scene with full dynamic lighting and shadowing. The plan was to put it to use in a point-and-click adventure game, and perhaps one day that’s what will happen. I’m showing this off now as a demonstration of some of the deeper uses of Sprite Lamp – this isn’t currently planned as a product launch, it was just a thing I played around with a few months ago.

[youtube=http://youtu.be/uV0wGBylkXs]

So there you have it. Explaining how this was all done will take ages and I don’t want to get too distracted from working on Sprite Lamp, so I’m going to do it in parts. Today I’m going to give an overview of what’s involved in creating the scene, both in terms of art and in terms of processing. Next time, I’ll give an overview of what goes into rendering a frame. Then over time, I’ll write more posts that flesh out bits of these overviews. Hopefully in the end, I’ll have a detailed technical write up.

Note that this tech is currently abandoned (though I may one day pick it up again), and that’s largely because the pipeline was just too involved for Halley and me to realistically create a whole game with. It’s really convoluted, seriously. Fully developing Sprite Lamp has made me realise a few ways in which it could be improved – what I’m describing here is the process, as far as we got.

Creation of a scene

I mentioned in passing that this involves not just Sprite Lamp, but also a bit of 3D modelling and quite a lot of programming. Here’s a quick rundown of what was involved.

1. Make a 3D model of the scene. This was actually a really easy step, because the 3D model didn’t have to be super sophisticated. In fact, for the first scene in the video, I (Finn, the programmer, and not a skilled artist at all) did the modelling in about half an hour. We really just couldn’t think of a good way to do proper shadow casting without some kind of mesh. Alas, the dream of not having to use modelling software at all dies. Here is what our mesh looked like at that point:

Viewed from a completely different angle than the rest of the scenes, of course.

2. Load the mesh into the Lonsdale tool, and manually position the camera where you want it. This is for framing the image. Then, you export a bunch of stuff. Lonsdale at this point rendered out a 16-bit depth map, as well as a very basic preview render to be used as a guide for the artist.

The red channel is the first eight bits of a sixteen-bit integer representing depth – the green channel is the other eight bits.

3. Now it’s the artist’s turn. Halley would draw a whole bunch of stuff at this point. She’d draw the lighting images, as per Sprite Lamp’s requirements – the whole scene, approximately matched up to the render of the mesh, lit from all directions. Details that don’t need to cast shadows (such as textures on a surface, embossed lettering, etc) get added here. These would go into (the early version of the thing that eventually became) Sprite Lamp to create a normal map. Meanwhile, she would also paint a diffuse and a specularity map. Finally, she would draw something called a silhouette map. This is important, because it is simply a greyscale image where distinct edges define the outlines of the objects she’s drawn in the scene. There is always going to be a disparity between the rendered depth map and the hand-drawn everything else maps – the silhouette map helps resolve this disparity. More on that later.

The normal map of the scene – the most important part, probably.
Different shades represent different objects – designed to be easy to do an edge detection on.
As though the scene was lit perfectly evenly.

4. And, back to the Lonsdale tool. This would do an edge detection on the silhouette map to figure out the shapes of Halley’s hand-drawn objects, and then jigger around with the rendered depth map to create a ‘fixed’ depth map. It would then combine the fixed depth map (red and green channels) with the silhouette map (blue channel), because the silhouette map is used in the shadowing algorithm. Now we have the depth/silhouette map, the normal map, the diffuse map, and the specularity/glossiness map. Not to mention the mesh. Everything we need to start rendering.
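To make the channel layout concrete, here’s how a shader might unpack that combined map – a sketch only, assuming the red channel holds the high byte of the sixteen-bit depth (as the earlier caption describes) and using made-up names:

// Unpacking the combined depth/silhouette map; names are placeholders.
vec3 combined = texture2D(depthSilhouetteMap, texCoords).xyz;
float depth = (combined.x * 255.0 * 256.0 + combined.y * 255.0) / 65535.0;
float silhouette = combined.z;   // the silhouette lives in the blue channel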

The final result.

Coming up next time… rendering the scene

This is almost as involved as preparing the assets, but it does contain a cool screen-space shadow blur that I think might have some applications in more traditional rendering, too. Stay tuned.

Extended depth map features – goal met!

Over at Sprite Lamp’s Kickstarter campaign, we have just hit our next stretch goal at 20k! This is pretty awesome. This means extended depth map features, so it’s time to give a big rundown of how this will work. I mentioned two aspects of this – they are depth map editing from within Sprite Lamp, and exporting meshes that are based around the depth map.

Depth map editing

Currently, Sprite Lamp is able to generate a depth map from a normal map. This feature is most visible in the examples I’ve posted in the form of self-shadowing, particularly this image of a stone wall, which I’m going to post again because I love it so much:

[Image: Stonewall_TwoProfiles]

If you look closely, Sprite Lamp has caught the depth quite accurately, right down to the little notches in the stone, and the bits where there are weird mortar shapes left on. Feel free to inspect the detail:

[Image: stonewall2_DepthNew]

While it’s important to remember that there’s no such thing as a ‘correct’ depth map based on a drawing, I think the above image is pretty good. In general, I’m pretty pleased with Sprite Lamp’s algorithm/system for generating a depth map from a normal map. This is a pretty difficult problem, made worse by the fact that you get a physically imperfect normal map when it’s drawn by hand. Textures like the one above are, fortunately, a pretty good case for this application. Alas, not everything can be a best case scenario. I won’t go into too much detail here, but the worst case scenarios for Sprite Lamp’s depth generation are all about discontinuities in the depth map. They happen when what you draw has one object passing in front of another – it’s hard for Sprite Lamp to guess how far in front. This doesn’t happen much with textures (like the stone wall, above) but it does happen with character art. Sprite Lamp gets pretty good results here too – good enough for some nice self-shadowing effects, as I think is evidenced by the sample art on the main page and in the Kickstarter.

However! With algorithms like these that involve guessing what the artist intended, there are always times where you guess wrong. For that reason, I’d like to give the user the ability to mess with Sprite Lamp’s ‘interpretation’ manually a little bit. Now, naturally, I’m not going to offload all the dirty work onto the user – if I was going to do that you might as well just draw it by hand. The goal is small and intelligent tweaks.

With that in mind, I have a few features that I’m going to experiment with to make working with depth maps better. I’m not saying these will all make it into the final program – these are what I’ll try, but I don’t yet know what will be worth including and what will end up being a waste of time.

  • Edge detection: Sprite Lamp will go through the lighting profiles and try to figure out where depth discontinuities might be.
  • Silhouette maps: The plan here is to have the artist draw a simple map that is deliberately easy for Sprite Lamp to detect the edges of. It may or may not end up being necessary, but it will mostly be trivial to create because it comes from mask layers that are part of the drawing process anyway.
  • Edge-aware soft selection: The first two points here are ways of figuring out where depth discontinuities are – this is useful information to have, because it allows the user to easily select areas of the depth map without going ‘outside the lines’. If you have a foreground object and a background object, this will allow you to grab the depth values of the foreground object and move it back and forth without interfering with adjacent pixels (which are the depth values of another object entirely).
  • Spring systems representing depth values: This will allow the user to drag a single depth pixel forward and back, and have adjacent pixels come along with it to a greater or lesser extent. There won’t be springs connecting pixels along discontinuity edges.
  • Curve editor to do an image-wide readjustment of values: This is something similar to how colour readjustment works in programs like Photoshop, and it could indeed be done in those programs – however, I think it might be worth including it in Sprite Lamp because getting immediate feedback on how it looks will make things a lot easier to deal with.

So, that’s that. I’m looking forward to playing around with these things, and of course I’ll keep you posted as to how it all goes. As for the other part of this stretch goal…

Mesh Exporting

The other part of this stretch goal is to add the ability to export meshes. This doesn’t take quite as much explaining as the other feature, though it will take a bit of work on my part. The goal here is to export an intelligently-created mesh that takes the shape of the object according to the depth map. Naturally, the easy approach to this is to simply subdivide a quad into many quads, then displace the vertices according to the depth map. I’m hoping I can do quite a bit better than that, by placing vertices in important places along ridges and points, and making the geometry more sparse in flatter areas. The user will have a slider that allows them to adjust the target vert count of the generated mesh. This has a couple of concrete uses:

  • Stereoscopic rendering without relying on complex shaders: Naturally, I quite enjoy complex shaders, but there are times when it’s easier to just use geometry. This can save you from various annoying consequences of displacement shaders, such as correctly handling extreme angles.
  • Correct rendering into depth buffers for ‘true’ shadowing: I’m going to be writing a lot about the various sneaky tricks you can use to fake shadows using depth maps – however, there are certain shadowing algorithms where it’s just nicer to do things with geometry. Having geometry at your disposal gives you more options for plugging directly into existing ‘general’ shadowing options, such as shadow mapping and CSMs, which are well-documented already and amply implemented in existing engines.
  • As a base for further tessellation/displacement: The user will be able to determine how strong the vertex displacement will be, and that will include zero – that means you can use the flat mesh as a basis for more effective GPU-based tessellation, and displace the generated verts based on the depth map in a shader (there’s a sketch of this below).
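Here’s the sketch promised in that last bullet point – a vertex shader displacing the flat exported mesh by the depth map. The uniform names are placeholders, and it assumes hardware and a GLSL version that allow texture fetches in vertex shaders:

// Displacing the exported mesh in a vertex shader.
uniform sampler2D depthMap;
uniform float displaceAmount;   // zero gives you the flat mesh back

void main()
{
   float depth = texture2DLod(depthMap, gl_MultiTexCoord0.xy, 0.0).x;
   vec4 pos = gl_Vertex;
   pos.z += depth * displaceAmount;
   gl_Position = gl_ModelViewProjectionMatrix * pos;
}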

In addition to these, though, I’m looking forward to seeing what else people come up with using this feature. I’m certainly planning on making some 3D prints of artwork once I get it done!

One last thing I’ll say on this subject is that I haven’t settled on formats for exporting of meshes. I’ll probably make use of the dubiously-named AssImp (short for ‘Asset Importer’, obviously), which can export to a handful of formats, in accordance with this list. However, if anyone knows of any straightforward mesh exporting libraries for C# you think would be better, I’m all ears.

Returning from hiatus with Sprite Lamp

Okay so, I’ve been pretty absent around these parts. To some extent that’s because I’m a socially backwards hermit who hates the internet, but it’s also because I’ve been working on something that I haven’t gone public with yet.  Well, now I have, and that something is called Sprite Lamp.

Unlike everything else I’ve ever worked on or thought about working on, Sprite Lamp is not a game but a tool (for games). It’s all about combining dynamic lighting with 2D art. I think it’s kind of cool. You can read some details about Sprite Lamp or just enjoy this picture of the logo for it, at your leisure. I will be talking a bit more about this kind of stuff on social media, or at least trying to, in the near future, so that’s something to look forward to, too.

I’m going to be going all Kickstarter with this thing as soon as Kickstarter lets Australian projects go live, which is November the 13th. Wish me luck.

[Image: Sprite-Banner-big]

Messing up FRAPS – a warning

Well here’s an interesting thing that I haven’t gotten to the bottom of yet.

I was just recently putting together an entry for a game competition. Since I like to mess with shaders and other odd graphics card things, examples of that activity needed to be collected for the submission. Among them was a thing called the Cave Demo – basically the idea is that it was to be a game where you are exploring a procedurally generated cave, but unlike most such caves, this one has its generation parameters tweaked on the fly, causing it to shift around all crazy-like. Unfortunately, I’m not convinced that this was going to become a good game, so for the moment I’ve stopped working on it – it did result in some really cool-looking things though, at least to my tastes.

As it turns out, rendering a cave each frame is pretty easy – you render a bunch of stuff into a greyscale image, with the map representing some kind of density function, then by rendering with that image as your punchthrough texture you do a sort of automated Marching Squares thing, and bam, you’ve got a cave. You can render greyscale images into the cave buffer additively to get various effects like platforms appearing and disappearing, that kind of thing. It’s neat. From there it’s just a matter of interpreting the density map in a more complicated shader to generate a more impressive looking cave. Obviously this is a really ineffective explanation because how this was all done is maybe a blog post for another day.
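To give at least a slightly less ineffective explanation, the punchthrough step boils down to something like this fragment shader – a sketch, with ‘densityMap’ and ‘threshold’ being assumed names:

// Discard fragments wherever the density function says 'empty space'.
uniform sampler2D densityMap;
uniform float threshold;
varying vec2 texCoords;

void main()
{
   float density = texture2D(densityMap, texCoords).x;
   if (density < threshold)
      discard;   // punch through: no cave wall here
   gl_FragColor = vec4(0.5, 0.4, 0.3, 1.0);   // placeholder solid colour
}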

Anyway, the key to this is that that’s actually not all that’s required if you want to have actual gameplay that interacts with the cave. This ever-changing cave will be needing functional collision detection, and for that, we need to get some or all of the cave buffer from video memory back into main memory so the CPU can deal with it. This can be annoying to set up, and it’s not the speediest operation in the world, but it’s doable and I did it and I was quite pleased with myself.

Fast forward to the other night, where I’m assembling a bunch of videos of stuff I’ve worked on to prove to people that I can make stuff. I fire up my trusty FRAPS and record what I presume to be some rad footage. I open it up in Media Player Classic and am confronted with something way way less rad than I was expecting. Rather than capturing what was actually on the screen, FRAPS has captured the contents of the cave buffer – a weird-looking blurry image about a quarter the size of the video frame, and wholly unsuitable for entering into a competition of any kind.

This was pretty soon before the competition was going to close, so I didn’t really have time to do much about it other than use a camera to film the footage and hope the resulting low quality doesn’t offend people too much. I did manage to figure out that the act of getting stuff back from the video memory to the main memory was what was doing me in. I discovered this using the time-honoured debugging technique of commenting that bit out and seeing if it fixed the problem. So that gives me something to go on, although it didn’t actually help at the time because of course getting rid of all the game’s collision detection kind of broke things. When I do figure out what’s really going on here and if there’s a way to get around it, I’ll update this post with more helpful advice.

In the meantime, don’t make the same mistake I did! There are things that can trip FRAPS up.

Faking depth values

This is going to be a short but technical post, about some information that I found surprisingly hard to find online. If you’re a programmer in the gaming world, you’re probably pretty familiar with depth buffers. If you’re not, you might find this post a bit confusing or tedious.

One thing you might know about depth buffers is that they don’t exactly store a simple depth value. They store a weird non-linear version of it, in a clever way to get you better depth precision closer to the camera. A good write-up of the gritty details of how this works can be found in an article called ‘Depth Buffer – The gritty details’.

Another thing you might know is that although the depth values written out are usually pretty standard (and derived from the depth of the pixel in the way described above), it’s possible to write non-standard depth values from the pixel shader. This can be useful for techniques such as imposters.

I had need recently to fake some depth values, and I was surprised that I couldn’t find the formula for converting standard depth to depth-buffer-friendly depth. Perhaps this is because my google-fu is weak – I’m not entirely sure – but I did eventually figure out the maths required, and I present it here on the off chance that someone finds it helpful.

You will need three numbers. The near and far clip distances set on your camera, and the depth of the fragment. Note that this isn’t quite the same as the distance from the camera to the fragment – it’s the distance in the direction of the camera’s forward vector (that’s what you get by multiplying the fragment’s world position by the view matrix and taking just the z component).

So to get from there to the value you write to the depth buffer, here’s what you need:

zBufferValue = (farClip / fragDistance) * ((fragDistance - nearClip) / (farClip - nearClip))

That’s it! In glsl you write that value to a built-in called gl_FragDepth – how it’s done in other shader languages is an exercise for the reader, because I have no idea.
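For convenience, here’s the same formula as a small GLSL helper (assuming ‘nearClip’ and ‘farClip’ are uniforms matching the clip distances set on your camera):

// Convert a view-space distance to a depth-buffer value, per the formula above.
float depthBufferValue(float fragDistance, float nearClip, float farClip)
{
   return (farClip / fragDistance)
        * ((fragDistance - nearClip) / (farClip - nearClip));
}

// Usage in a fragment shader:
// gl_FragDepth = depthBufferValue(dist, nearClip, farClip);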

Anyway, I realise this article falls quite a bit shy of a proper tutorial on depth buffers or imposters or whatever – maybe I’ll come back and write one of those later – but for the moment I just wanted to write down that formula. I hope somehow it is useful for someone’s particular purpose.