Category Archives: Graphics

Returning from hiatus with Sprite Lamp

Okay so, I’ve been pretty absent around these parts. To some extent that’s because I’m a socially backwards hermit who hates the internet, but it’s also because I’ve been working on something that I haven’t gone public with yet.  Well, now I have, and that something is called Sprite Lamp.

Unlike everything else I’ve ever worked on or thought about working on, Sprite Lamp is not a game but a tool (for games). It’s all about combining dynamic lighting with 2D art. I think it’s kind of cool. You can read some details about Sprite Lamp or just enjoy this picture of the logo for it, at your leisure. I will be talking a bit more about this kind of stuff on social media, or at least trying to, in the near future, so that’s something to look forward to, too.

I’m going to be going all Kickstarter with this thing as soon as Kickstarter lets Australian projects go live, which is November the 13th. Wish me luck.

[Image: Sprite Lamp banner]

Messing up FRAPS – a warning

Well here’s an interesting thing that I haven’t gotten to the bottom of yet.

I was just recently putting together an entry for a game competition. Since I like to mess with shaders and other odd graphics card things, I needed to collect examples of that activity for the submission. Among them was a thing called the Cave Demo – basically, the idea was a game where you explore a procedurally generated cave, but unlike most such caves, this one has its generation parameters tweaked on the fly, causing it to shift around all crazy-like. Unfortunately, I’m not convinced that this was going to become a good game, so for the moment I’ve stopped working on it – it did result in some really cool-looking things though, at least to my tastes.

As it turns out, rendering a cave each frame is pretty easy – you render a bunch of stuff into a greyscale image, with the map representing some kind of density function, then by rendering with that image as your punchthrough texture you do a sort of automated Marching Squares thing, and bam, you’ve got a cave. You can render greyscale images into the cave buffer additively to get various effects like platforms appearing and disappearing, that kind of thing. It’s neat. From there it’s just a matter of interpreting the density map in a more complicated shader to generate a more impressive-looking cave. Obviously this is a pretty cursory explanation – how it was all done is maybe a blog post for another day.
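The density-map idea above can be sketched in plain Python/NumPy – bearing in mind that the real demo did this on the GPU, and the density function here is a made-up stand-in (a couple of shifting sine waves) just to show the shape of the technique:

```python
import numpy as np

def cave_density(width, height, t):
    """Toy density function: overlapping sine waves whose phases depend
    on t, so the resulting cave shifts around as t changes. A stand-in
    for whatever is rendered into the greyscale cave buffer."""
    ys, xs = np.mgrid[0:height, 0:width].astype(float)
    d = (np.sin(xs * 0.15 + t) + np.cos(ys * 0.11 - t * 0.7)) * 0.25 + 0.5
    return d  # greyscale "cave buffer" with values in [0, 1]

def carve(density, threshold=0.5):
    """Interpret the density map: cells at or above the threshold are
    solid rock, cells below it are open cave."""
    return density >= threshold

density = cave_density(64, 32, t=1.0)
solid = carve(density)
```

In the real thing the thresholding happens in the shader (that’s the punchthrough step), but the principle is the same: the greyscale buffer is the level, and everything else is derived from it.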

Anyway, the catch is that that’s not all that’s required if you want to have actual gameplay that interacts with the cave. This ever-changing cave needs functional collision detection, and for that, we need to get some or all of the cave buffer from video memory back into main memory so the CPU can deal with it. This can be annoying to set up, and it’s not the speediest operation in the world, but it’s doable and I did it and I was quite pleased with myself.
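Once the buffer is back in main memory, the collision query itself is simple. Here’s a minimal sketch, assuming the read-back pixels (as you’d get from something like glReadPixels) have landed in a NumPy array – the function name and threshold are my own invention, not the demo’s actual code:

```python
import numpy as np

def collide(cave_pixels, x, y, threshold=128):
    """Point-vs-cave test against the read-back greyscale buffer.
    cave_pixels is a (height, width) uint8 array; a pixel at or above
    the threshold counts as solid rock."""
    h, w = cave_pixels.shape
    if not (0 <= x < w and 0 <= y < h):
        return True  # treat anywhere outside the buffer as solid
    return cave_pixels[y, x] >= threshold

# A tiny fake read-back buffer: left half open cave, right half solid.
buf = np.zeros((4, 8), dtype=np.uint8)
buf[:, 4:] = 255
```

Because the buffer is regenerated every frame, the CPU-side copy is always a frame or so behind the GPU-side one, but for a slowly morphing cave that’s close enough.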

Fast forward to the other night, where I’m assembling a bunch of videos of stuff I’ve worked on to prove to people that I can make stuff. I fire up my trusty FRAPS and record what I presume to be some rad footage. I open it up in Media Player Classic and am confronted with something way way less rad than I was expecting. Rather than capturing what was actually on the screen, FRAPS has captured the contents of the cave buffer – a weird-looking blurry image about a quarter the size of the video frame, and wholly unsuitable for entering into a competition of any kind.

This was pretty soon before the competition was going to close, so I didn’t really have time to do much about it other than use a camera to film the footage and hope the resulting low quality wouldn’t offend people too much. I did manage to figure out that the act of getting stuff back from video memory to main memory was what was doing me in. I discovered this using the time-honoured debugging technique of commenting that bit out and seeing if it fixed the problem. So that gives me something to go on, although it didn’t actually help at the time because of course getting rid of all the game’s collision detection kind of broke things. When I figure out what’s really going on here, and whether there’s a way around it, I’ll update this post with more helpful advice.

In the meantime, don’t make the same mistake I did! There are things that can trip FRAPS up.

Faking depth values

This is going to be a short but technical post, about some information that I found surprisingly hard to find online. If you’re a programmer in the gaming world, you’re probably pretty familiar with depth buffers. If you’re not, you might find this post a bit confusing or tedious.

One thing you might know about depth buffers is that they don’t exactly store a simple depth value. They store a weird non-linear version of it, in a clever way that gets you better depth precision closer to the camera. A good write-up of how this works can be found in the article called Depth Buffer – The gritty details.

Another thing you might know is that although the depth values written out are usually pretty standard (and derived from the depth of the pixel in the way described above), it’s possible to write non-standard depth values from the pixel shader. This can be useful for techniques such as imposters.

I had need recently to fake some depth values, and I was surprised that I couldn’t find the formula for converting from standard depth to depth-buffer-friendly depth. Perhaps this is because my google-fu is weak – I’m not entirely sure – but I did eventually figure out the maths required and I present it here on the off chance that someone finds it helpful.

You will need three numbers: the near and far clip distances set on your camera, and the depth of the fragment. Note that this isn’t quite the same as the distance from the camera to the fragment – it’s the distance in the direction of the camera’s forward vector (that’s what you get by multiplying the fragment’s world position by the view matrix and taking just the z component – though note that in OpenGL’s convention the camera looks down negative z in view space, so you’ll want to negate that z to get a positive distance).

So to get from there to the value you write to the depth buffer, here’s what you need:

zBufferValue = (farClip / fragDistance) * ((fragDistance - nearClip) / (farClip - nearClip))
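As a sanity check, here’s that same formula in plain Python (the function name and the example clip distances are mine, purely for illustration). The near plane should come out as 0, the far plane as 1, and most of the precision should be spent close to the camera:

```python
def depth_to_zbuffer(frag_distance, near_clip, far_clip):
    """Convert a linear view-space depth (distance along the camera's
    forward vector) into the non-linear value stored in the depth buffer."""
    return (far_clip / frag_distance) * \
           ((frag_distance - near_clip) / (far_clip - near_clip))

near, far = 0.1, 100.0
at_near = depth_to_zbuffer(near, near, far)   # 0.0
at_far = depth_to_zbuffer(far, near, far)     # 1.0
at_one = depth_to_zbuffer(1.0, near, far)     # already ~0.9
```

That last line shows the non-linearity nicely: a fragment only 1 unit from the camera – one percent of the way to the far plane – already uses up about ninety percent of the depth buffer’s range.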

That’s it! In GLSL you write that value to a built-in called gl_FragDepth – how it’s done in other shader languages is an exercise for the reader, because I have no idea.

Anyway, I realise this article falls quite a bit shy of a proper tutorial on depth buffers or imposters or whatever – maybe I’ll come back and write one of those later – but for the moment I just wanted to write down that formula. I hope somehow it is useful for someone’s particular purpose.