Monday, July 22, 2013

Distortion Effects

Hi, it's time for a super rad tech update! Last week, I mentioned something about music...well, I needed some art to really show it off correctly (Art? For music?) so I did something else this week instead: distortion effects!

This update will include a fairly technical overview of my graphics pipeline. If you'd rather just see the cool special effects, you can skip straight to the bottom and enjoy the video.

A popular graphics technique is to warp sections of the screen. This can be used for all sorts of effects, such as shock waves, fish-eye lenses, translucent surfaces, heat hazes, etc. Here are some examples, taken from the first few pages of a Google image search for '2d distortion shader':




Step by step, I'm going to go over how I implemented the infrastructure for this in the Super Obelisk engine. For clarity, I'll cover the entire graphics pipeline, including the lighting and slab (layering) components. One nifty little trick--and this is very useful during development--is the ability to save the contents of the current OpenGL frame buffer (i.e., render surface) to a file, so we can see what's being drawn behind the scenes before the final image is presented to the user.
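
If you're curious how that works, here's a minimal sketch in C++ (my own illustration, not the engine's actual code--it assumes a current OpenGL context with the buffer you want to inspect bound, and I've picked PPM as the output format for simplicity):

    #include <GL/gl.h>
    #include <cstdio>
    #include <vector>

    // Dump the currently bound framebuffer to a binary PPM file (RGB only;
    // alpha is dropped). The name and format are my choices for this sketch.
    void DumpFramebufferToPpm(const char* path, int width, int height)
    {
        std::vector<unsigned char> pixels(width * height * 3);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);   // rows are tightly packed
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

        FILE* file = std::fopen(path, "wb");
        if (!file) return;
        std::fprintf(file, "P6\n%d %d\n255\n", width, height);
        // OpenGL hands rows back bottom-up; write them top-down so the
        // saved image isn't vertically flipped.
        for (int y = height - 1; y >= 0; --y)
            std::fwrite(&pixels[y * width * 3], 1, width * 3, file);
        std::fclose(file);
    }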

The following image is our final result for this example.




Note the following features:

1. The scene has multiple slabs. The player is rendered on the bottom slab. Some blobs are on the top slab.
2. Lights from the bottom slab don't shine onto the top slab. Lights from the top slab, however, shine onto the bottom slab.
3. A distortion bubble on the lower slab is obscured by the upper slab (overpass). Also, the bubble does not show the contents of the upper slab.
4. The distortion bubble on the upper slab, however, contains the entire scene, including the other bubble on the lower slab.
5. Two distortions on the right side of the screen represent a pure left shift and a pure right shift. Where the sprites overlap, there is no distortion, since they cancel each other out. This is kind of hard to see without zooming way in and looking carefully at the floor tiles.

And now, how we got to this scene. The engine gathers up all render data and sorts it into separate rendering layers. In this example, there are two slabs being rendered. I've removed the HUD for simplicity. Here's what happens:

1. Lights on both slabs are rendered.

Lights are on a special layer type and are drawn with a very simple shader which converts world position to screen position via a transformation matrix. The results are stored in the lights buffer. Note that the 'clear color' is the ambient color of the section.

Because we want lights from the upper slab to shine down onto the lower one, we render all lights that are 'on or above' the current slab.
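
In code, that pass might look roughly like this--a sketch with hypothetical names (Light, lightBuffer, BindFramebuffer, and DrawLightSprite are stand-ins for whatever the engine actually calls these, and additive accumulation of the lights is my assumption):

    #include <vector>

    struct Light { int slab; /* position, radius, color, ... */ };

    void RenderLightBuffer(int currentSlab, const std::vector<Light>& lights,
                           float ambientR, float ambientG, float ambientB)
    {
        BindFramebuffer(lightBuffer);                      // hypothetical helper
        glClearColor(ambientR, ambientG, ambientB, 1.0f);  // 'clear color' = ambient
        glClear(GL_COLOR_BUFFER_BIT);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);           // assumption: lights add together

        for (const Light& light : lights)
            if (light.slab >= currentSlab)     // only lights on or above this slab
                DrawLightSprite(light);        // simple world-to-screen shader
    }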

Here is what the lights buffer looks like after this:



2. Distortion sprites on the lower slab are rendered.

Next, we'll render the distortion sprites to the distortion buffer. The shader for these is the same as for the lights buffer--a very simple world-to-screen transformation. I've actually slightly modified the output here for demonstration purposes only--since an alpha value of '0' is typical for these pixels (keep reading to find out why), you wouldn't even be able to see most of the image if I didn't set the alpha to '1' for this image.
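
In code terms, the setup for this pass probably looks something like this--a sketch, assuming the buffer is cleared to all-zero RGBA and the sprites are blended additively (which is what the scheme in step 4 relies on), with hypothetical helper names:

    BindFramebuffer(distortionBuffer);        // hypothetical helper
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);     // all-zero pixel = "no distortion here"
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);              // overlapping sprites sum channel-wise

    for (const Sprite& sprite : distortionSprites)
        DrawWorldToScreen(sprite);            // same simple shader as the light pass

The result of this rendering: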




3. The scene on the lower slab is rendered and combined with the lights.

At this stage, the bulk of the rendering occurs. All remaining entities (tiles, players, enemies, particles, etc.) are rendered onto another buffer, which we'll call bufferA. The shader program for this part of the rendering uses the light buffer from step 1 as a source texture. Each pixel multiplies its output by the corresponding pixel of the light texture. Although not pictured, this shader is also responsible for hue shifts, like when an enemy gets hit and flashes different colors (or when a bomb is about to explode).
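
Written out as plain C++ instead of the actual shader (this is a sketch of the idea, not the engine's shader code; Color and ApplyHueShift are hypothetical stand-ins), the per-pixel combine is just:

    struct Color { float r, g, b, a; };

    Color ShadeScenePixel(Color scene, Color light, Color hueShift)
    {
        // modulate the scene by the light buffer: areas lit only by ambient
        // darken toward the ambient color, areas near lights stay bright
        Color out = { scene.r * light.r, scene.g * light.g,
                      scene.b * light.b, scene.a };
        return ApplyHueShift(out, hueShift);   // e.g. the enemy hit flash (hypothetical)
    }

Here's what we have so far; keep in mind there's something fishy about how the alpha channel is saved to the file, which is why the shadows look odd in some of these pictures: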



4. Distortion is applied.

Now, we take the output from step 3 (bufferA, in this case) and copy it to bufferB. The copy will use the distortion map from step 2 to warp the picture. The default case, where everything (RGBA) is 0, will just do a straight copy of the pixel (no distortion). So, on bufferB, a pixel at (25, 10) would be sampled directly from (25, 10) on bufferA.

But if the pixel isn't pure black, we apply distortion on the X and Y axes:
  • Red represents a distortion to the left.
  • Green represents a distortion to the right.
  • Blue and Alpha represent a distortion up and down, respectively.
A popular distortion scheme uses only the Red and Green channels of the texture: Red controls the X axis, where values less than 0.5 (128 on a 0-255 scale) represent a left shift and values greater than 0.5 represent a right shift, and Green controls the Y axis the same way. Here is an image I found online that is used for that distortion algorithm:


The problem with this scheme is that it doesn't work well with additive blend, meaning distortions can't be overlapped with any meaningful result. For example, a solid sprite with RGBA = (0.4, 0, 0, 0) would be a slight left shift. But two of those overlapped with additive blend would be RGBA = (0.8, 0, 0, 0)--which is actually a right shift, instead of a stronger left shift!

My scheme, however, correctly intensifies the shift as sprites overlap--or cancels it out entirely if, say, the Red and Green components line up (which is exactly what the example image shows with the distortion boxes!).
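
Here's how the lookup from step 4 works under my scheme--a sketch, with Image, Color, the scale factor, and the exact sign conventions being my own stand-ins rather than the engine's real code:

    Color SampleDistorted(const Image& bufferA, const Image& distortion,
                          int x, int y, float strength)
    {
        Color d = distortion.At(x, y);
        int dx = (int)((d.g - d.r) * strength);   // red pulls one way, green the other
        int dy = (int)((d.a - d.b) * strength);   // blue and alpha likewise, vertically
        // An all-zero d gives dx = dy = 0: a straight copy. Because each
        // channel only ever pushes in one direction, additive overlap makes a
        // shift stronger, and equal red/green contributions cancel exactly--
        // no wrapping past a midpoint like in the 0.5-centered scheme.
        return bufferA.At(x + dx, y + dy);
    }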

Anyway, here's the output of bufferB:



5. Render the lights on the next slab.

We clear the light buffer from step 1 and render the lights on the upper slab only. Lights from the lower slab do not affect the items we're about to draw!



6. Render the distortion sprites on the next slab.

We clear the distortion buffer and do the same as in step 2. Nothing fancy here. There's only one distortion sprite on the upper slab:



7. The normal scene of slab 2 is rendered onto bufferB, combined with lights from step 5.

Remember, bufferB represents the entire output of the first 4 steps. We're using the "painter's algorithm" here--we're going to draw right on top of all the other stuff we just drew. Note that we do not clear this buffer before this step, which is crucial. Here's what we have now:




8. Distortion from step 6 is applied.

Similar to step 4, we're going to apply distortion. Except this time, we're sampling from bufferB and writing to bufferA. For every slab we render, these two buffers are alternated--we need both of them, but once we're done copying from one to another, we don't need the first one any more, so we reuse it.
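
Putting the whole per-slab flow together, the frame loop presumably looks something like this (a structural sketch; every name here is a hypothetical stand-in for the engine's real code):

    #include <utility>  // std::swap

    void RenderFrame(int slabCount)
    {
        Framebuffer* src = &bufferA;   // assume bufferA starts out cleared
        Framebuffer* dst = &bufferB;

        for (int slab = 0; slab < slabCount; ++slab)
        {
            RenderLightBuffer(slab, lights, ambientR, ambientG, ambientB); // steps 1/5
            RenderDistortionBuffer(slab, distortionSprites);               // steps 2/6
            RenderScene(slab, *src, lightBuffer);    // steps 3/7: no clear--paints over previous slabs
            ApplyDistortion(*src, *dst, distortionBuffer);  // steps 4/8: warped copy, src -> dst
            std::swap(src, dst);                     // the old source gets reused next slab
        }
        // *src now holds the finished frame, ready to present
    }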

Note that the distortion sprites on slab 2 may sample distortion output from slab 1! So, sometimes you can see a ball reflected in another ball.



...and we're done!

So, what can this be used for? A little reflective orb here and there looks cool, but has limited use. I put together a video of me walking through a few rooms showing different ways this effect can be used--and I've only had it for a couple of days, so I haven't gotten around to discovering what else I can do with it yet. Check out the video! Also, if you like seeing stuff like this in development--including horrible screenshots of when things go wrong--I usually put those on twitter, at https://twitter.com/ericswheeler .

Video:


Thanks for reading/watching and see you soon!



Monday, July 15, 2013

Combining Multiple Behaviors

In last week's entry, I made a note at the end that there was a limitation in the way I had designed how Behaviors are attached to Entities--namely, that I was limited to one behavior per entity. This post is about how I solved the issue.

Technically, that's fine, since Behaviors are just references to code, and can be as complicated as I want them to be. But it doesn't necessarily yield good design with respect to reusable components. Let's take an example.

Let's say I have an entity with the lit torch graphic and I attach the ThrowableItem behavior to it:



And here's another entity with the PointLight behavior attached (the PointLight behavior attaches a child entity configured as a light source, and has a few editable parameters such as size and color).



But what if I want to pick up the torch with the light on it? One solution would be to introduce an editor property to the ThrowableItem behavior, something like IsLightSource. Then, when the behavior is initialized, I could add a light entity as a child item. Unfortunately, that's a bad design.

Imagine I have another behavior called SpawnHeartWhenDestroyed, which makes it so an entity drops a heart refill when it's destroyed. Currently, when I throw an item and it hits the ground, it's destroyed. Would I really want to add a property to ThrowableItem called SpawnsHeart that duplicated this behavior? What if I wanted a certain instance of an enemy to spawn a heart?

What's the point in having these nice componentized behaviors if I can't reuse them at will?

To solve this problem, I simply made it so an Entity can have multiple attached behaviors. Do you want your blob to also be a light source, spawn a heart when destroyed, and shake the screen every 6 seconds? No? Well, you could if you wanted to. As a much more practical example, here's a torch that has both ThrowableItem and PointLight attached, which was the original inspiration for this architecture change.
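
In rough C++ terms, the shape of the change looks like this (a sketch--the engine's real types and method names aren't shown here; ThrowableItem and PointLight are just the behaviors named above, with their logic elided):

    #include <memory>
    #include <vector>

    class Entity;

    class Behavior
    {
    public:
        virtual ~Behavior() = default;
        virtual void Initialize(Entity& owner) {}
        virtual void Update(Entity& owner, float dt) {}
    };

    class Entity
    {
    public:
        void Attach(std::unique_ptr<Behavior> behavior)
        {
            behavior->Initialize(*this);
            behaviors_.push_back(std::move(behavior));
        }

        void Update(float dt)
        {
            for (auto& b : behaviors_)   // every attached behavior gets a tick
                b->Update(*this, dt);
        }

    private:
        std::vector<std::unique_ptr<Behavior>> behaviors_;  // was: at most one
    };

    class ThrowableItem : public Behavior { /* pick-up/throw logic */ };
    class PointLight    : public Behavior { /* attaches a child light entity */ };

    // The torch is then just:
    Entity MakeTorch()
    {
        Entity torch;
        torch.Attach(std::make_unique<ThrowableItem>());
        torch.Attach(std::make_unique<PointLight>());
        return torch;
    }

Check out the video!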



Here is what the level editor looks like with this new feature. Launching the 'entity properties editor' of a single entity now shows a tabbed collection of behaviors--each with its own properties--instead of just one. Add and remove behaviors at will!

The entity properties editor--note the multiple behavior tabs


But remember, in the end, these behaviors are all acting on the same entity--that is, they can easily conflict if they modify the same entity in incompatible ways. For example, you can't just make a Blob a ThrowableItem. You can try, but you're sure to get some weird behavior, since both of those things interact with the player in mutually exclusive ways. Is this a limitation of the engine? No, definitely not--the behaviors you'd expect from something that's both an enemy and throwable are just too complex to be combined automatically; it takes actual human programming to decide what that combination means. For example, does the enemy still damage the player when colliding with the player model? Does the graphic change? The engine can definitely allow you to define these things, but you'll have to do it explicitly in code.

As a bonus, here's a fun little video of a spotlight that follows the player around when he gets close.


Next up I'll be doing some things with music. Stay tuned and thanks for reading!

Sunday, July 7, 2013

Throwing Stuff Around

This is going to have to be a quick post because I don't have a lot of time, but I still want to keep readers updated.

I added the ability to pick up and throw items. I've implemented a Behavior called 'ThrowableItem' which can be attached to entities in the editor. Here's a little preview:


And here's a full video, in which you can enjoy the sound and music of Link to the Past (temporary!)


When I finished doing this, one thing I immediately realized is that I couldn't have a brazier be *both* a light source AND a throwable item, because currently the editor (and game infrastructure) allows the (optional) attachment of exactly one Behavior to each entity. I'd like to be able to pick up a lit brazier and throw the light around. So, today I'm going to make it so you can attach multiple behaviors to one entity. This is going to require three things: an infrastructure change in the engine, a level editor change in the GUI for the behavior editor (which will become the behavior collection editor), and a small refactoring in the current implementation assembly.