Thursday, December 5, 2013

Videos from a Rainy Area


Oh, hey? Yes, I'm still here. Yes, I'm still working, and yes, there are more videos. Hope you enjoy these and the progress so far!


Hiding stuff in a bush:


Having fun with bushes:


Obelisk Mode. Note: this is just for demo purposes--the game won't actually use 'portals' like this.


Moving up and down. Going through 'warp doors' and jumping off cliffs.




Sunday, November 10, 2013

Hard at Work

Okay, you've probably caught on now: I have been too busy actually working on the game to give super detailed updates on this blog. Hopefully you've enjoyed the development videos I've been posting, because for the time being that's what I will continue to post.

Here are the last few videos I've put together that show my progress:

1. Super Dash

I'm testing out an ability which lets the player go really fast.


2. Step Triggers

These are triggers that activate when you step on a switch on the ground.


3. Delay Triggers

A special entity that simply delays a trigger and then relays it to the next one. Good for sequenced events.


4. Idle Animation

Allison put together a nice idle animation for the main character.



5. 3-hit combat demos

Playing with a simple 3-hit combo system.


...same thing, experimenting with variable knockback forces. Also we've started prototyping a new environment.






Tuesday, October 22, 2013

More Video Updates

Lately I've been moving. Not much time for blog posts, but I've still got some work to show! More videos!

1. First up: falling down transitions. I wanted to put a little twist on the fade out / fade in 'falling' transition. This was the first draft:


2. After some feedback, here's what I ended up with as a final draft. Way better.


3. Just for fun, I added some bugs.


4. Some people suggested that the bugs shouldn't be squished when you're holding still. Someone else took it further and said that they should climb up you. So I did that.


Stay tuned for more!

Monday, October 7, 2013

Lots of Fun Development Videos

Although I haven't been updating this blog as regularly as before, it's only because I'm too busy with the actual game programming itself. I'd like to share a few videos that I've created over the past month or so.

I keep fairly active on twitter, so follow me at https://twitter.com/ericswheeler

1. The Chasm

Just a fun little demo of dynamic floor tile scripts.



2. Persistence

Here's a video where I demonstrate values persisting between level loads. When the bridge spawns, the player can leave the screen and come back and it will still be there.
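The video doesn't show the code, but the underlying idea can be sketched roughly like this (all names are hypothetical; this is not the engine's actual API):

```python
# Hypothetical sketch of values persisting across level loads: a global
# store keyed by (level, key) that survives screen transitions.
class PersistentStore:
    def __init__(self):
        self._values = {}  # (level_name, key) -> value

    def set(self, level, key, value):
        self._values[(level, key)] = value

    def get(self, level, key, default=None):
        return self._values.get((level, key), default)

store = PersistentStore()

# The bridge is spawned once; when the player leaves and re-enters,
# the level load routine checks the store instead of resetting state.
store.set("chasm_room", "bridge_spawned", True)

def load_level(level):
    return {"bridge": store.get(level, "bridge_spawned", False)}

assert load_level("chasm_room")["bridge"] is True
```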




3. Dashing and Torches

In this video, I've swapped out the old (temporary) character art for a new (also temporary) sprite that represents the actual dimensions of the character. We chose a large head so the character fills a more square-shaped area: we want the visual appearance to closely match the collision box, so the player can gauge whether something is about to collide with them. This video also features dashing and dash-turning. And some fireballs. And a shield.



4. Switches

Implemented some switches. You can also see the design of the main character taking shape. One of the switch types is a basic toggle, the other is timer-based.


5. Fog

A cool fog effect.


6. Doors

A set of switches that controls the opening and closing of a door.


7. Section entry events / group destroy triggers.

This final video shows how something can occur when a room is entered (the door closes) and how destroying a group of tagged entities (3 blobs, in this case) can trigger something else (the door opening, wall torches being lit, fog fading away).



That's it for now!

Saturday, September 14, 2013

Parallax Pits and Player Proportions

I've actually been spending a considerable amount of time lately working with the artist on the project, Allison. We've been slowly nailing the style down for a number of things and anticipate future art development to become more and more rapid as we settle into a groove. Refer to the following screenshot:




Anyway, here's what's new since last update:

1. Improved cave tiles. I'm not going to write too much here, but things are looking better now. We're still going to have another pass at them later.

2. Parallax tiles underground. You can't see it in motion from the screenshot, but the underground spikes are parallaxed, meaning they give the illusion of depth by scrolling at a slower rate than the foreground.

3. Player proportions. You're seeing a temporary placeholder sprite as we test the final dimensions of the character. The goal here was to keep the player's collision model at exactly 1x1 tile (that's 32x32 in-game units--actually, we're using 30x30 so that the player can comfortably fit between two tiles). The older character also had these dimensions, but was much taller. It's pretty important to me that the player can accurately and easily gauge, visually, the collision model of the player (so that they know when they're going to get hit by an enemy or a projectile), so the closer the character is to a 32x32 box, the better. Obviously we can't make the graphic square, though, because that would look silly.

4. Better stylistic lighting. Since lights can be any texture we want, we chose to paint a nice cartoony glow instead of the straight gradient of the old textures.

That's all for now!

Saturday, August 31, 2013

Lurker!

It's been a couple weeks since my last substantial blog post, so I'd like to share what I've been working on during the last week: a new monster I've tentatively dubbed the Lurker, but I plan on giving it a more original name in the future. Note that enemy development isn't normally going to take a week per enemy--I was busy with some personal things. On top of that, this enemy is more complex than my first enemy, the Blob, and required a few engine infrastructure changes to implement. I'll go over those in detail, too!





I wanted to add a new enemy which will appear in the first area of the game, Gloomstone Cave. The idea is that you'll be facing this enemy in a few spots, but initially you won't have a permanent weapon to defeat it with. So, you must come up with more creative ways to pass the first few of them, whether it be total avoidance or using an alternate means of damaging them. Later on, when you have a weapon, they won't be quite as menacing, but the initial encounters should definitely impart fear into the player.

The Lurker waits underground in a certain spot and emerges if the player gets too close, bursting out of a hole in the ground, blowing debris everywhere. Here is my temp MSPaint art that I used as a placeholder for the enemy sprites so I could start working on the programming immediately:



This is why I'm working with an artist. Body/front/side/back views.

I took a series of videos as I developed the Lurker, so you can see how it's evolved. At the end, I'll share the source code so you can see what the latest programming model against my engine looks like!

Part 1: The Lurker Emerges
This video shows the lurker enemy popping out and swaying in a sinusoidal motion. It doesn't do anything interactive yet.


Part 2: Moving around, basic attack
The lurker moves around to avoid the player, and randomly attacks him if he gets close. Then it gets stuck in the ground, and that's it.




Part 3: Real art, attacking, taking damage
At this point, Allison, the artist, started working on the graphics. The initial design was a cool and creepy monster with a sideways mouth, with plans to animate it chomping the player. You can see the down-facing graphic in this video. Also, I made it so you can hit the head with weapons, and a very basic 'death' animation for the enemy.



Part 4: Better art, cool hit colors, refined movement. Better death animation.
I made various (subtle) changes to the movement behavior of the enemy. Allison changed the head design of the monster to a more traditional shape, drew the other angles of the head, and animated the arms on the body segments. So now it's like a creepy dragon caterpillar. I added a more satisfying death explosion animation.




Part 5: Bigger, badder, more explosive
We increased the size of the monster. It now has a way better death animation with a bunch of explosions for the head, and a prettier hue shift during hits.


Part 6: Snapping animations, cave shards. Job's done!
I put the monster in its natural habitat, the cave. Allison added animation frames for the head so I can control its mouth--when it pulls back, it opens its mouth, and clamps shut as it attacks. Also, it opens its mouth in pain when I hit it. Allison also made better looking shards that explode out of the ground when the lurker emerges, and I added a rumbling sound (and a temporary hit sound) for the emerging stage.  Check out the final product!



Source Code

Disclaimer: I don't typically comment code except for explanations of 'why' (not 'how'--see this article) but I've marked this example up extensively for clarity. I also don't typically have classes that are anywhere near this large; my average class size is quite small, generally following the Single Responsibility Principle. I would probably consider breaking each enemy state into its own class. But, for the purposes of this example, I'm keeping this all in one class (for now).
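As a rough illustration of the state-per-class refactoring mentioned above (hypothetical names and states; the real code is in the linked pastebin, and isn't Python):

```python
# Hypothetical sketch of splitting each enemy state into its own class:
# each state is a small object whose update() returns the next state.
class EmergingState:
    def update(self, lurker):
        lurker.log.append("emerge")
        return AttackingState()

class AttackingState:
    def update(self, lurker):
        lurker.log.append("attack")
        return self

class Lurker:
    def __init__(self):
        self.state = EmergingState()
        self.log = []

    def update(self):
        # Delegate to the current state; it decides the transition.
        self.state = self.state.update(self)

lurker = Lurker()
lurker.update()  # emerges
lurker.update()  # attacks
assert lurker.log == ["emerge", "attack"]
```

Each state stays small and single-purpose, which is the Single Responsibility Principle point made above.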

Anyway, the full source code can be found on this pastebin, right here.

That's all for now. Next I'm going to be fleshing out a bit more of the first cave area.



Monday, August 26, 2013

A Few Quick Development Videos

I don't really have time to write a blog post, but here are a few development videos!

Major Area introduction:


Experimental enemy 'Lurker' dev video part 1:


And part 2:




Friday, August 16, 2013

Frame Interpolation

Doing things a bit out of order this week: a video first! Keep in mind, this is a tech video about framerates:



The first part of the video shows the game running at a rate of 20 physical frames per second. That is, the game updates its world model 20 times per second. Although the real rate will probably be 60 updates per second (as is common), the very small limit of 20 serves to illustrate the new feature better.

In this first part of the video, the game loop is running rather naively. Although the game is only pushing out 20 physical frames per second, it's actually rendering hundreds of frames per second (on my 5-year-old desktop, it's something like 750-850 FPS for the demo scene). But the game is still choppy, because it's still only updating the world model 20 times per second! At 750 rendered frames per second, it's actually rendering the same frame over and over about 37 times before the next physical update. What a waste!

The second part of the video is much smoother. You might think that I pumped up the physical frames per second to a much more reasonable number, like 60 or 200 (or maybe not--I think YouTube caps the framerate at something fairly low). But I didn't--the game is actually still running at 20 frames a second. Except now, I've introduced frame interpolation.

Frame interpolation is exactly what it sounds like--in between any two frames (namely, the last update and the update before that one), I am able to calculate an interpolated position. For example, let's say a monster was at position (30, 40) at frame #53, and at position (40, 50) at frame #54. The sequence of events is as follows:

1. The game update loop begins frame #54 (that is, the physical world updates--this has nothing to do with rendering!)
2. All entities (actually, just entities that need to be updated--more on this later) have their transformation stamped/saved. This occurs before anything else, so the stamped values represent the position, rotation, scale, etc. of the entity as it was at the end of frame #53.
3. The update loop runs as normal for frame #54. Entities might move around via controllers, scripts, collisions, or whatever.
4. At this point, the monster has a stamped position of (30, 40) and a current position of (40, 50).

Now that the update is complete, the rendering process begins. Let's say we're running at 20 physical frames per second--that's a whopping 50 milliseconds before the next update must occur, and we'll use the time to draw as many frames as possible. Let's say that rendering a frame takes 5 milliseconds.

1. The game loop needs to accumulate 50 ms before the next physical update. We set the counter to 0.
2. The interpolation value is calculated as 0/50, or 0
3. The game renders with an interpolation value of 0, which means the monster is drawn at (30, 40).
4. The counter is incremented to 5, so the interpolation value is now 5/50, or 0.1
5. The game renders with an interpolation value of 0.1, which means the monster is drawn at (31, 41).
6. This process continues until the interpolation value exceeds 1, at which point the monster will be drawn near (40, 50) and the next update will occur.
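The sequence above can be sketched as a small, self-contained helper (names are mine; the real engine presumably stamps the full transform, not just position):

```python
# Sketch of the stamped-transform interpolation described above,
# using the numbers from the example (20 updates/sec, 5 ms renders).
UPDATE_MS = 50.0

def lerp(a, b, t):
    return a + (b - a) * t

class Monster:
    def __init__(self, pos):
        self.pos = pos
        self.prev = pos  # 'stamped' transform from the previous update

    def update(self):
        self.prev = self.pos              # stamp before anything moves
        x, y = self.pos
        self.pos = (x + 10, y + 10)       # physics moves the monster

    def render_pos(self, alpha):
        # alpha in [0, 1): fraction of the way between the two updates
        return (lerp(self.prev[0], self.pos[0], alpha),
                lerp(self.prev[1], self.pos[1], alpha))

m = Monster((30, 40))
m.update()                                 # frame #54: (30,40) -> (40,50)
assert m.render_pos(0.0) == (30.0, 40.0)   # counter at 0 ms
assert m.render_pos(0.1) == (31.0, 41.0)   # counter at 5 ms (5/50)
```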

This exact process is what you're seeing in the second part of the video. Of course, it's still a little messed up--you can see a noticeable lag between when the sword swings and when the enemies are hit. No matter how smooth the rendering is, a game updated at 20 frames per second is going to play weird and have lag.

Other than the points below, that's all for this week. I've also implemented a shield, but I'd rather wait until I have real graphics instead of the hideous temp art before showing off that one. Until next week!

Some points of interest / objections:

Why not just run at a variable timestep, updating as often as possible?

A lot of games do this. Sometimes it's a good solution, sometimes it isn't; I've read both sides of the argument. In the end, I chose a fixed timestep because it yields repeatable and reliable physics results. Variable timesteps can lead to quirky behavior, such as giving those with a lower (or higher) framerate an 'advantage' in certain situations.

If the interpolation is linear and the object is moving non-linearly (e.g. sinusoidal or accelerated movement), won't that look weird?

No, not really. It's almost totally unnoticeable at 60 FPS.

So why bother at all?

On some machines, I was getting a very noticeable stutter when the timestep was capped at 60 FPS. The very popular "Fix Your Timestep" article illustrates why this happens (near the end). This stutter was a result of 'temporal aliasing.'







Thursday, August 8, 2013

Dash, Kick, Stamina

I'm a bit busy and don't have time for a proper blog post, but I still had time for some fun things recently. So, I'll just leave this video here, which shows kicking (the lights), dashing, and a stamina system.

Until next time!


Thursday, August 1, 2013

Gloomstone Cave: Texture Splatting and Sets, Music Fades

The best way to make an engine is to make a game with it. Building out engine infrastructure in a vacuum is going to leave you with a worthless engine, so pretty much every one of my updates includes some sort of practical 'demo room' scenario--a realistic gameplay situation that helps me discover gaps in the engine's features.

Ever since adding layer-over-layer functionality and minor changes to the behavior structure, I knew I was just about ready to start building real levels. A delay in asset creation due to Comic-Con meant I had about a week to do something else, so that's where the distortion effects update came into play.

So, armed with some fresh cave tiles, I started making a game by designing one of the first accessible areas of the game, Gloomstone Cave. Two missing engine features quickly became apparent after I made a few interconnected rooms:

1. Randomized texture sets

Check out this level editor screenshot from a (fairly empty) cave room:

Design time...

Now check out that very same spot, in-game:

...and run time.

Notice anything? It's subtle, but there are actually two separate floor tiles--one is 'plain' and one has a slight pebble decoration to it. But if you look at the level editor, you'll see that it's all just one tile.

I'm not claiming this is some kind of technical wonder. It's not--it actually only took me a few minutes to add random texture set metadata to the texture editor, seen below.


A screenshot of the texture editor for a single texture, cave_floor

So, if two textures are tagged with the same 'random set' property, it means at run-time, when the level loads, a random texture from that set will be selected. The reason for this? There's absolutely no point in wasting design time picking out from a set of tiles when the goal is to just randomize them anyway. The minute I caught myself alternating between the pebbled and regular floor tile, I realized that adding this little feature would save me hours of time later.
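As a rough illustration of the idea (the names and metadata layout are mine, not the actual editor's format):

```python
import random

# Sketch of the 'random set' feature: textures tagged with the same set
# name are interchangeable, and one is picked per tile at level load.
TEXTURES = {
    "cave_floor":        {"random_set": "cave_floor_set"},
    "cave_floor_pebble": {"random_set": "cave_floor_set"},
}

def resolve_texture(name, rng=random):
    tag = TEXTURES[name].get("random_set")
    if tag is None:
        return name  # untagged textures resolve to themselves
    candidates = [n for n, meta in TEXTURES.items()
                  if meta.get("random_set") == tag]
    return rng.choice(candidates)

# The designer always places 'cave_floor'; the loader varies it.
picked = resolve_texture("cave_floor")
assert picked in ("cave_floor", "cave_floor_pebble")
```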

2. Texture Splatting

Another thing I noticed when designing the cave floor is that it was boring with the same tile (well, two similar tiles) everywhere. I had a temporary 'path' tile to use, which looks like this:

A temporary path tile, straight from Google Image Search


Problem is, this looks pretty bad mixed in with the cave floor:

Bad

The naive solution to this issue is to make nice 'edge' versions of the tile so that it blends in. I might need one for each side--I can't just rotate them, since that would throw off the tiling of the rest of the image. Oh, and wait, I need 4 more for the corners. Oh, that's just the convex corners, I'll need 4 more for concave corners. Now we're talking about 12 tiles.

That's doable, but a pain in the ass to maintain. If I want another path--or grass growing on the cave floor, or water--do I have to create 12 more tiles for each one? When I make a change to one of those tiles, do I have to update all 12? Yes. That sucks.

What if I could just dynamically apply an alpha map to the single tile, and do it at run-time instead of design time? That's sort of the idea behind Texture Splatting. So, I added that to the engine. Here's how it works.

1. Create a custom splat map

I still have to create those 12 'edge tiles', but I create them as a single re-usable alpha map. Here's what it looks like currently. I will probably add a lot more of these, including some more rugged ones. I call it my 'splat map.'

That's actually 12 square textures, even though it kind of looks like 6.

2. Pass in the splat map texture along with texture coordinates to the fragment shader.

A few of my shaders now have a segment of code like this:

   // Only sample the splat map when splat coordinates were supplied;
   // fragSplatUV.x < 0 is the sentinel for 'no splat'.
   if (fragSplatUV.x >= 0)
   {
     // Only the red channel of the splat map is used, as an alpha mask.
     result.a *= texture(splatTexture, fragSplatUV).r;
   }

So, entities (which include tiles) have optional 'splat coordinates' that get passed in. I use -1 for the x value to indicate 'no splat', which saves an extra texture lookup (the common case--probably about 95% of entities). Note that I only use the red component of the splat map; maybe in the future I could use the same texture for other metadata, and only store splats in the red channel.
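The CPU side of that sentinel might look something like this sketch (hypothetical names; the real engine isn't written in Python):

```python
# Sketch of the 'no splat' sentinel: entities without splat coordinates
# get x = -1, so the fragment shader skips the splat texture lookup.
NO_SPLAT = (-1.0, -1.0)

def splat_uv(entity):
    # 'splat' would hold a (u, v) coordinate into the shared splat map
    return entity.get("splat", NO_SPLAT)

tile_with_splat = {"splat": (0.25, 0.5)}
plain_tile = {}

assert splat_uv(tile_with_splat) == (0.25, 0.5)
assert splat_uv(plain_tile)[0] < 0   # takes the shader's 'no splat' branch
```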

3. Apply splats in the level editor

Pressing 'b' in the level editor launches the splat picker:

The splat selector

4. Make things a bit prettier

Here's a result of me adding a bunch of splats to that ugly segment:

Looks better than before

And here it is with an old grass texture:

It's grassy now

So, even with minimal extra textures, it looks much better! Of course, it still isn't perfect--it still looks slightly artificial instead of organic. But that's just a matter of updating the splat maps to have more rugged and randomized edges instead of the straight lines and perfect curves that are there now.

That's all for the texture updates.


3. Music Fading

Just a quick little feature: I made it so the music transitions smoothly from one area to another. Here is a little video that shows that, as well as a basic walk-around through various empty sections (I'll be making them more interesting really soon). Also, check out the cool light shaft effect at the cave entrance!
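A linear crossfade like the one described can be sketched as follows (the fade length is a made-up number, and the real engine presumably feeds these volumes to its audio mixer each frame):

```python
# Sketch of a music crossfade between two areas: over the fade window,
# the old track ramps down while the new one ramps up.
def crossfade_volumes(elapsed_ms, fade_ms=1000.0):
    t = min(max(elapsed_ms / fade_ms, 0.0), 1.0)  # clamp to [0, 1]
    return (1.0 - t, t)   # (old track volume, new track volume)

assert crossfade_volumes(0) == (1.0, 0.0)      # fade just started
assert crossfade_volumes(500) == (0.5, 0.5)    # halfway through
assert crossfade_volumes(2000) == (0.0, 1.0)   # fade finished
```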

Note that I'm using palace textures outside since I don't have the proper outdoor cave textures yet.

Enjoy the video and see you next time!







Monday, July 22, 2013

Distortion Effects

Hi, it's time for a super rad tech update! Last week, I mentioned something about music...well, I needed some art to really show it off correctly (Art? For music?) so I did something else this week instead: distortion effects!

This update will include a fairly technical overview of my graphics pipeline. If you'd rather just see the cool special effects, you can skip straight to the bottom and enjoy the video.

A popular graphics technique is to warp sections of the screen. This can be used for all sorts of effects, such as shock waves, fish-eye lenses, translucent surfaces, heat hazes, etc. Here are some examples, taken from the first few pages of a Google image search for '2d distortion shader':




Step by step, I'm going to go over how I implemented the infrastructure in the engine for Super Obelisk. For clarity, I'm just going to go over the entire graphics pipeline, including the lighting and slabs (layering) components. One nifty little trick--and this is very useful during development--is the ability to save the contents of the current OpenGL frame buffer (i.e., render surface) to a file, so we can see what's being drawn behind the scenes before the final image is presented to the user.

The following image is our final result for this example.




Note the following features:

1. The scene has multiple slabs. The player is rendered on the bottom slab. Some blobs are on the top slab.
2. Lights from the bottom slab don't shine up to the top slab. Lights from the top slab, however, shine down to the bottom slab.
3. A distortion bubble on the lower slab is obscured by the upper slab (overpass). Also, the bubble does not show the contents of the upper slab.
4. The distortion bubble on the upper slab, however, contains the entire scene, including the other bubble on the lower slab.
5. Two distortions on the right side of the screen represent a pure left shift and a pure right shift. Where the sprites overlap, there is no distortion, since they cancel each other out. This is kind of hard to see without zooming way in and looking carefully at the floor tiles.

And now, how we got to this scene. The engine gathers up all render data and sorts it into separate rendering layers. In this example, there are two slabs being rendered. I've removed the HUD for simplicity. Here's what happens:

1. Lights on both slabs are rendered.

Lights are on a special layer type and are drawn with a very simple shader which converts world position to screen position via a transformation matrix. The results are stored in the lights buffer. Note that the 'clear color' is the ambient color of the section.

Because we want lights from the upper slab to shine down onto the lower one, we render all lights that are 'on or above' the current slab.

 Here is what the lights buffer looks like after this:



2. Distortion sprites on the lower slab are rendered.

Next, we'll render the distortion sprites to the distortion buffer. The shader for these is the same as for the lights buffer--a very simple world-to-screen transformation. I've actually slightly modified the output here for demonstration purposes only--since an alpha value of '0' is typical for these pixels (keep reading to find out why), you wouldn't even be able to see most of the image if I didn't set the alpha to '1' for this image. The result of this rendering:




3. The scene on the lower slab is rendered and combined with the lights.

At this stage, the bulk of the rendering occurs. All remaining entities (tiles, players, enemies, particles, etc.) are rendered onto another buffer, which we'll call bufferA. The shader program for this part of the rendering uses the light buffer from step 1 as a source texture. Each pixel multiplies its output with the corresponding pixel of the light texture. Although not pictured, this shader is also responsible for hue shifts like when an enemy gets hit and flashes different colors (or when a bomb is about to explode). Here's what we have so far; keep in mind there's something fishy about how the alpha channel is saved to the file, which is why the shadows look odd in some of these pictures:



4. Distortion is applied

Now, we take the output from step 3 (bufferA, in this case) and copy it to bufferB. The copy uses the distortion map from step 2 to warp the picture. The default case, where everything (RGBA) is 0, will just do a straight copy of the pixel (no distortion). So, on bufferB, a pixel at (25, 10) would be sampled directly from (25, 10) on bufferA.

But if the pixel isn't pure black, we apply distortion on the X and Y axes:
  • Red represents a distortion to the left
  • Green represents a distortion to the right. 
  • Blue and Alpha represent a distortion up and down, respectively.
A popular distortion scheme is to use only the Red and Green channels of the texture for left and right distortion; Red values less than 0.5 (or 128 on a 256 value scale) represent a left shift, while Red values greater than 0.5 represent a right shift. Green controls the Y axis. Here is an image I found online that is used for that distortion algorithm:


The problem with this scheme is that it doesn't work well with additive blend, meaning distortions can't be overlapped with any meaningful result. For example, a solid sprite with RGBA = (0.4, 0, 0, 0) would be a slight left shift. But two of those overlapped with additive blend would be RGBA = (0.8, 0, 0, 0)--which is actually a right shift, instead of a stronger left shift!

My scheme, however, correctly intensifies the shift as sprites overlap--or cancels the shift out entirely if, say, the Red and Green components line up (which is exactly what the example image is showing with the distortion boxes!).
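The difference between the two schemes is easy to verify numerically (this is just the arithmetic from the example above, not the actual shader code):

```python
# Why the 0.5-biased scheme breaks under additive blending while the
# separate-channel scheme doesn't.

def biased_shift(red):
    # Popular scheme: red < 0.5 shifts left, red > 0.5 shifts right.
    return red - 0.5

def channel_shift(red, green):
    # This engine's scheme: red = left amount, green = right amount.
    return green - red

# One sprite, slight left shift:
assert biased_shift(0.4) < 0
# Two overlapped sprites under additive blend (0.4 + 0.4 = 0.8):
assert biased_shift(0.8) > 0          # flips to a RIGHT shift -- wrong
assert channel_shift(0.8, 0.0) < 0    # still left, just stronger
# Equal red and green cancel out, as with the demo's distortion boxes:
assert channel_shift(0.4, 0.4) == 0
```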

Anyway, here's the output of bufferB:



5. Render the lights on the next slab.

We clear the light buffer from step 1 and render the lights on the upper slab only. Lights from the lower slab do not affect the items we're about to draw!



6. Render the distortion sprites on the next slab.

We clear the distortion buffer and do the same as in step 2. Nothing fancy here. There's only one distortion sprite on the upper slab:



7. The normal scene of slab 2 is rendered onto bufferB, combined with lights from step 5.

Remember, bufferB represents the entire output of the first 4 steps. We're using the "painter's algorithm" here--we're going to draw right on top of all the other stuff we just drew. Note that we do not clear this buffer before this step, which is crucial. Here's what we have now:




8. Distortion from step 6 is applied.

Similar to step 4, we're going to apply distortion. Except this time, we're sampling from bufferB and writing to bufferA. For every slab we render, these two buffers are alternated--we need both of them, but once we're done copying from one to another, we don't need the first one any more, so we reuse it.

Note that the distortion sprites on slab 2 may sample distortion output from slab 1! So, sometimes you can see a ball reflected on another ball.



...and we're done!

So, what can this be used for? A little reflective orb here and there looks cool, but has limited usage. I put together a video of me walking through a few rooms showing different ways this effect can be used--and I've only had it for a couple days, so I haven't gotten around to discovering what other things I can do with it yet. Check out the video! Also, if you like seeing stuff like this in development--including horrible screenshots of when things go wrong--I usually put those on twitter, at https://twitter.com/ericswheeler .

Video:


Thanks for reading/watching and see you soon!



Monday, July 15, 2013

Combining Multiple Behaviors

In last week's entry, I made a note at the end that there was a limitation in the way I had designed how Behaviors are attached to Entities--namely, that I was limited to one behavior per entity. This post is about how I solved the issue.

Technically, that's fine, since Behaviors are just references to code, and can be as complicated as I want them to be. But it doesn't necessarily yield good design with respect to reusable components. Let's take an example.

Let's say I have an entity with the lit torch graphic and I attach the ThrowableItem behavior to it:



And here's another entity with the PointLight behavior attached (the PointLight behavior attaches a child entity configured as a light source,  and has a few editable parameters such as size and color).



But what if I want to pick up the torch with the light on it? One solution would be to introduce an editor property to the ThrowableItem behavior, something like IsLightSource. Then, when the behavior is initialized, I could add a light entity as a child item. Unfortunately, that's a bad design.

Imagine I have another behavior called SpawnHeartWhenDestroyed, which makes an entity drop a heart refill when it's destroyed. Currently, when I throw an item and it hits the ground, it's destroyed. Would I really want to add a property to ThrowableItem, called SpawnsHeart, that duplicated this behavior? What if I wanted a certain instance of an enemy to spawn a heart?

What's the point in having these nice componentized behaviors if I can't reuse them at will?

To solve this problem, I simply made it so an Entity can have multiple attached behaviors. Do you want your blob to also be a light source, spawn a heart when destroyed, and shake the screen every 6 seconds? No? Well, you could if you wanted to. As a much more practical example, here's a torch that has both ThrowableItem and PointLight attached, which was the original inspiration for this architecture change. Check out the video!
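A minimal sketch of the multi-behavior update, with hypothetical stand-ins for the real engine classes:

```python
# Sketch of the multi-behavior entity: each behavior is an independent
# component, and the entity updates all of them in turn.
class ThrowableItem:
    def update(self, entity):
        entity.events.append("throwable")

class PointLight:
    def update(self, entity):
        entity.events.append("light")

class Entity:
    def __init__(self, *behaviors):
        self.behaviors = list(behaviors)
        self.events = []

    def update(self):
        # Every attached behavior gets a chance to act on the entity.
        for behavior in self.behaviors:
            behavior.update(self)

torch = Entity(ThrowableItem(), PointLight())
torch.update()
assert torch.events == ["throwable", "light"]
```

As noted below, the behaviors all act on the same entity, so nothing stops two of them from conflicting; composition is mechanical, but compatibility is a human design decision.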



Here is what the level editor looks like with this new feature. Launching the 'entity properties editor' of a single entity now shows a tabbed collection of behaviors--each with their own properties--instead of just one. Add and remove properties at will!

The entity properties editor--note the multiple behavior tabs


But remember, in the end, these behaviors are all acting on the same entity--that is, they could easily conflict if they both modify the same entity in incompatible ways. For example, you can't just make a Blob a ThrowableItem. You can try, but you're sure to get some weird behavior, since both of those things interact with the player in a mutually exclusive way. Is this a limitation of the engine? No, definitely not--the types of behaviors you'd expect from something that's both an enemy and throwable are just too complex to be automatically applied--it takes actual human programming to decide what it means to be an enemy and throwable. For example, does the enemy still damage the player when colliding with the player model? Does the graphic change? The engine can definitely allow you to define these things, but you'll have to do it explicitly in code.

As a bonus, here's a fun little video of a spotlight that follows the player around when he gets close.


Next up I'll be doing some things with music. Stay tuned and thanks for reading!

Sunday, July 7, 2013

Throwing Stuff Around

This is going to have to be a quick post because I don't have a lot of time, but still want to keep readers updated.

I added the ability to pick up and throw items. I've implemented a Behavior called 'ThrowableItem' which can be attached to entities in the editor. Here's a little preview:


And here's a full video, in which you can enjoy the sound and music of Link to the Past (temporary!)


When I finished doing this, one thing I immediately realized is that I couldn't have a brazier be *both* a light source AND a throwable item, because currently the editor (and game infrastructure) allow the (optional) attachment of exactly one Behavior to each entity. I'd like to be able to pick up a lit brazier and throw the light around. So, today I'm going to make it so you can attach multiple behaviors to one entity. This is going to require three things: an infrastructure change in the engine, a level editor change in the GUI for the behavior editor (which will become the behavior collection editor), and a small refactoring in the current implementation assembly.

Sunday, June 30, 2013

Four Features

Four quick features added to the engine this week:

1. Layer Sort Types

There are now two ways to configure the rendering algorithm for a given layer. The first method, 'batch mode', is what I've been using all along. If you're familiar with graphics programming, you've probably employed this technique many times: minimizing the number of render state switches (and texture switches) can drastically improve performance. So, quads that are similar are grouped together and drawn as batches.

2D games often don't use depth buffers (or if they do, it usually means sacrificing partially transparent pixels)--instead, they use the 'painter's algorithm', which just means drawing things back-to-front, so nearer sprites paint over farther ones. But batching things together can lead to undesired results. Check it out:

The player is rendered incorrectly

In this case, as it happens, the blobs all come from the same texture, so they're batched together and drawn in a single call. Then the player is drawn. But that looks wrong, since the blobs right below the player should be rendered after the player.

In fact, what we really want to do here is sort every sprite to be drawn by its lowest Y-coordinate (its bottom edge) and then draw from the top of the screen down. So, the blobs 'above' the player (i.e. on the top half of the screen) are drawn first, then the player, then the blobs 'below' the player.

Here's what that looks like:

Much better.

The question is, how can optimal batching and this drawing algorithm both be achieved? The first picture is drawn in 2 batches, the second in 3. And the answer is that I don't know and I don't really care right now--the majority of the sprites in the game won't need this slightly slower rendering process, just the ones on the layer I've dubbed the 'action layer' (where the player, enemies, and a few other things live). The majority of entities will still live on 'batched' layers--particles, floors, walls, weather effects, font characters, etc.
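The action-layer draw order can be sketched in a few lines (illustrative only, not engine code): sort sprites by their bottom edge, then walk the sorted list and count a new batch every time the texture changes. This reproduces the 2-versus-3 batch count described above.

```python
# Sketch of Y-sorted drawing: sort by each sprite's bottom edge (smaller Y =
# higher on screen = drawn earlier), then count texture switches as batches.
def draw_order(sprites):
    # Each sprite is (name, texture, bottom_y).
    ordered = sorted(sprites, key=lambda s: s[2])
    batches = 0
    last_texture = None
    for _, texture, _ in ordered:
        if texture != last_texture:  # a texture switch starts a new batch
            batches += 1
            last_texture = texture
    return [name for name, _, _ in ordered], batches

sprites = [
    ("blob_a", "blob.png", 10),    # blob above the player
    ("player", "player.png", 20),
    ("blob_b", "blob.png", 30),    # blob below the player
]
order, batches = draw_order(sprites)
# The player is drawn between the two blobs, at the cost of splitting the
# blob texture into two batches: 3 batches total instead of 2.
```

Batching all blobs first would give 2 batches but the wrong overlap, which is exactly the trade-off in the screenshots above.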

2. Advanced Model Configuration

I had some pre-defined model 'shapes' built into the engine, but that wasn't really the best design. This older blog post (namely, the video linked in it) shows how I would go about adding a new shape. But as soon as I realized I needed a special shape for the 'blob' enemy (and would surely need many, many more as I added more enemies), it was time to move model declarations out of the engine and into the implementation assembly. That is, every game using the engine can define its own collision shapes dynamically--and use them in the editor!

Here is a picture that shows the blob model outline with a series of particles that stop when they 'leave' the blob:

The white particles stop when they reach the blob's collision model edge

This may seem minor, but I want to focus on having a really tight gameplay experience, and having complete control over collision models is necessary for that.
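The shape of that design can be sketched as follows (hypothetical names; the real engine's registration mechanism surely differs): the engine only depends on a generic containment interface, and the game registers its own shape definitions, which is what lets particles test whether they've left the blob's outline.

```python
import math

# The engine only needs a 'contains point' interface; the game defines shapes.
class CircleShape:
    def __init__(self, cx, cy, radius):
        self.cx, self.cy, self.radius = cx, cy, radius

    def contains(self, x, y):
        return math.hypot(x - self.cx, y - self.cy) <= self.radius

# Game-side registry: shapes live in the implementation, not the engine.
SHAPE_FACTORIES = {}

def register_shape(name, factory):
    SHAPE_FACTORIES[name] = factory

register_shape("blob", lambda: CircleShape(0.0, 0.0, 8.0))

blob = SHAPE_FACTORIES["blob"]()
# A particle would keep moving while blob.contains(x, y) is true and stop
# the moment it crosses the collision model's edge.
```

A real blob outline would be a polygon or capsule rather than a circle, but the registry idea is the same: the editor can list whatever names the game registered.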

3. Screen Shake

Well, this one's simple enough. I added support for screen shaking (which is really just rapidly moving the camera). Here's a clip.

Big blobs landing shake the screen

For testing purposes I coded it so that a landing blob shakes the screen at a certain strength for a certain duration, but it's a one-liner in code now, so I'll be able to quickly add it wherever it's appropriate.
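A shake like this is commonly implemented as a decaying random camera offset; here's a minimal sketch under that assumption (the `shake`/`offset` names are made up, matching the "one-liner" call style described above).

```python
import random

class Camera:
    def __init__(self):
        self.shake_strength = 0.0
        self.shake_time = 0.0
        self.shake_duration = 0.0

    def shake(self, strength, duration):
        # The one-liner callers use: kick off a shake of given size/length.
        self.shake_strength = strength
        self.shake_time = self.shake_duration = duration

    def offset(self, dt):
        # Per-frame jitter, decaying linearly to zero over the duration.
        if self.shake_time <= 0:
            return (0.0, 0.0)
        self.shake_time -= dt
        s = self.shake_strength * max(self.shake_time, 0.0) / self.shake_duration
        return (random.uniform(-s, s), random.uniform(-s, s))

cam = Camera()
cam.shake(strength=4.0, duration=0.3)  # e.g. called when a big blob lands
dx, dy = cam.offset(1 / 60)            # add to the camera position each frame
```

Adding the offset to the camera position (rather than moving it) keeps the shake from accumulating drift.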

4. Font Kerning

I fixed my font renderer so that it uses kerning data to properly display characters.

Proper letter spacing, yay
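The fix boils down to layout math like the following sketch (the advance and kerning values here are made up for illustration): each glyph contributes its advance width, and the kerning table adjusts the pen position for specific character pairs.

```python
# Illustrative glyph metrics: per-character advance widths plus a kerning
# table keyed by character pairs (values are made up, not from a real font).
ADVANCE = {"A": 10, "V": 10, "o": 8}
KERNING = {("A", "V"): -2, ("V", "o"): -1}

def line_width(text):
    width = 0
    for i, ch in enumerate(text):
        width += ADVANCE[ch]
        if i + 1 < len(text):
            # Pull the next glyph closer (or push it away) for known pairs.
            width += KERNING.get((ch, text[i + 1]), 0)
    return width

# "AV" renders narrower than the two glyphs laid out in isolation, because
# the V tucks under the A's overhang.
```

Ignoring the pair table--just summing advances--is exactly what produces the messed-up spacing in the older screenshot.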

Compare that with this old screenshot, which has horribly messed-up spacing:


So, that's all for this week. Sorry, no development video this week, just the little screen shake thing.