Sunday, October 23, 2011

Building Your Own Game Engine - Graphics

This is the last article in the series on the architecture of a small game engine for a small game, The Jelly Reef. Graphics were the very last thing implemented in the game. In the first weeks of development the art team was still looking for the right look for the game, and in the meantime our designer was looking for the right feel, so we gave those top priority and started writing the gameplay code and all the infrastructure. The graphics would wait until we knew what the game would be and look like.

With four talented drawing artists on the team, we knew that the game would be hand-painted for the most part and that we would use sprite sheets for all the animations. You would expect this to lead to a 2D rendering system. After all, the math in all the previous articles was 2D and Vector2Ds were all over the place. But at the same time we wanted some depth in our top-down perspective and considered using a parallax effect to achieve it. Faking parallax in 2D would have some drawbacks and might be difficult to tweak or draw art for, so instead the rendering system is actually a 3D one: there is a 3D projective camera and 3D triangles are sent to the GPU, so the depth, and with it the parallax, is inherent.

The goal was a system that can render static images in layers that make up the level, animated sprites for the objects in the game, and lots of bubbles, all without killing performance, of course. Almost everything is drawn alpha blended, but some things, like the GUI overlay and the seaweed, require special treatment. Additionally, we wanted to be able to scale and tint all the sprites.
At the core of our graphics system is the RenderJob, the only thing our render manager can render. Its fields make it very clear how we implemented the needed functionality.
public enum RenderPass { AlphaTested,
                         AlphaBlended,
                         Overlay }

// All the data needed for rendering an object on screen
// A class rather than a struct to enable reference storing
public class RenderJob : IComparable<RenderJob>
{
  // In which pass should the geometry be rendered
  public RenderPass RenderPass;

  // Used for depth sorting in alpha-blended jobs
  public float Height;

  // The vertices in world or model coordinates
  public VertexPositionTexture[] Vertices;

  // Indices of the vertices
  public int[] Indices;

  // If the vertices are in model coordinates a suitable transform
  // matrix should be submitted; if the vertices are in world
  // coordinates the identity matrix will do fine
  public Matrix WorldMatrix;

  // A texture to draw this geometry with
  public Texture2D Texture;

  // Allows the texture to be tinted with a solid color
  public Vector4 Tint;

  // Allows a render job to disable the caustics
  public bool DisableCaustics;

  // Create new render job
  public RenderJob()
  {
    Tint = new Vector4(1, 1, 1, 1);
  }

  // Compares two render jobs by height
  public int CompareTo(RenderJob other)
  {
    if (Height > other.Height)
      return 1;
    if (Height < other.Height)
      return -1;
    return 0;   // equal
  }
}
The rendering in our game is handled by the render manager, and every entity that wants to be rendered needs to implement the IRenderable interface.
public interface IRenderable
{
  IEnumerable<RenderJob> GetJobs();
}
The manager simply goes over all the entities and fills three lists, one for each render pass. The list with the alpha-blended jobs gets sorted by height, from lowest to highest. The lists are then rendered in the order the passes were declared in the enum above.
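A sketch of that per-frame loop; the list names and DrawJobs are illustrative, not the engine's actual API:

// Sketch: fill one list per pass, sort the blended one, draw in order
List<RenderJob> alphaTested = new List<RenderJob>();
List<RenderJob> alphaBlended = new List<RenderJob>();
List<RenderJob> overlay = new List<RenderJob>();

foreach (IRenderable renderable in renderables)
{
  foreach (RenderJob job in renderable.GetJobs())
  {
    switch (job.RenderPass)
    {
      case RenderPass.AlphaTested:  alphaTested.Add(job);  break;
      case RenderPass.AlphaBlended: alphaBlended.Add(job); break;
      case RenderPass.Overlay:      overlay.Add(job);      break;
    }
  }
}

// Uses RenderJob.CompareTo, so the lowest jobs are drawn first
alphaBlended.Sort();

DrawJobs(alphaTested);
DrawJobs(alphaBlended);
DrawJobs(overlay);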

Each level can have up to five layers, and they are rendered like any other alpha-blended job. The height, and hence the amount of parallax, can be set per layer. The size is automatically determined to fit the level size, with some clearance for the parallaxing. The level layers can be drawn as layers in Photoshop; in the game, their alignment at the middle of the screen is the same as while drawing, which means parallaxing does not interfere with gameplay. The game takes place underwater, and I really wanted us to get that feeling early on by adding water caustics, so I started looking for real-time solutions on the subject. In the end none of the options looked convincing enough and many were too computationally expensive, so we quickly opted for a precalculated sequence of images that tiles in both space and time. The image is applied in the shader and the strength of the effect can be tweaked per level. The shader also scales the effect with depth, adding another subtle depth cue.
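Picking the current image of the looping sequence is then just a matter of wrapping time; a sketch with an illustrative frame count and playback speed:

// Sketch: select the caustics frame from the precalculated sequence
int frameCount = 32;            // length of the sequence (illustrative)
float framesPerSecond = 15.0f;  // playback speed (illustrative)
int frame = (int)(totalTime * framesPerSecond) % frameCount;
Texture2D causticsFrame = causticsFrames[frame];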

This wraps up the article series. I hope people will find it useful. If anyone needs more info, you can drop a comment below the article.

A new version of the engine is already in the making. It's based on a somewhat different concept, written in C++, and tries to be as platform independent as possible.

Tuesday, September 27, 2011

Building Your Own Game Engine - Collisions

This article discusses collision detection and resolution. It directly follows my previous article on physics. I decided on two separate articles so that the text doesn't run longer than your (and my) attention span.
Most physics engines combine the collision detection system with the rest of the engine, but many also allow one to be used without the other. In the case of The Jelly Reef, both systems are simple and completely separate, except for the objects they act upon. Basically, first all the forces are applied and the paths integrated; then, using the new positions, collisions are found and resolved.
To do collision detection we first need bodies that can collide. Early on, we decided that the collision shape of every object in the game would be approximated with either a disk or a rectangle. Moreover, every object stores both and the two are always in sync: the disk is the circumcircle of the rectangle at all times. Having a disk around the rectangle allows for early outs when doing collision checking, as disk-to-disk is a very cheap test to do.
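The early-out itself boils down to comparing the squared distance between the centers with the squared sum of the radii; a minimal sketch:

// Sketch: disk vs. disk early out, avoiding the square root
public static bool DisksOverlap(BaseGameEntity a, BaseGameEntity b)
{
   Vector2D d = b.Position - a.Position;
   float r = a.BoundingRadius + b.BoundingRadius;
   // Compare squared lengths so no square root is needed
   return d.X * d.X + d.Y * d.Y <= r * r;
}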

public class BaseGameEntity : ISceneNode
{
  // Some code...

  [Description("Bounding radius of the entity. Used for collision detection")]
  public float BoundingRadius
  {
    get { return boundingRadius; }
    set
    {
       float s = value / boundingRadius;   // scale
       boundingRadius = value;
       scale *= s;
     }
  }
  protected float boundingRadius;

  [Description("Half dimensions in a single vector.")]
  public Vector2D Dimensions
  {
      get { return scale; }
      set
      {
          scale = value;
          // Set the bounding radius so that a rectangle with the given
          // dimensions will fit in the disk.
          boundingRadius = (float)Math.Sqrt(value.X * value.X + value.Y * value.Y);
       }
  }
  protected Vector2D scale;   

  // some more code...
}
The game also features some static geometry in the form of walls, which were created from linear segments.
Every frame the collision system goes over the objects and tests for collisions among them and against the walls. Each collision is stored in an array of Contact structures. The structure holds all the data needed to handle the contact.
public struct Contact
{
  // Holds the first body in the contact
  public BaseGameEntity FirstBody;
  
  // The second of these can be null, for contacts with the level
  public BaseGameEntity SecondBody;

  // Holds the direction of the contact in world coordinates.
  public Vector2D ContactNormal;

  // Holds the depth of penetration at the contact.
  public float Penetration;

  // A mechanism to detect collisions and raise events but
  // not resolve them in the physics engine
  public bool Resolved;        

  // A hash code uniquely defining the contact of these two entities
  // Makes sure that each pair of entities is checked exactly once
  public override int GetHashCode()
  {
     int hash = FirstBody.GetHashCode();
     if (SecondBody != null)
        hash ^= SecondBody.GetHashCode();
     return hash;
  }
}
For performance reasons we set a predetermined number of contacts per frame; all others have to wait for the next frame. We also store all the contacts in a preallocated array, so that we avoid allocating new objects every frame and straining the garbage collector.
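A sketch of that fixed-size contact storage; the capacity is an illustrative number:

// Sketch: preallocated contacts, reused every frame
const int MaxContacts = 256;
Contact[] contacts = new Contact[MaxContacts];
int contactCount = 0;

// Registering a contact; anything past the limit waits a frame
if (contactCount < MaxContacts)
{
   contacts[contactCount].FirstBody = first;
   contacts[contactCount].SecondBody = second;
   contactCount++;
}

// At the start of the next frame the array is simply reused
contactCount = 0;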
As I mentioned, we have two shapes for collision geometry. With the linear segments for the walls, this gives us three object types that can collide. Walls do not collide with walls, which leaves five combinations of shapes we need to handle. For all cases we use the separating axis theorem.
The theorem states that if two convex objects are not interpenetrating, there is an axis on which the projections of the two objects do not overlap. This test is very robust and general, straightforward to implement and easy to debug. It also gives much of the information needed for resolution: the contact normal is the axis itself and the interpenetration is the amount of overlap along it.
The problem, of course, is finding this axis. For polygons, candidate axes run through the vertices or are perpendicular to the line segments. For an intersection test between two rectangles that would yield 16 potential separating axes to test. We tackled the problem of reducing the number of axes for each of the cases separately.
As an example, I will explain a very straightforward test from the game using the drawing below: an axis-aligned rectangle against a linear segment.



From the drawing we see that one testing axis is aligned with the line segment and the other is perpendicular to it. The rectangle and its projections are drawn in blue, the line segment and its projections in red. On the drawing we can see that the projections on the axis perpendicular to the line segment do not overlap, so we have found a separating axis. The drawing also clearly shows that I have a cool whiteboard, but enough of that; let's show the code for this test.
public static bool AARectToWall(
   Vector2D centerR, Vector2D halfDimR,
   Vector2D start, Vector2D end)
{
  // Create first axis 
  Vector2D axis1 = (end - start);
  // Normalization gives interpretation to the intervals 
  axis1.Normalize();

  // Project the line segment by projecting the end points
  float Sp = axis1.Dot(start);
  float Ep = axis1.Dot(end);
  // Make the interval 
  CreateInterval(ref Sp, ref Ep);

  // Project rectangle - All four vertices
  float rectLeft;
  float rectRight;
  // First  calculate the vertices
  Vector2D A = centerR + halfDimR;
  Vector2D C = centerR - halfDimR;
  Vector2D B = new Vector2D(A.X, C.Y);
  Vector2D D = new Vector2D(C.X, A.Y);

  float Ap = axis1.Dot(A);      // Project vertex A
  float Bp = axis1.Dot(B);      // Project vertex B
  float Cp = axis1.Dot(C);      // Project vertex C
  float Dp = axis1.Dot(D);      // Project vertex D

  // Create initial interval
  rectLeft = Bp;
  rectRight = Ap;
  CreateInterval(ref rectLeft, ref rectRight);
  // Extend it
  AddInterval(ref rectLeft, ref rectRight, Cp);
  AddInterval(ref rectLeft, ref rectRight, Dp);

  // If there is no overlap, we have found the separating axis
 if (IntervalOverlap(Sp, Ep, rectLeft, rectRight)  <= 0.0f)
    return false;

  // Second axis, perpendicular to the segment
  Vector2D axis2 = axis1.Perp;

  // The line segment projects to a single point
  Sp = axis2.Dot(start);

  // Project the vertices
  Ap = axis2.Dot(A);
  Bp = axis2.Dot(B);
  Cp = axis2.Dot(C);
  Dp = axis2.Dot(D);

  // Create the rect interval
  rectLeft = Bp;
  rectRight = Ap;
  CreateInterval(ref rectLeft, ref rectRight);
  AddInterval(ref rectLeft, ref rectRight, Cp);
  AddInterval(ref rectLeft, ref rectRight, Dp);

  // Check if the second axis is separating
  if (IntervalOverlap(Sp, Sp, rectLeft, rectRight) <= 0.0f)
    return false;

  // No separating axis found, the shapes are intersecting
  return true;
} 
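The interval helpers used above are not shown in the post; minimal versions consistent with how they are called might look like this. Note that the segment projects to a zero-length interval on the second axis, so with this formula exact containment of that point yields an overlap of zero; a strict comparison may be preferable there.

// Sketches of the interval helpers used above
// Orders two projected values into an interval [min, max]
public static void CreateInterval(ref float min, ref float max)
{
  if (min > max)
  {
    float t = min;
    min = max;
    max = t;
  }
}

// Extends the interval so that it also contains p
public static void AddInterval(ref float min, ref float max, float p)
{
  if (p < min) min = p;
  else if (p > max) max = p;
}

// Amount of overlap of two intervals; zero or negative means separated
public static float IntervalOverlap(float minA, float maxA,
                                    float minB, float maxB)
{
  return Math.Min(maxA, maxB) - Math.Max(minA, minB);
}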
The next step after detecting collisions is to call all the subscribers for each type of collision. This is where gameplay code can hook in; it was discussed in detail in the article on gameplay. A handler can choose to set the Resolved flag to true, so the collision manager will not resolve the collision. This has many uses: when a jelly hits a rapidly rotating propeller, for example, the jelly will not bounce back but rather be removed from the scene and replaced with some gooey jelly pieces.
The last step is to resolve the unresolved contacts. This is done by simply moving the objects along the contact normal. In the case of a collision with the level geometry, the object is moved the full penetration depth. In the case of two objects, both are moved, taking their masses into account. The following piece of code, adapted from Game Physics Engine Development, does exactly that.
// Some code ...
BaseGameEntity first = contact.FirstBody;
BaseGameEntity second = contact.SecondBody;
                
// The movement of each object is based on inverse mass, so total that
float totalInverseMass = first.InverseMass + second.InverseMass;
                
// If both have infinite mass, skip this contact
if (totalInverseMass <= 0) return;
                
// Calculate amount of penetration resolution per total inverse mass
Vector2D movePerInverseMass = contact.ContactNormal *
  (-contact.Penetration / totalInverseMass);
                
// Move 'em
first.Position   += movePerInverseMass * first.InverseMass;
second.Position -= movePerInverseMass * second.InverseMass;
// More code...
I hope these last few articles gave an insight into the bits and pieces of game development that are not the easiest to find information on. The next article is on graphics and will be the last one in the series as well.

Thursday, September 1, 2011

Building Your Own Game Engine - Physics

This is the first of two articles that cover two closely related subjects: physics and collision detection. Physics for games mostly means moving bodies in the game world according to the laws of physics. As these bodies occupy space in the world, we need to detect when collisions between them occur and resolve them. I will assume the reader is familiar with Newton's laws of motion; you are, after all, reading an article about making a game engine, and its physics in particular.
The Jelly Reef actually uses two somewhat separate physics systems: one that moves the objects in the game around and is completely entangled with the rest of the game, and another, based on particles, used for some specific effects. Our game is not trying to simulate the real world or be realistic, so I decided we take a simplistic approach, with plenty of shortcuts. The first shortcut was to model the jellies and all other moving objects as point masses, ignoring the actual distribution of mass. Angular movement was not simulated at all: no torques, no moments of inertia, in fact no physical angular properties. Since we treat all objects as particles, we can basically reuse the particle code. What follows are the properties of the particle class.
public class Particle
{
  // The position of the particle in world space
  public Vector2D Position;

  // Linear velocity of the particle in world space.
  public Vector2D Velocity;

  // Linear acceleration of the particle
  public Vector2D Acceleration;

  // Damping is required to remove energy added
  // through numerical instability in the integrator.
  // Or to act as drag
  public float Damping;

  // Holds the inverse of the mass of the particle.
  // It is more useful to have objects with
  // infinite mass (immovable) than zero mass
  // Also one division less
  public float InverseMass;
  //
  public float Mass
  {
    get { return 1.0f / InverseMass; }
    set { InverseMass = 1.0f / value; }
  }
  //
  public bool HasFiniteMass
  {
    get { return InverseMass != 0.0f; }
  }

  // Force accumulator, for d'Alembert's principle
  public Vector2D ForceAccum;
}
Next we need to move these objects. Newton's laws are expressed as differential equations, so the values are calculated using numerical integration. For this purpose we use Euler integration; it's very easy to implement, but not very precise.
public void Integrate(float dt)
{
  float duration = dt;

  // Update linear position
  Position += Velocity * duration;

  // Work out the acceleration from the force
  Vector2D resultingAcc = Acceleration;
  resultingAcc += ForceAccum * InverseMass;

  // Update linear velocity from the acceleration
  Velocity += resultingAcc * duration;

  // Add drag
  Velocity *= (float)Math.Pow(Damping, duration);

  ForceAccum.Clear();
}
Time is continuous (except maybe in quantum mechanics, which is beyond the scope of this article and my brain) and our simulation takes discrete steps through it, so errors accumulate over time. This imprecision can build up and eventually make the simulation explode; I will return to this later.
Although we are not simulating angular movement, we still need to know the orientation of the objects, for both gameplay and rendering purposes. The solution was to simply align the objects to face their direction of movement. However, this looks ugly when the direction of movement changes rapidly over a few consecutive frames, so the orientation was in fact interpolated from the current one toward the desired one, limiting the change with a maximal angular velocity. This piece of code was not really elegant or flexible, and in the future I would consider using spinors for this purpose.
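A sketch of that clamped interpolation; heading and maxTurnRate are illustrative field names:

// Sketch: turn the current heading toward the direction of movement,
// limiting the change to a maximal angular velocity
float desired = (float)Math.Atan2(Velocity.Y, Velocity.X);
float delta = desired - heading;

// Wrap the difference into [-pi, pi] so we always turn the short way
while (delta > MathHelper.Pi) delta -= MathHelper.TwoPi;
while (delta < -MathHelper.Pi) delta += MathHelper.TwoPi;

// Clamp the change to the maximal turn rate for this frame
float maxDelta = maxTurnRate * dt;
heading += MathHelper.Clamp(delta, -maxDelta, maxDelta);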
As I mentioned before, we also employed a separate particle-based physics system. This simple system simulates particles on which forces can be applied; no contacts or hard constraints are imposed on the particles. The most used part of the system were the spring-damper setups.
The seaweed was implemented with such a system. Each weed is modeled as two particles connected to each other with such a spring, with one of them attached to the seabed by another spring. A third spring from the base to the tip tries to keep the form of the weed. On the image we see the three springs drawn in red using the debug drawing.
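For reference, the force such a spring-damper applies between two particles; a minimal sketch assuming the usual Vector2D helpers (Length, Dot and the arithmetic operators):

// Sketch: spring-damper force between two particles
Vector2D d = particleB.Position - particleA.Position;
float length = d.Length();
Vector2D dir = d * (1.0f / length);      // assumes length > 0

// Spring force proportional to the extension beyond the rest length
float extension = length - restLength;
Vector2D force = dir * (extension * stiffness);

// Damping proportional to the relative velocity along the spring
Vector2D relVel = particleB.Velocity - particleA.Velocity;
force += dir * (dir.Dot(relVel) * damping);

particleA.ForceAccum += force;
particleB.ForceAccum -= force;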
The water motion also imposed forces on all particles in it. The smooth movement of the camera was implemented using a spring as well.
Particles were used for the bubbles too. Obviously, the main aim of this dynamic system was to give the player feedback while controlling the water.
In an early version of the game the jellies had tentacles, which were modeled with a spring system exactly like the one in the seaweed. Unlike the seaweed, the jellies move, and this adds force to the spring system, so the tentacles become longer. I tried to remedy this by making the springs stiffer, and that is where the Euler integration totally failed. As soon as a spring overextends, a force too large is applied; in the next iteration the spring overshoots again and in turn generates an even larger force, until the simulation explodes with the tentacles way outside the screen. This could have been solved by using a better integrator like RK4 or Verlet integration. It turned out that the jellies look way better without the tentacles anyhow, so I was off the hook.

Sunday, August 28, 2011

Building Your Own Game Engine - Why Code Physics

The next two articles will cover two closely related subjects: physics and collision detection. I will do so by going through the relevant pieces of The Jelly Reef's engine. But before I jump into that, I will first say a word or two on why (or why not) to develop your own physics system.
Writing physics for your game is a difficult and time-consuming task. Collision detection alone is a subject that might consume months of work. There are some superb solutions out there, like Box2D, that work right out of the... well, box. Free and open source, this engine drives an immensely big chunk of the indie hits of the last few years; Angry Birds is not all that different from the samples that come with the engine.
But sometimes you just need to jump through the hoops yourself, and we did. The decision to code our own physics was based on a few facts we knew at the time:
  • Water, being the main element of gameplay, is not part of any existing engine.
  • A full physics engine would be overkill for this game and potentially difficult to control and tweak.
  • The gameplay would be tightly integrated with the physics and the collisions.
  • Most of the popular and well-documented engines are written in C++ and our game is in C#.
  • A lot of time still needs to go into integrating an engine into the game.
  • There is in fact quite a bit of text on how to do things yourself. I recommend Game Physics Engine Development, Game Physics and Real-Time Collision Detection.
  • I have had some bad experiences with closed-source engines; the next paragraph goes over one.
I worked at ZootFly on a vehicle system for quite a while. The ZEN game engine used at ZootFly employed NVIDIA PhysX, so naturally I built on top of that. I was in awe of how simple it was to get some things running with PhysX, and over the next few months, the more I worked with it and got to know it, the more my respect for its developers grew. And then I had to do some stuff that the engine was clearly not made for (obvious when you look at the samples) and suddenly I was in a world of pain. Tweaking parameters didn't get me anywhere fast, and as we had a binary-only license there was not much else I could do. Eventually the problems were worked around, but the team decided to switch to Bullet for their next project. It's open source and Erwin Coumans seems to be doing an awesome job of running it.
So ultimately it's up to the developers to make the decision once they know what they are making. For those who decide to code their own physics and haven't done it before, the next two articles should provide some insight.

Thursday, May 5, 2011

Building Your Own Game Engine - Gameplay

This article will focus on gameplay, the part of the code that makes the game fun. In a smaller game like ours, the gameplay code can take a substantial chunk of the whole code-base. The gameplay in The Jelly Reef was not scripted but completely integrated with the rest of the code, and as such written in C#. The separation between what can be considered engine code and gameplay code was mostly logical, but some physical structure was kept.
There are generally four places where gameplay code goes in our engine. Our primary gameplay element is, of course, the water itself, as the player interacts with it. The implementation of the water waited for the concept of the game to be completely defined, so only after we knew what our game would be about did we start building the water dynamics. Our first try was simulated fluid dynamics, but after playing with a few games and simulators that use fluid dynamics, we realized it might not be the best fit for The Jelly Reef. Fluid dynamics are somewhat difficult to program and even more difficult to run efficiently on a larger scale. But the real problem was that they were not a good fit for the gameplay and can be notoriously time-consuming to tweak. My solution was to fake it, and I am talking Shanghai Rolex kind of fake. The water in The Jelly Reef is a grid of 2D vectors representing velocities, what we (not physically correctly) called a flow field. The grid is treated as a gray-scale image and every update it is filtered using a set of low-pass filters; the result is summed with the current image using a tweakable weight factor. The low-pass filtering is done by convolution with a 3-by-3 kernel, which is chosen based on the direction of the water in that cell of the grid. There are four kernels in total: horizontal, vertical and two diagonal ones. Basically, the value of atan2 is mapped directly to an index into the kernel array. What follows is a piece of code that does exactly that and is, in some ways, the heart of our game.
// FlowField.cs
public class FlowField : ISceneNode
{
  #region Weights

  static float lon = 3;
  static float lat = 0.5f;

  private static float[,] verticalKernel =
    new float[,]{ {lat, lat, lat},
                  {lon, lon, lon},
                  {lat, lat, lat}};
        
  private static float[,] horizontalKernel =
    new float[,]{ {lat, lon, lat},
                  {lat, lon, lat},
                  {lat, lon, lat}};

  private static float[,] diagonalKernel0 =
    new float[,]{ {lat, lat, lon},
                  {lat, lon, lat},
                  {lon, lat, lat}};

  private static float[,] diagonalKernel1 =
    new float[,]{ {lon, lat, lat},
                  {lat, lon, lat},
                  {lat, lat, lon}};

  private static float[][,] kernels = new float[][,]         
  {
    horizontalKernel,
    diagonalKernel1,
    verticalKernel,
    diagonalKernel0,
    //
    horizontalKernel,
    diagonalKernel1,
    verticalKernel,
    diagonalKernel0
  };

  #endregion

  public void Update(float dt)
  {
    // Some update code...

    // Go through all the cells
    for (int i = iFrom; i < iTo; i++)
    {
      for (int j = jFrom; j < jTo; j++)
      {
        // Get vector angle
        float theta = (float)Math.Atan2(vectorField[i, j].Y, vectorField[i, j].X);
        //
        theta += MathHelper.Pi / 8; // Add half a section
        // get index
        int kindex = (int)Math.Floor(theta * (8.0f / MathHelper.TwoPi));
        kindex %= 8;                // Remove extra sections
        if (kindex < 0)             // Atan2 can return negative angles,
          kindex += 8;              // so wrap into the kernel array
        //
        float[,] kernel = kernels[kindex];

        // Convolve using this kernel!
      }
    }
  }
}

All the kernels are energy-preserving, so a dampening factor is used to stabilize the velocity field. All hand movements add energy to the system, so without dampening the simulation would in fact explode. This system made interacting with the water intuitive while giving us the ability to easily tweak its behavior. The video below shows the debug view of our flow field, a visualization of the velocity field that we stared at a lot when designing the first levels.
Most of the levels in our game have regions that are not part of the simulated water field, usually rocks and other big obstacles. The water should not flow through these obstacles, so those regions are excluded from the flow field. For this purpose, when loading a level we run a flood fill that expands until it hits a wall, marking all the cells that are part of the flow field. By skipping calculations for the other cells we also sped up the flow field significantly. The flow field's value can be read using nearest-neighbor or bilinear sampling, depending on the quality and performance needs. The bubbles, for example, use nearest neighbor, as there are many of them and the physical simulation takes care of smoothing the trajectories. For the jellies, on the other hand, we used bilinear sampling, as motion quality and responsiveness were paramount.
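A sketch of the bilinear read; the cell size and field names are illustrative and bounds checks are omitted:

// Sketch: bilinear sampling of the flow field at a world position
public Vector2D SampleBilinear(Vector2D pos)
{
   // Convert the world position to fractional grid coordinates
   float gx = pos.X / cellSize;
   float gy = pos.Y / cellSize;
   int i = (int)Math.Floor(gx);
   int j = (int)Math.Floor(gy);
   float fx = gx - i;
   float fy = gy - j;

   // Blend the four surrounding cells
   Vector2D v00 = vectorField[i, j];
   Vector2D v10 = vectorField[i + 1, j];
   Vector2D v01 = vectorField[i, j + 1];
   Vector2D v11 = vectorField[i + 1, j + 1];
   return (v00 * (1 - fx) + v10 * fx) * (1 - fy) +
          (v01 * (1 - fx) + v11 * fx) * fy;
}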

The next place where gameplay can be added is the update method of the GameWorld class. The GameWorld has been designed to be extended, and new functionality can be added by an inheriting class. The update method is where game-wide gameplay code should go, like checking winning and losing conditions and similar things. The inheriting class automatically keeps all the functionality of the GameWorld class, like the streaming and real-time editing, but it needs an XmlInclude attribute for every entity type it uses. Much of the gameplay code lives in the Update methods of the different entities, and most entities are in fact built specifically for that purpose; enemies, propellers and pipes are all examples of such entities. If an entity is dynamic it can inherit from the MovingEntity class and get all the functionality for physically simulated motion. Finally, the game implements a mechanism to respond to collision events. The user can subscribe to collisions between different pairs of entity types. For example, a collision of a jelly with a cannon ball results in instant death, and the event can be used to play a sound, a death animation and so on. What follows are parts of the code doing exactly that.
// JellyWorld.cs

// Include all types that might be used in the game
[XmlInclude(typeof(Jelly))]
// More includes ...
public class JellyWorld : GameWorld
{
   public void SetEvents()
   {
      // Add Jelly vs Propeller
      int hash = EntityTypes.HashCode(EntityType.Jelly, EntityType.Propeller);
      this.collisionManager.SubscribeForCollision(hash, JellyPropellerCollision);

      // Set more events ...
   }

   public void JellyPropellerCollision(ref Contact contact)
   {
      // Play gruesome death, scary sound and remove jelly
   }

   // More JellyWorld code ...
}
From the two entity types a hash is created, which is used to look up the event method to call. The hash is calculated by storing the first type in the first 16 bits and the second type in the next 16 bits; as long as there are fewer than 65536 entity types this works. The handler gets a reference to the contact information, which it can also edit. The JellyPropellerCollision handler above, for example, would set the Resolved flag to true, so that the contact is ignored by the contact resolver in the physics pipeline. A minimal sketch of such a packing is shown below. Consequently, physics and collision handling is exactly what the next article will be about. I'll end this post with a montage of our game.
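Whether the original ordered the pair is not shown, so this sketch sorts the two types first, making (Jelly, Propeller) and (Propeller, Jelly) hash the same:

// Sketch: packing two 16-bit entity types into one hash
public static int HashCode(EntityType a, EntityType b)
{
   int lo = Math.Min((int)a, (int)b);
   int hi = Math.Max((int)a, (int)b);
   return (hi << 16) | lo;
}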

The Jelly Reef from Adriaan de Jongh on Vimeo.

Wednesday, March 2, 2011

Building Your Own Game Engine - Tools II

This is the second article on the tools that we developed and used during the making of The Jelly Reef. It covers our profiler, the level editor and a few external tools.

Profiler
Performance is essential for any game, and knowing how big a chunk of the update time goes to different tasks can be very helpful. For this purpose, we built a profiler early on and always kept a watch on performance. The profiler has a few roles: first, holding a bunch of variables that can be used as counters, like counting the number of collision checks, draw calls, triangles and such; second, counting the framerate; and most importantly, measuring the execution time of different sections of the code. The profiler is implemented as a singleton manager, as discussed in the previous post. All the counters are accessible via a public dictionary; the profiler itself only draws the counters as name and value pairs. It is up to the user of a counter to take care of its value, like resetting the counter to zero or incrementing it.
The game uses a fixed framerate of 60 FPS, but we still use a framerate counter to detect drops in performance and eliminate the cause. For measuring section execution time we use the Stopwatch, a high-resolution timer. At the beginning of the update the stopwatch is reset. At the beginning of a section of the update, like collision detection or water simulation, the BeginSection method is called with the name of the section; when done, the EndSection method is called with the same name. The profiler simply samples the time of the stopwatch, so sections can be nested or overlap. The profiler draws a bar from the beginning of the section to the end, using the width of the screen as a reference for the 0.0166 seconds (a 60th of a second) of CPU time available per update. So if we reach the right side of the screen, we know we need to optimize something, and we might as well start with the longest bar. The difference between the end and the beginning of a section gives its duration, and this is shown just below the bars in the same color as the corresponding bar. The profiler is only compiled when the PROFILE symbol is defined, so just like the debug drawing it is not part of the final build. Unlike the debug drawing, it is also compiled in release builds of the game to track their performance, as the C# compiler can optimize the release version quite a bit.
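A sketch of the section API described above; the dictionaries and the drawing details are illustrative:

// Sketch: measuring a section of the update
Profiler.Instance.BeginSection("Collision");
collisionManager.Update(dt);
Profiler.Instance.EndSection("Collision");

// Inside the profiler both methods just sample the running stopwatch
public void BeginSection(string name)
{
   sectionStart[name] = (float)stopwatch.Elapsed.TotalSeconds;
}

public void EndSection(string name)
{
   sectionEnd[name] = (float)stopwatch.Elapsed.TotalSeconds;
   // A bar is later drawn from start to end, scaled so that the full
   // screen width corresponds to 1/60th of a second of CPU time
}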

Editor
What started as a runtime inspector for the entities in the game world quickly turned into the editor that we used to create the levels. This means that we used an in-game editor to author and edit the levels, rather than a separate tool; in fact, at any time you can press the E key on the keyboard and bring up the editor. This means very short turnaround times for our designer, as changing a value like a mass or the position of a wall has an immediate effect in game. On the other hand, the user needs to remember that the game is running at all times; physics and collisions can easily be broken by large changes while the game is running. That is why the editor got a pause button, though somehow it was almost never clicked. And the programmers need to remember that the same code runs during designing and playing. The editor is not included in the release build, and the Surface unit doesn't need a keyboard in user mode, so tampering with the game is not a problem.
The editor is very much based on features available in C# and the .NET Framework. It is basically a window with a tree control and a property grid, both of which are regular Windows Forms controls. The tree control depicts the world hierarchy and allows the user to select one or more entities in the world. For the window to be able to create this hierarchy, all the objects that we would like to display in the editor must implement the ISceneNode interface.
/// <summary>
/// Allows browsing the scene graph in a simple way
/// </summary>
public interface ISceneNode
{
   IEnumerable<ISceneNode> GetChildNodes();
}

// GameWorld.cs
public class GameWorld : ISceneNode
{
   // ...
   public virtual IEnumerable<ISceneNode> GetChildNodes()
   {
            foreach (BaseGameEntity o in Entities) { yield return o; }
            //
            foreach (Wall w in Walls) { yield return w; }            
            //
            foreach (ParallaxLayer l in  Layers) { yield return l; }            
#if DEBUG
            yield return DebugDraw.Instance;
#endif
   }
   // ...
}

Using this interface, the window can build the tree recursively. This simple approach also lets us put things in the tree that are not really part of the world but that we would like to tweak too, like the options for the debug drawing.
The property grid, on the other hand, required a bit more attention. The property grid in .NET uses reflection to give the user an interface for tweaking any public property of any object, but some care must be taken. Firstly, non-primitive types like our vector implementation need to implement a conversion to a type that can be displayed properly; this actually took a few days to get right, so that it doesn't delete data when someone has typed something wrong. Secondly, without some kind of description the user interface would be very difficult to use: it might be hard for the designer to know what linear dampening means, there might be too many public properties, and they are not sorted in any way. So the entities have to have attributes attached to all their public properties, as the code below shows.
class MovingEntity : BaseGameEntity // Which implements ISceneNode
{
   // ...

   #if WINDOWS
   [Category("Physical Parameters")]
   [Description("Max turn rate in deg/s")]
   #endif
   [XmlIgnore]
   public float MaxTurnRate
   {
      get { return maxTurnRate; }
      set { maxTurnRate = value; }
   }
   // Field
   protected float maxTurnRate;

   // ...
}
As it turns out, these attributes, especially the description, also help keep the code well documented for programmers; we always knew whether a rotation property was in degrees or radians. The component model API is only available on Windows, so some preprocessor guards come in handy if you would like to compile the project for other platforms like Xbox and Windows Phone. Our classes already had the XML attributes, so the code for a single property can be a bit long, but maintenance and ease of implementation made all the scrolling totally worth it.
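The vector conversion mentioned earlier can be done with a custom TypeConverter. This is only a sketch of the idea; the real implementation guarded against malformed input much more carefully:

// Sketch: lets the property grid edit a Vector2D as "x; y" text
// (requires System.ComponentModel and System.Globalization)
public class Vector2DConverter : TypeConverter
{
   public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
   {
      return sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);
   }

   public override object ConvertFrom(ITypeDescriptorContext context,
      CultureInfo culture, object value)
   {
      string text = value as string;
      if (text != null)
      {
         string[] parts = text.Split(';');
         return new Vector2D(float.Parse(parts[0]), float.Parse(parts[1]));
      }
      return base.ConvertFrom(context, culture, value);
   }
}

// Attached to the vector type with [TypeConverter(typeof(Vector2DConverter))]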
A very important part of the editor were the different design modes. With the press of a key the game would switch to wall-editing mode or entity-manipulation mode. We added those modes in a somewhat ad hoc manner and they were more or less hacked into the game. Eliminating the turnaround time for the designer was a nice plus and it worked great for this game, but this approach doesn't scale well. In retrospect, I think it might have been better to build a separate editor that still uses the engine with all of its classes and functionality.

External Tools
Aside from the usual content creation applications like Photoshop and After Effects, we used a few plugins for Photoshop and some other tools. As we learned, Photoshop doesn't work with explicit alpha when saving in the PNG format, so we needed something that could roll back time to Photoshop 4.0, the last version where it did work; luckily there is a plugin for that. Another plugin, called Solidify, was used to fade the edges in the color channels so that no white pixels appear when alpha testing is used for rendering. The water caustics were precalculated using Caustics Generator, which is available for free. I like to work with FX Composer for developing shaders, regardless of the fact that it was a bit of overkill for the few shaders we needed; FX Composer is also available as a free download on NVIDIA's developer website. As a last line of defense when I need to debug my rendering code, I use PIX. This extremely powerful tool comes with the DirectX SDK and can capture every single call to the GPU, including its internal state, and even show the intermediate rendering results.

Sunday, February 20, 2011

Building Your Own Game Engine - Tools I

This article will focus on the tools that were developed and used during the making of the game. It turned out to be a rather large article, so it's split into two posts. Basically, a tool is everything that helps get the game done, like a level editor or a profiler. We used some external tools aside from the content creation applications like Photoshop and After Effects; I will cover those after covering our own tools.

Managers
Before jumping to the different tools, I will first give a short overview of the managers that we used in the engine. There are certain services, routines or data that might be needed at any time during the execution of the game; file reading, resource loading and camera properties are good examples. To make them available at all times we use managers, which are just simple singleton objects. At first we used static methods to wrap the actual object calls, making the calls a bit shorter, but we later settled on a simpler implementation. What follows is a short piece of the code from the camera class.
// Camera.cs
public class Camera : ISceneNode
{
   private static Camera instance;

   // Gets the instance of the camera; there can be only one.
   public static Camera Instance
   {
      get { return instance; }
   }

   public Camera(float camSizeX, float camSizeY, GameWorld world)
   {
      // ctor code ...
   }
   
   public void Initialize()
   {   
      instance = this;
   }

   // some camera code...
} 
There is only one instance of the singleton object that can be referenced. The parameters in the constructor establish the dependency order among the managers. This strategy was used for many of the tools discussed next.

Debug Drawing
The very first tool that was implemented was the debug drawing. Textures might go missing, shaders might be buggy, triangles might get clipped, scaled, culled and not render at all. Developing a game in the dark can be extremely frustrating and inefficient, and this is where the debug drawing comes in, to let us know exactly what is happening in the game. In a game engine, drawing should usually only be done within a draw method and might require setting up many shading and rendering parameters. That is the exact opposite of what we need here, so we made a debug draw that lets us draw colored lines from anywhere in the code. This can help visualize local variables like velocities, grids and such. Here is a short piece of code from the debug drawing manager.
// DebugDraw.cs
public class DebugDraw
{          
   protected static DebugDraw instance;

   // Needs a static instance to be able to answer static calls
   public static DebugDraw Instance
   { get { return instance; } }

   // lines
   public VertexPositionColor[] lines;
   public int linesCount;

   public DebugDraw(Game game)
   {
      graphicsDevice = game.GraphicsDevice;
      effect = new BasicEffect(game.GraphicsDevice, null);
      lines = new VertexPositionColor[80000];
      //
      linesCount = 0;
   }

   public void Initialize()
   {
      // Set static instance to this
      DebugDraw.instance = this;
   }

   public void DrawLine(Vector3 start, Vector3 end, Color color)
   {
      if (linesCount + 2 > lines.Length)
         return; // Silent error

      lines[linesCount++] = new VertexPositionColor(start, color);
      lines[linesCount++] = new VertexPositionColor(end, color);
   }

   public void Update(float dt)
   {               
      // Clear all out
      linesCount = 0;
   }

   public void Draw(float dt)
   {
      if (linesCount > 0)
         // call XNA line drawing ...
   }   
}
The approach is very simple: all the vertices from the calls are accumulated into a fixed array, and during the render phase all the lines are drawn with a single draw call, which is very efficient. Circles and such are simply composed of multiple line segments. Similarly, methods for drawing points were added too. With this in place we could visualize collision geometry, contact points, forces and similar things.
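For example, a debug circle helper built on top of DrawLine might look like this; the segment count is an illustrative choice:

// Sketch: a circle composed of line segments
public void DrawCircle(Vector3 center, float radius, Color color)
{
   const int segments = 24;
   Vector3 prev = center + new Vector3(radius, 0, 0);
   for (int i = 1; i <= segments; i++)
   {
      float angle = i * MathHelper.TwoPi / segments;
      Vector3 next = center + new Vector3(
         radius * (float)Math.Cos(angle),
         radius * (float)Math.Sin(angle), 0);
      DrawLine(prev, next, color);
      prev = next;
   }
}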
The next posts will discuss the profiler, the level editor and some external tools, followed by posts on the water simulation and gameplay. So if you are curious, you can click the follow button in the top left corner or subscribe to the feed with your favorite reader.

Tuesday, February 8, 2011

Building Your Own Game Engine - Basic Mechanisms

There are some game engine mechanisms that are needed in almost any game, and this article will cover exactly that. As I stated in the last article, the work on this common set of features started before we even knew what game we were making. While I have worked on large chunks of complex systems, this was my first time as a lead programmer building an engine from scratch. So I consulted my favourite books, adapted some ideas from them to fit our needs and was ready to start.
At the very low level we were free from having to develop a memory management system: we used C# and could let the garbage collector take care of that. Sometimes, however, garbage collection can cause hiccups, so special care needs to be taken. The garbage collector runs fastest when it doesn't run at all, so we had almost no dynamic memory allocation after a level had been loaded. One way we tried to enforce this was to have our low-level types, like math, physics, collisions, render jobs and such, as value types. Declaring a structure (struct) instead of a class in C# is not enough, as garbage collection is a bit more complicated and sophisticated than that, so we made sure that variables were statically scoped whenever possible. Additionally, we passed all bigger structures by reference to avoid copying memory around.
XNA is mostly a 3D rendering framework, and the 2D math that is part of the framework was, in my opinion, not a good match for what our game needed. So we implemented a 2D vector and a matrix class, as well as some helper methods for transformations; these can be lifted from any book in the field. Because we didn't do much rigid body physics or angular interpolation, there was no special class for rotations. In the future I would prefer to use spinors (the 2D equivalent of quaternions) or complex numbers for the purpose. The game structure itself was kept very simple. Everything in the game derives from BaseGameEntity and all the entities are kept in the GameWorld. The game world has no hierarchy, just a flat list of entities. What follows is a piece of code from the GameWorld and BaseGameEntity classes that I will elaborate on.
// Gameworld.cs
public class GameWorld
{
   // some code ...

   // All the entities in the world
   public List<BaseGameEntity> Entities
   { get; set; }

   // Adds an entity to the world. This can be anything.
   public void AddEntity(BaseGameEntity entity)
   {
      AddQueue.Add(entity);
   }

   // Removes an entity from the world
   public void RemoveEntity(BaseGameEntity entity)
   {
      RemoveQueue.Add(entity);
   }

   public void Update(float dt)
   {
      // Add entities
      foreach (BaseGameEntity bge in AddQueue)
         Entities.Add(bge);
      AddQueue.Clear();
      
      // Update entities
      foreach (BaseGameEntity bge in Entities)
         if(bge != null)
            bge.Update(dt);

      // Remove entities
      foreach (BaseGameEntity bge in RemoveQueue)
      {
         // Call cleanup on the entity
         if (bge is IDisposable)
            (bge as IDisposable).Dispose();
         Entities.Remove(bge);
      }
      RemoveQueue.Clear();
   }
   // more code ...
}

// BaseGameEntity.cs
[Serializable()]
public class BaseGameEntity
{
   // Reference to the world
   protected GameWorld world;

   // Unique ID of the entity.
   [XmlAttribute]
   public int ID
   {
      set {
         this.id = value;
         // Make sure they are unique
         if (id >= nextValidID)
            nextValidID = id + 1;
      }
      get { return id; }
   }
   // Each entity has a unique ID
   private int id;

   // Every entity has a type associated with it (health, troll, ammo etc)
   public EntityType Type
   { get; set; }
   
   // Called when loaded
   public virtual void Initialize(GameWorld gameworld)
   {
      // init code ...
   }

   // some code ...
}

The flat structure and the very simple sequential update are not very flexible (an interactive character on a moving platform might be difficult to implement, for example), but they were sufficient for our needs. Additionally, this approach can be memory coherent, which is very good for the cache and hence very important for performance. The update method of each entity handles everything from physics and gameplay to animation. I opted for a simple floating-point time in seconds as the parameter, instead of XNA's GameTime structure, as it doesn't require every class to calculate the time, which can bring inconsistencies, and it allows for easy time manipulation like bullet time or pause. So that there are no inconsistencies during the execution of a single world update, entities can only be added and removed at the beginning and end of it; queues of such entities are kept for this reason.
From the piece of code from the BaseGameEntity class we can see few things.
  • Every entity has a reference to the game world; that is how it communicates with other entities and stays aware of its surroundings.
  • A unique ID is kept to help identify the entities. This ID is persistent and stored in the level.
  • We have a game entity type for different things in the level, mostly used for game-play and collisions. This is different from the .NET object type, as there can be many different enemies but they are all of the same class.
  • We are using the .NET framework XML serializing support for writing and reading data, so our executable classes are also our data containers. This can be seen from the many annotations throughout the code.
This last bullet point was not as straightforward to implement as it may seem. Firstly, the game scene has a general graph structure, while an XML file has a hierarchical structure and cannot represent the scene properly. Secondly, when reading an object from an XML file, the no-arguments constructor is always called. Both are remedied by calling an initialization method on all entities after streaming, which plays a role similar to a constructor. For cross-references, unique entity IDs are stored in the XML, and during initialization references to the actual objects are obtained by querying the world.
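Resolving a stored ID back to an object reference then becomes a simple lookup during initialization. A sketch of the idea; GetEntityByID and the target field are illustrative names:

// Sketch: resolving a cross-reference after streaming
public override void Initialize(GameWorld gameworld)
{
   base.Initialize(gameworld);
   // targetID was read from the XML; turn it back into a reference
   target = gameworld.GetEntityByID(targetID);
}

// In GameWorld; a linear search is fine for a few hundred entities
public BaseGameEntity GetEntityByID(int id)
{
   foreach (BaseGameEntity e in Entities)
      if (e.ID == id)
         return e;
   return null;
}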
When a level grows to contain a couple of hundred entities, it becomes obvious that you would like to change properties in bulk, and that is where a settings file comes in. This is just another XML file which can be referenced by an entity. We decided settings should be strongly typed, so settings for a jelly are different from settings for a blowfish even if the contents are the same. Settings are implemented as simple classes, so they can be inherited when needed: if MovingEntity extends BaseGameEntity, MovingEntitySettings can extend BaseGameSettings, simplifying maintenance. The settings are applied in the Initialize method, and all entities have GetSettings and SaveSettings methods. A class extending the entity class must handle all the details of saving its settings, as it's the only class that knows the type of its settings class. This approach means a bit of extra work for every type that needs settings, but it's flexible and even allows for structured settings, like blowfish settings holding different sprite settings for each of the animations.
These settings files might need to be read frequently during the game, as each newly created rocket or bullet might need its settings, for example. Reading from disk can cause some bad hiccups, and there is the added slowdown of parsing the file and allocating memory. For this reason we cache the settings, which are usually very small data structures and, moreover, are shared among entities with the same settings. We simply use the file name as a key in a dictionary. The same caching is in fact used for all resources, to avoid duplicate data in the game. Our resource manager handles this internally, so it's completely transparent to the rest of the code.
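A sketch of that cache; the helper that actually reads and deserializes the file is illustrative:

// Sketch: settings parsed once per file and shared afterwards
private Dictionary<string, BaseGameSettings> settingsCache =
   new Dictionary<string, BaseGameSettings>();

public BaseGameSettings GetSettings(string fileName)
{
   BaseGameSettings settings;
   if (!settingsCache.TryGetValue(fileName, out settings))
   {
      // First request: hit the disk and keep the result for everyone else
      settings = LoadSettingsFromXml(fileName);   // illustrative helper
      settingsCache.Add(fileName, settings);
   }
   return settings;
}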
There is always room for improvement, so I'll mention a few things. For a future project I might opt for a different streaming solution or write macros that do most of the tedious work for me, but the overall performance and flexibility during development were quite good. We had an inconsistent way of using the entity types, which led to some funky bugs. A more structured update method that can handle different priorities and different logical steps might also be beneficial.
The next article will be on tools and the world editor.

Sunday, February 6, 2011

Building Your Own Game Engine - Introduction

Fast forward from my last post: the game that I was making with students from the Utrecht School of Arts (HKU) for the Dutch Game Garden is done, at least within the project frame given by the school. We decided on "The Jelly Reef" as the name and you can read more at www.thejellyreef.com. The game has been showcased on a few occasions and has received some very positive reactions. This is the first in a series of six articles on how the game was developed. The articles will cover some details of the game's engine structure and the technical choices made along the way. The next articles will focus on Basic Mechanisms, Tools, Gameplay, Physics and Collision Detection, and Graphics. The articles should be informative to any small team trying to develop a game, as many of the patterns reappear again and again in very different types of games. First, a screenshot to give you an idea of The Jelly Reef; there is a montage at the end of the article.

Before there was any game, there was the team and the client's request, so many of the initial decisions came from there. The team was nine people strong: three programmers, four artists and two designers. Two of the programmers on the project, including myself, were part-time, and the third had a background in interaction design with little experience in programming. All the artists on the project had a background in 2D art. For the vast majority of the project, design was handled by the lead designer, as the other designer was perfecting the art of slacking. We knew that we were working with the Microsoft Surface, so technical choice number one, the choice of language, was made for us: C#. For the performance reasons mentioned in the previous post, we decided to go for XNA instead of Silverlight. With the background of our art guys we knew that we would be making a 2D game, and the top-down perspective of the device reinforced the decision. While we did have the option to use a game engine like Unity, we didn't think it would be a good fit for the project's needs, so we decided to build our own.
For the first three weeks the team brainstormed possible concepts. Time is always in short supply, so the tech team was eager to start working on something, to avoid sleepless nights at the end of the project. We discarded some of the concepts and merged features from similar ones. From the handful of distilled concepts, we took a common denominator and started working on that. This is exactly what the next article will discuss.