Sunday, February 20, 2011

Building Your Own Game Engine - Tools I

This article will focus on the tools that were developed and used during the making of the game. It turned out to be a rather large article, so it's split into two posts. Basically, a tool is anything that helps get the game done, like a level editor or a profiler. We also used some external tools aside from content creation applications like Photoshop and After Effects. I will cover those after covering our own tools.

Managers
Before jumping to the different tools, I will first give a short overview of the managers that we used in the engine. There are certain services, routines or data that might be needed at any time during the execution of the game. File reading, resource loading and camera properties are good examples of such services. To make them available at all times we use managers, which are just simple singleton objects. At first we used static methods to wrap the actual object calls, making the calls a bit shorter, but we later decided on a simpler implementation. What follows is a short piece of code from the camera class.
// Camera.cs
public class Camera : ISceneNode
{
   private static Camera instance;

   // Gets the instance of the camera; there can be only one.
   public static Camera Instance
   {
      get { return instance; }
   }

   public Camera(float camSizeX, float camSizeY, GameWorld world)
   {
      // ctor code ...
   }
   
   public void Initialize()
   {   
      instance = this;
   }

   // some camera code...
} 
There is only ever one instance of the singleton object that can be referenced. The parameters in the constructor establish the dependency order among the managers. This strategy was used for many of the tools that will be discussed next.
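To make the dependency order concrete, here is a rough sketch of how the managers could be wired up at startup. The surrounding JellyGame class and its fields are made up for illustration; only Camera, GameWorld and the DebugDraw manager discussed below come from the actual code.
// JellyGame.cs -- hypothetical composition root; construction order follows the
// dependencies encoded in the constructors, Initialize() then publishes each singleton.
public class JellyGame : Microsoft.Xna.Framework.Game
{
   private DebugDraw debugDraw;
   private GameWorld world;
   private Camera camera;

   protected override void Initialize()
   {
      debugDraw = new DebugDraw(this);          // needs only the Game
      debugDraw.Initialize();                   // DebugDraw.Instance is now valid

      world = new GameWorld();                  // assumed parameterless constructor
      camera = new Camera(40.0f, 30.0f, world); // the camera depends on the world
      camera.Initialize();                      // Camera.Instance is now valid

      base.Initialize();
   }
}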

Debug Drawing
The very first tool that was implemented was the debug drawing. Textures might go missing, shaders might be buggy, triangles might get clipped, scaled, culled and not render at all. Developing a game in the dark can be extremely frustrating and inefficient, so this is where the debug drawing comes in to let us know exactly what is happening in the game. In a game engine, drawing should usually only be done within a draw method and might require setting up many shading and rendering parameters. This is the exact opposite of what we need, so we made a debug draw manager that lets us draw colored lines from anywhere in the code. This can help visualize local variables like velocities, grids and such. Here is a short piece of code from the debug drawing manager.
// DebugDraw.cs
public class DebugDraw
{          
   protected static DebugDraw instance;

   // Needs a static instance to be able to answer static calls
   public static DebugDraw Instance
   { get { return instance; } }

   // Rendering resources
   protected GraphicsDevice graphicsDevice;
   protected BasicEffect effect;

   // Accumulated line vertices for the current frame
   public VertexPositionColor[] lines;
   public int linesCount;

   public DebugDraw(Game game)
   {
      graphicsDevice = game.GraphicsDevice;
      effect = new BasicEffect(game.GraphicsDevice, null);
      lines = new VertexPositionColor[80000];
      //
      linesCount = 0;
   }

   public void Initialize()
   {
      // Set static instance to this
      DebugDraw.instance = this;
   }

   public void DrawLine(Vector3 start, Vector3 end, Color color)
   {
      if (linesCount + 2 > lines.Length)
         return; // Silent error

      lines[linesCount++] = new VertexPositionColor(start, color);
      lines[linesCount++] = new VertexPositionColor(end, color);
   }

   public void Update(float dt)
   {               
      // Clear all out
      linesCount = 0;
   }

   public void Draw(float dt)
   {
      if (linesCount > 0)
      {
         // call XNA line drawing ...
      }
   }
}
The approach is very simple: all the vertices from the calls are accumulated into a fixed array. During the render phase, the lines are drawn with a single draw call, which is very efficient. Circles and such are simply composed out of multiple line segments. Similarly, methods for drawing points were added too. With this in place we could visualize collision geometry, contact points, forces and similar stuff.
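Helpers are layered on top of DrawLine; a circle, for instance, can be approximated with a handful of segments. The method below is a sketch of the idea rather than the exact code we shipped:
// DrawCircle sketch -- approximates a circle with numSegments line segments.
public void DrawCircle(Vector3 center, float radius, Color color, int numSegments)
{
   Vector3 prev = center + new Vector3(radius, 0.0f, 0.0f);
   for (int i = 1; i <= numSegments; i++)
   {
      float angle = MathHelper.TwoPi * i / numSegments;
      Vector3 next = center + new Vector3(radius * (float)Math.Cos(angle),
                                          radius * (float)Math.Sin(angle), 0.0f);
      DrawLine(prev, next, color);
      prev = next;
   }
}
A call like DebugDraw.Instance.DrawLine(position, position + velocity, Color.Yellow) from any update method is then enough to see a velocity on screen for exactly one frame, since Update clears the array again.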
The next posts will discuss the profiler, the level editor and some external tools, followed by a post on the water simulation and gameplay. So if you are curious, you can click the follow button in the top left corner or subscribe to the feed with your favorite reader.

Tuesday, February 8, 2011

Building Your Own Game Engine - Basic Mechanisms

There are some game engine mechanisms that are needed in almost any game, and this article will cover exactly that. As I stated in the last article, the work on this common set of features started before we even knew what game we were making. While I have worked on large chunks of a complex system before, this was my first time as a lead programmer building an engine from scratch. So I consulted my favourite books, adapted some of the ideas from there to fit our needs and was ready to start.
At the very low level we were free from having to develop a memory management system, as we used C# and could let the garbage collector take care of that. Sometimes, however, garbage collection can cause hiccups, so special care needs to be taken. The garbage collector runs fastest when it doesn't run at all, so we had almost no dynamic memory allocation after a level had been loaded. One way we tried to enforce this was to keep our low level stuff like math, physics, collisions, render jobs and such as value types. Declaring a structure (struct) instead of a class in C# is not enough, as garbage collection is a bit more complicated and sophisticated than that, so we also made sure that variables are statically scoped whenever possible. Additionally, we passed all bigger structures by reference to avoid copying memory around.
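As a minimal sketch of that strategy (the AABB type here is purely illustrative, not lifted from our code base), data lives in value types and the bigger ones are handed around by reference:
// AABB.cs -- a value type lives on the stack or inline inside arrays, so it never
// creates garbage; larger structs like this one are passed by ref to avoid copies.
public struct AABB
{
   public Vector2 Min;
   public Vector2 Max;

   public static bool Overlap(ref AABB a, ref AABB b)
   {
      return a.Min.X <= b.Max.X && b.Min.X <= a.Max.X
          && a.Min.Y <= b.Max.Y && b.Min.Y <= a.Max.Y;
   }
}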
XNA is mostly a 3D rendering framework, so the 2D math that is part of the framework was in my opinion not a good match for what we needed for the game. So we implemented a 2D vector and a matrix class, as well as some helper methods for transformations. These can be lifted from any book in the field. Because we didn't do much rigid body physics or angular interpolation, there was no special class for rotations. In the future I would prefer to use spinors (the 2D equivalent of quaternions) or complex numbers for that purpose. The game structure itself was kept very simple. Everything in the game was derived from BaseGameEntity and all the entities were kept in the GameWorld. The game world had no hierarchy, just a flat list of entities. What follows is a piece of code from the GameWorld class and the BaseGameEntity class that I will elaborate on.
// Gameworld.cs
public class GameWorld
{
   // some code ...

   // All the entities in the world
   public List<BaseGameEntity> Entities
   { get; set; }

   // Entities queued to be added or removed at a safe point in the update
   private List<BaseGameEntity> AddQueue = new List<BaseGameEntity>();
   private List<BaseGameEntity> RemoveQueue = new List<BaseGameEntity>();

   // Adds an entity to the world. This can be anything.
   public void AddEntity(BaseGameEntity entity)
   {
      AddQueue.Add(entity);
   }

   // Removes an entity from the world
   public void RemoveEntity(BaseGameEntity entity)
   {
      RemoveQueue.Add(entity);
   }

   public void Update(float dt)
   {
      // Add entities
      foreach (BaseGameEntity bge in AddQueue)
         Entities.Add(bge);
      AddQueue.Clear();
      
      // Update entities
      foreach (BaseGameEntity bge in Entities)
         if(bge != null)
            bge.Update(dt);

      // Remove entities
      foreach (BaseGameEntity bge in RemoveQueue)
      {
         // Call cleanup on the entity
         if (bge is IDisposable)
            (bge as IDisposable).Dispose();
         Entities.Remove(bge);
      }
      RemoveQueue.Clear();
   }
   // more code ...
}

// BaseGameEntity.cs
[Serializable()]
public class BaseGameEntity
{
   // Reference to the world
   protected GameWorld world;

   // Unique ID of the entity.
   [XmlAttribute]
   public int ID
   {
      set {
         this.id = value;
         // Make sure IDs stay unique
         if (id >= nextValidID)
            nextValidID = id + 1;
      }
      get { return id; }
   }
   // Each entity has a unique ID
   private int id;

   // The next ID that is guaranteed to be unused
   private static int nextValidID;

   // Every entity has a type associated with it (health, troll, ammo etc)
   public EntityType Type
   { get; set; }
   
   // Called when loaded
   public virtual void Initialize(GameWorld gameworld)
   {
      // init code ...
   }

   // some code ...
}

The flat structure and very simple sequential update are not very flexible (an interactive character on a moving platform might be difficult to implement, for example), but they were sufficient for our needs. Additionally, this approach can be memory coherent, which is very good for the cache and very important for performance. The update method of each entity handles everything from physics and game-play to animation. I opted for a simple floating point time in seconds as the parameter instead of XNA's GameTime structure, as it doesn't require every class to calculate the elapsed time, which can bring inconsistencies, and it allows for easy time manipulation like bullet time or pause. So that there is no inconsistency during the execution of a single world update, entities can only be added and removed at the beginning and end of it. Queues of such entities are kept for this reason.
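As a rough sketch of why the plain float is convenient (timeScale and world are illustrative field names, not necessarily the ones we used), pause and bullet time collapse into a single multiplication before the world update:
// Inside the main Game class; timeScale is 1 for normal play,
// something like 0.25f for bullet time and 0 for pause.
protected override void Update(GameTime gameTime)
{
   float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
   world.Update(dt * timeScale);   // every entity sees the same scaled time step
   base.Update(gameTime);
}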
From the piece of code from the BaseGameEntity class we can see a few things.
  • Every entity has a reference to the game world; that is how it communicates with other entities and can be aware of its surroundings.
  • A unique ID is kept to help identify the entities. This ID is persistent and stored in the level.
  • We have a game entity type for the different things in the level, mostly used for game-play and collisions. This is different from the .NET object type, as there can be many different kinds of enemies that are all instances of the same class.
  • We are using the .NET Framework's XML serialization support for writing and reading data, so our executable classes are also our data containers. This can be seen from the attributes throughout the code.
This last bullet point was not as straightforward to implement as it may seem. Firstly, the game scene has a general graph structure, while an XML file has a hierarchical structure and cannot represent the scene directly. Secondly, when reading an object from an XML file, the parameterless constructor is always called. These issues are remedied by calling an initialization method on all entities after streaming, which plays a similar role to a constructor. For cross-references, unique entity IDs are stored in the XML, and during initialization references to the actual objects are obtained by querying the world.
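A sketch of how such a cross-reference can be resolved is shown below; the HomingEntity class and the TargetID property are hypothetical names used only for illustration:
// HomingEntity.cs -- only the ID of the target survives XML serialization,
// the live object reference is re-established after streaming.
public class HomingEntity : BaseGameEntity
{
   [XmlAttribute]
   public int TargetID { get; set; }

   [XmlIgnore]
   public BaseGameEntity Target { get; private set; }

   public override void Initialize(GameWorld gameworld)
   {
      base.Initialize(gameworld);
      // Query the world to turn the stored ID back into an object reference.
      Target = gameworld.Entities.Find(e => e.ID == TargetID);
   }
}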
When a level grows to contain a couple of hundred entities, it's obvious that you would like to change properties in bulk, and that is where a settings file comes in. This is just another XML file which can be referenced by an entity. We decided to make settings strongly typed, so the settings for a jelly are different from the settings for a blowfish even if the contents are the same. Settings are implemented as simple classes so they can be inherited when needed. So if a MovingEntity extends BaseGameEntity, MovingEntitySettings can extend BaseGameSettings, simplifying the maintenance. The settings are applied in the Initialize method and all entities have GetSettings and SaveSettings methods. The class extending the entity class must handle all the details of saving the settings, as it's the only class that knows the type of its settings class. This approach adds a bit of extra work for every type that needs settings, but it's flexible and can even allow for structured settings, like blowfish settings having different sprite settings for each of its animations.
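A minimal sketch of what such a settings hierarchy can look like (the property names are made up for illustration, not taken from our code):
// BaseGameSettings.cs -- plain serializable data; derived entities get derived settings.
[Serializable()]
public class BaseGameSettings
{
   public float Scale;
   public string SpriteName;
}

[Serializable()]
public class MovingEntitySettings : BaseGameSettings
{
   public float MaxSpeed;
   public float Acceleration;
}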
These settings files might need to be read frequently during the game, as each newly created rocket or bullet might need its settings, for example. Reading from disk can cause bad hiccups, and there is the added slowdown of parsing the file and allocating memory. For this purpose we cache the settings, which are usually very small data structures and, moreover, are shared among entities with the same settings. We simply use the file name as a key in a dictionary. The same caching is in fact used for all resources to avoid duplicate data in the game. Our resource manager handles this internally, so it's completely transparent to the rest of the code.
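The cache itself can be as simple as a dictionary keyed by the file name. Here is a sketch under the assumption that the settings are deserialized with an XmlSerializer; our resource manager wraps the same idea for all resource types:
// SettingsCache.cs -- sketch of a cache that reads and parses each settings file only once.
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public static class SettingsCache
{
   private static readonly Dictionary<string, object> cache = new Dictionary<string, object>();

   public static T Load<T>(string fileName)
   {
      object settings;
      if (!cache.TryGetValue(fileName, out settings))
      {
         using (FileStream stream = File.OpenRead(fileName))
            settings = new XmlSerializer(typeof(T)).Deserialize(stream);
         cache[fileName] = settings;   // entities with the same file share one instance
      }
      return (T)settings;
   }
}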
There is always room for improvement, so I'll mention a few things. For a future project I might opt for a different streaming solution or write macros that do most of the tedious work for me, but the overall performance and flexibility during development were quite good. We had an inconsistent way of using the entity types, which led to some funky bugs. A more structured update method that can handle different priorities and different logical steps might also be beneficial.
The next article will be on tools and the world editor.

Sunday, February 6, 2011

Building Your Own Game Engine - Introduction

Fast forward from my last post: the game that I was making with students from the Utrecht School of Arts (HKU) for the Dutch Game Garden is done, at least within the project frame that was given by the school. We decided on "The Jelly Reef" as the name and you can read more at www.thejellyreef.com. The game has been showcased on a few occasions and has received some very positive reactions. This is the first in a series of six articles on how the game was developed. The articles will cover some details of the game's engine structure and the technical choices made along the way. The next articles will focus on Basic Structure, Tools, Game Play, Physics and Collision Detection, and Graphics. The articles should be informative to any small team trying to develop a game, as many of the patterns reappear again and again in very different types of games. First, a screenshot can give you an idea about The Jelly Reef, and there is a montage at the end of the article.

Before there was any game, there was the team and the client's request, so many of the initial decisions came from there. The team was 9 people strong: 3 programmers, 4 artists and 2 designers. Two of the programmers on the project, including myself, were part time, and the third one had a background in interaction design with little experience in programming. All the artists on the project had a background in 2D art. For the vast majority of the project, design was handled by the lead designer, as the other designer was perfecting the art of slacking. We knew that we were working with the Microsoft Surface, so technical choice number one, the language, was made for us: C#. Because of the performance reasons mentioned in the previous post, we decided to go for XNA instead of Silverlight. With the background of our art guys we knew that we would be making a 2D game, and the top down perspective of the device reinforced the decision. While we did have the option to use a game engine like Unity, we didn't think that it would be a good fit for the project's needs, so we decided to build our own.
For the first three weeks the team was brainstorming on possible concepts. Time is always in short supply, so the tech team was eager to start working on something to avoid sleepless nights at the end of the project. We discarded some of the concepts and merged features from similar ones. From the handful of distilled concepts, we took a common denominator and started working on that. This is exactly what the next article will discuss.

Sunday, November 7, 2010

Developing for the Microsoft Surface

My plan for the current semester was to try out some touch devices, which is why I bought an iPod touch. But little did I know that I would be developing for the Microsoft Surface. I am working with the Utrecht School of Arts (HKU) on a project for the Dutch Game Garden, and so far it looks quite promising.
The Microsoft Surface has a huge area that can detect and track an almost infinite number of fingers. Ok, not infinite, but more than it makes sense to have on it at any given time. Because it uses image processing and not touch, it has some more tricks up its sleeve, for example tag recognition: markers that you can print or stick to any object and then track on the surface, as well as raw image processing if needed. The SDK is .NET based, so you get access to the .NET Framework and you can use both Silverlight and XNA.
On the other hand, the machine is hugely underpowered and the GPU is especially ridiculous, considering the $12,000 or so price tag. Most of the demos and samples had a reaction lag unacceptable for most game scenarios, which did make me a bit worried. On this particular project I get to be the engine architect, so I decided to start from zero and build an engine based on XNA. As it turns out, the lag in the samples actually came from the way Silverlight processes events, so XNA games run smoothly at 60 fps. Leveraging the power of C# (reflection, serialization and some generics abuse), in under two months we have developed a system that has an in-game world editor, flexible gameplay elements, collision detection with a bit of physics and even our own profiler. In the last phase we will of course inevitably see the ugly side of C#, when we'll need to optimize the whole thing to keep it running at 60 fps.
The device also brought to attention some interesting design challenges, like which point of view to take so that the game doesn't look upside down from any angle. That should be addressed next week, hopefully just in time for me to report on it. In the meantime you can check this video of some of the earlier prototypes.

Sunday, October 10, 2010

GDC Europe Retrospective


It took me some time to write a few words about GDC Europe, and not because there wasn't much said at the conference. On the contrary, there was so much material that I needed some time to digest it and put it into the right perspective.
Being on the technical side in general, I tried to catch as many of the technical lectures as possible, and some were very insightful. I'll pick three of them that stuck in my mind and give a short overview.
Eric Chahi from Ubisoft gave a lecture on high-performance simulation, going through the details of their Project Dust. Now named "From Dust", it is a god game where you have the ability to control a dynamic world shaped by the elements. The game achieves interactive simulation of earth, water and wind by subdividing the world into a grid. Each grid cell is simulated separately, using the neighboring cells as inputs. This idea is often used in weather prediction models, but what makes this implementation stand out is the extreme performance optimization, done so that it can run smoothly on consoles. All the data and code per cell are structured so that they fit in 256KB, so all calculations can be done on a PlayStation SPU or, in the case of the Xbox 360, without a single cache miss.
Michael Drobot from Reality Pump Studios gave a lecture on advanced material rendering, which was mainly a collection of dirty tricks for deferred shading. Many of them used the long forgotten technique of dithering. A memorable lecture was also given by Mathew Rubin from Black Rock Studio, the focus of which was their in-studio pipeline, which allows designers to construct and test their levels in the shortest time possible. The topic of most of the lectures was quite narrow, but most importantly they gave a glimpse of how deep developers are prepared to dive to create the next generation of games and supply their designers and artists with the best possible tools.
The lectures by Crytek and Autodesk were slightly disappointing in comparison and served more as a showcase for their products than as technology lectures. If I had been a game producer choosing which technology to license, the lectures would have been more helpful; but I am not.
The real jewel of the conference, however, is the mix of disciplines that it includes, so I later shifted my focus towards the fields less known to me, like game design, production and even marketing. The ideas presented were extremely helpful and often timeless, as good design doesn't go bad; which is more than can be said for rendering techniques. Warren Spector's keynote on the relation of games to other media has been a guideline for most of my recent game design decisions. The lecture by Louis Castle on how to survive the industry was, to say the least, an inspirational one.
In conclusion, it was a great experience to be part of the game-making world. I would love to be there again, and next year with a project to showcase.

Monday, September 20, 2010

Going all Apple


After quite some years sitting comfortably in the PC camp, last month I decided to go Apple all the way. And by PC user I mean Microsoft Certified Trainer, DirectX and .NET developer and an avid gamer. Now what makes a man turn Mac and fork over money for a computer that is being sold in the lobby of a fashion store here in Utrecht?
  • It’s the only way to develop for the iPhone/iPod touch.
  • It’s shiny.
  • And finally it’s the only way to develop for the iPhone/iPod touch.
Obviously strong reasons… so I bought an iMac and got an iPod touch for (almost) free. After a few weeks of using it, all I can say is: I love it! Snow Leopard runs extremely smoothly and so does Windows 7. The bright LED screen blows my 22” Samsung LCD out of the water. It runs whisper quiet even when under heavy load. The sound output doesn’t pick up interference noise from the hard drive or any other hardware component, unlike my previous not-so-cheap HP. The ergonomics are great (except for the silly location of the stereo jack!) and the drivers for my Wacom Intuos4 tablet seem to be more stable, allowing me to use hardware rendering in Photoshop. Time Machine is the best backup solution I have used, easy to set up while still giving you all the options you need. Compatibility is no issue as it runs Microsoft Office (in fact it has since the ’80s) and easily shares any media with my Xbox 360.

However, a few things are less than perfect. Graphics performance is below my expectations. Both Maya and Portal ran slower than under Windows and, on top of that, Portal shows screen tearing and flickering. Using vertical sync didn’t alleviate the problem either. I am not sure if it is the GPU drivers, the OpenGL implementation or the game itself, but I needed my Windows 7 installation anyhow. How else am I going to run Visual Studio, my all time favorite Microsoft product (after the Xbox steering wheel :) )?

[Non-geeks can skip this paragraph]
And this takes me to developing on the Mac, which is why I have it in the first place. Xcode, in my opinion, is not as slick as Visual Studio. It is not always easy to find the window that you need without using Exposé. Alt-Tab is less than useful, as all windows of the same application are bundled together, so after getting to your application you need Cmd + ~ to cycle through its windows. On the other hand, the excellent code editor stays out of your way, and the simple code completion is better than IntelliSense for C/C++, which is not too hard to beat. The language of choice (or lack of one) for developing applications for the Mac and iOS (iPhone/iPod/iPad), and hence for Xcode, is Objective-C. Objective-C has syntax that can make C++ look elegant and C# a wet dream. The language does have some great features, like categories, which allow extending the functionality of an existing class without creating a new one, but the memory model is somewhat scary for game developers. It has all the performance nausea of garbage collection with none of its benefits.

Is it all worth it to have your game on the small screen and potentially thousands of people enjoying it? Hell yeah! So be on the lookout for some handcrafted pixels hitting the App Store.

Thursday, August 26, 2010

Gamescom 2010

Thanks to the fine people of Task Force Innovatie Utrecht I got to see this year’s gamescom and GDC Europe. What I got was more information on games than one can handle, so I’ll try to put what stuck in my mind most into a few paragraphs, combined with some photos.

I’ll start with the organization of the gamescom fair, which was excellent. The daily ticket also included a train ticket for the same day, which, combined with the city’s train system, made getting to the fair a breeze. A quarter of a million people flooded the Cologne exhibition center, yet I never felt that it was too crowded. However, I did have to wait in line for quite a bit in front of the booths of most games, except the teen rated ones.

The first thing I noticed: German players seem to be big on MMO games and games featuring dragons and monsters in general, so naturally a plethora of those were being showcased. My personal favorite was Torchlight II. It has the same unique visual style as the first game, while removing many of the annoyances of its predecessor. Moving on to the next hall...


Two trends that were predominant at the fair and can give a glimpse of what is in store for the next year or so were motion controllers and 3D. Nintendo has been ruling the sales charts with the Wii, largely due to its motion controller, the Wiimote. This holiday season, both Kinect for the Xbox 360 and Move for the PlayStation are coming out. The two new devices are completely different and both have their pros and cons.

The PlayStation Move is a more traditional controller and somewhat similar to the Wii remote. The main controller has the usual PlayStation buttons, including a trigger. It tracks motion and rotation over all axes independently and feels quite accurate, but in some games there is an annoying lag between the input move and the reaction on screen. Most of the games shown that used the new controller were casual games made especially to take advantage of motion input. My personal favorite was an on-rails shooter that actually had gameplay similar to Duck Hunt for the Nintendo Entertainment System (NES) from 1984. On the other hand, the serious games using the PlayStation Move, like Killzone 3 and SOCOM 4, worked perfectly fine without it.


The Xbox 360’s Kinect takes a whole different approach by removing the controller altogether. The Kinect allows the Xbox 360 to do full motion capture of two players in real-time, using a whole array of sensors, including an infrared projector. We have all seen that after some time with the Wiimote everyone learns how to fake a move and do it with the least amount of effort. Kinect, on the other hand, tracks the movement of the whole body in space, so you really have to jump around to make things happen in the game. I had lots of fun playing Kinect Adventures!, Kinect Sports and even Dance Central; and if you have seen me dance you would know I am not exactly the dancing type. All of the Kinect demos were done in a protected bubble, so my main concern is the ability of the sensor to handle distractions and noise. There could be people moving in the background, a dog running in front of your legs, or simply a table in the living room.

3D was a big topic at GDC and lots of games were shown in 3D later at gamescom; even Halo Reach was playable in 3D. Crysis looked great in 3D and the effect had been carefully balanced to add to the gameplay rather than just serving as eye-candy. Sony is bringing full support for 3D TV screens to the PlayStation over the HDMI 1.4 standard. But even with Nvidia’s crystal clear 3D Vision my eyes get tired and a headache might follow. So I'll just say that I can’t wait to see Wipeout HD in 3D. I bet this time around they can induce seizures even in people without epilepsy.

Regardless of the technical novelties coming in the next year, I am personally most excited about the upcoming games that focus on fun gameplay, like Portal 2 and Little Big Planet 2, and hopefully a plethora of good indie games like Limbo and Burn Zombie Burn!