08 Mar 2020

Introduction

In this article I reflect on my experience during the creation of a custom 3D engine in MonoGame, in chronological order. It starts with the very beginnings of my experience with 3D and ends with the state of the engine as of the publication date of this post. Since the engine is not finished yet, I will probably be writing more articles about it as development progresses.

While writing this article, I found that doing so really helps me get a better overview of what exactly I'm doing, and it has even contributed to solving existing problems in the engine itself. I also think it's a great and fun use for documentation.

First things first: how did it start? I've been wanting to create a 3D game since the summer of 2016. Back then, all my games were 2D games made with GameMaker. While creating 3D games in GameMaker is possible, it didn't seem ideal to me. I wanted to use C#, but I didn't like the idea of having to learn an entirely new IDE, such as Unity. Besides that, I wanted as much control as possible, so I figured writing my own engine in C# was the best choice. Luckily, there's something called the MonoGame framework, which is an open-source and cross-platform reimplementation of the abandoned Microsoft XNA framework. This seemed perfect; it provided the very basics I needed without offering its own IDE. It's all code.

People have asked me why I didn't go with C++ instead, since game engines are pretty low-level and C# is considered to be a high-level programming language (even though you do have access to various low-level functionality). To summarize: I already had experience with the more modern language C#, and I simply didn't like the idea of working in an unmanaged environment (no garbage collection, type safety, memory management, portability, etc.). Besides performance, I don't think there really is an advantage to using C++ these days. Almost all features in C++ exist in C# in some form, and if you know how to write efficient code and how not to abuse the garbage collector, I don't think it makes that much of a difference. People have written a lot of amazingly efficient low-level libraries in pure C#.

I can't say I have much experience with C++, but I've only ever felt I was missing out on one of C++'s features once, which was multiple inheritance. I later realized the whole concept of inheriting from multiple types is quite overkill and adds unnecessary complexity to code, and you can get around that problem in C# simply by using object composition. Also, since the release of C# 8 you can actually get some sort of multiple inheritance through default interface implementations, which still seems quite odd to me, but I'll probably dive into it eventually. As of now, the engine is still on .NET Framework anyway, so I can't use default interface implementations, as they are not supported by the .NET Framework runtime.
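To illustrate, a default interface implementation looks something like this (not code from the engine, just a minimal C# 8 sketch):

public interface IMovable
{
	float Speed { get; }

	// Default implementation; implementing types inherit this behavior
	// without providing their own body.
	float DistancePerSecond(float deltaTime) => Speed * deltaTime;
}

public class Player : IMovable
{
	public float Speed => 5f;
}

Note that the default member is only reachable through the interface type, e.g. ((IMovable)player).DistancePerSecond(dt).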

Anyway, let's get to it.

Writing a 3D engine in MonoGame

It began during the very first few days of 2017. I already had a couple months of experience with C#; I used it at school and during my first internship, and I quickly began loving the language. My internship was coming to an end, and I wasn't as busy anymore, so one day when I got home I installed MonoGame and simply began coding. I followed tutorials and read articles about basic 3D functionality, such as vectors, projection matrices, quaternions, and all that stuff. I had never even heard of most of those terms, so I had a lot of new things to learn.

04 Feb 2017
One of the first versions of the engine included a camera, very basic player movement (no collisions, just a hardcoded ground value to fake jumping), and models. The lighting is provided by the BasicEffect class in MonoGame.

After setting up the very basics of the engine, such as rendering, input handling, skyboxes, etc., as well as creating models using a modeling program called Wings3D, I began looking into collision detection. I knew this was going to be difficult, and it turned out to be a real challenge indeed. Eventually, after a couple of months and with help from my dad and the old 3D engine he wrote in C++ around the year 2000, I had my own collision detection. It was neither fast nor accurate, and it didn't handle collisions for scaled models properly, but other than that it mostly worked fine.

02 Jul 2017
Collision debugging in action

At this point I had a pretty basic engine. It read levels from plain text files: a list of layer files, each containing a grid of objects represented by characters. Here is a small example:

------------------------
XXXXXX---------------X--
XXXXXX---------------X--
XXXXXX---------------X--
---------------------X--
---------------------X--
--------XX-----------X--
---------------------X--
--------XX--XXXXXXXXXX--
------------------------
--------XX--------------
------------------------

This worked well for testing, but of course wasn't very convenient when creating actual levels.

24 Jul 2017
A level in the final engine

Eventually though, I decided it was time to step out of the testing phase and build an actual engine.

Rewriting the 3D engine

This time I set up a separate project for the engine itself, and another one for a test game.

I also rewrote a bunch of features; levels were now read from XML files, which I would later abandon again in favor of JSON.

04 Aug 2017
Here's one of the first recordings from a few days after I began rewriting the engine from scratch again. The reason it looks so blank is that I removed MonoGame's default BasicEffect lighting.

Another thing I was particularly interested in was shading. I started looking into the shader language HLSL and, after a while, came up with a point light shader. The next month I built my own level editor, allowing me to create levels inside the game. I didn't have my own UI system yet, so everything was done exclusively with the keyboard.

28 Sep 2017
Early version of the level editor and some point lighting
06 Feb 2018
Combined point lights

After some more months, I realized I didn't actually want to make use of some of the base components of MonoGame, more specifically the part that handles the update and render cycles. There were some significant limitations to it that I wanted to avoid in the future, so I implemented my own update and render methods and separated my game object class hierarchy from MonoGame's GameComponent class.

Textures

At this point, you've probably noticed that something crucial is missing, something I couldn't get working for quite a while... Model texturing. I don't really know why I began implementing such an important feature this late, but I do remember running into annoying, cryptic error messages from the shader compiler (SharpDX.SharpDXException: 'HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.), which probably made me postpone this feature for a while. (Turns out all I had to do in order to fix that error was include something as simple as GraphicsProfile = GraphicsProfile.HiDef; when setting up the GraphicsDeviceManager...) Another reason was probably that I found working with UV maps in 3D modeling software fairly difficult. Even after I got texturing itself working, I didn't mess with UV mapping until a couple of months later.
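For reference, that fix in context looks something like this (the game class name here is just a placeholder):

public class MyGame : Game
{
	private readonly GraphicsDeviceManager graphics;

	public MyGame()
	{
		// Request the HiDef graphics profile, which fixed the shader compiler error described above.
		graphics = new GraphicsDeviceManager(this)
		{
			GraphicsProfile = GraphicsProfile.HiDef,
		};
	}
}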

After implementing textures, I found they appeared blurry when upscaled. I wasn't sure how to get this working properly in my shader, still being fairly unfamiliar with HLSL and GPU programming in general. I gave up on trying to fix it, and instead implemented a post-processing effect to posterize the colors. For example, consider the following 16x16 texture:

Original
Blurred by shader
Posterized

In order to posterize colors you need to apply something like this simple function to the pixel shader:

return floor(finalColor * posterize) / posterize;

finalColor being a float3 or float4 (saturated between 0 and 1), and posterize being a float. The higher the posterize value, the more distinct colors will be rendered.
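For completeness, such a parameter would be driven from the C# side roughly like this (the effect asset name is just a placeholder; the parameter name matches the snippet above):

// Typically done when loading content; "PostProcess" is a placeholder asset name.
Effect postProcessEffect = Content.Load<Effect>("PostProcess");
postProcessEffect.Parameters["posterize"].SetValue(16f); // 16 color levels per channel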

25 Mar 2018
Turns out this looked pretty cool combined with lighting.

Test games

Having worked on this engine for over a year (though not consistently), one day I thought to myself: "Hmmm, maybe I should actually create a game with the engine." I came up with an idea for a little speedrunning parkour game. It ended up being quite enjoyable, to be fair, aside from the odd collision handling. In this short video you can actually see me abuse the collision system in order to jump higher. I also recreated some key features present in a lot of first-person 3D games, such as bunnyhopping and strafing.

13 Mar 2018
A level where you need to get to the end as fast as possible

Later I started developing some more test games, but this one is now gone for good. (I could always use the time machine (Git) though.)

Unnecessary complexities

Around April 2018, I realized some of the code had kind of exploded and become unmaintainable. Even after merging the engine and game back together, I realized the very core of the engine itself was set up inconveniently. I also had an overly complex system of recursively attaching models to other models. If I were to implement this feature in the level editor, it would be a pain to set up. To make things worse, I had a generic system of attaching behaviors to those models. These behaviors sometimes conflicted with each other when multiple of them were attached to a single model. As a workaround I simply combined the mathematical aspects of two conflicting behaviors into a single behavior every time I needed both of them, but this was far from ideal.

28 Mar 2018
Here's an example of a recursively initialized model chain, each model with its own transformations and behaviors.

And here's the XML for those very same two models. Note how there is a <ModelObject> inside another <ModelObject>.

<ModelObject position="-31,10,-215" scale="1,1,1" rotation="0,0.7071068,0,-0.7071068" model="Wheel" texture="SkyboxSky" color="1,1,1,1">
	<Light position="0,0,0" scale="1,1,1" rotation="0,0,0,1" color="1,1,0,1" intensity="2" radius="2" specularPower="0" randomness="0.025" />
	<Sound3D position="0,0,0" scale="1,1,1" rotation="0,0,0,1" fileName="ObsidianMono" loop="True" volume="1" />
	<ModelObject position="0,0,0" scale="1.5,0.5,1" rotation="0,1,0,-4.371139E-08" model="Mushroom" texture="A" color="-7.450581E-08,0.9,0.5999999,1">
		<Light position="0,0,0" scale="1,1,1" rotation="0,0,0,1" color="1,0,1,1" intensity="2" radius="1" specularPower="0" randomness="0.05" />
		<TransformationBehavior type="Rotate" angle="0" angleIncrement="0.2" axis="0,1,0" />
	</ModelObject>
	<TransformationBehavior type="FloatRotate" floatSpeed="1.5" floatAmount="0.5" angle="0" angleIncrement="0.5" axis="0,1,0" />
</ModelObject>

Pretty cool, but why would I ever need to build levels using models attached to other models, attached to other models, that may even be attached to yet more models, all with their own inherited behaviors? Sure, this works for a car and its wheels for instance, but I wouldn't actually build a car inside the level editor; it would make much more sense if something like that was just a programmed game object instead.

All of that recursion and inheritance turned out to be rather unnecessary and difficult to work with, especially in the level editor. So after the last small changes were done, such as switching from XML to JSON for the levels, I was convinced I needed to rewrite the engine, again.

Rewriting the 3D engine again

About a month later, in late May, I started rewriting. This time I wanted to set it up correctly. I created a blank project and slowly started taking pieces from the old code base, putting them in the new project, and rewriting them where necessary. I simplified a lot of things by removing a bunch of unnecessary, overengineered features, such as the ones I talked about earlier. I also finally implemented proper UV mapping for model textures.

20 Jul 2018
The first working UV maps, applied to the pillars

For a good while I played around with more game concepts, and slowly but surely continued improving the level editor. Here are a couple of videos:

23 Jul 2018
Here's another game concept I came up with. WARNING: This is the first video with sound.
25 Jul 2018
The level editor
25 Jul 2018
Many lines

At this point I was basically working on the engine for multiple hours every day. You could actually piece together my entire sleep schedule just by looking at the project's Git commit history. It seemed fairly normal this time, compared to the previous summer, when I had commits ranging between 2 and 5 AM all over the place... (Programming, or attempting to be productive, at night usually isn't very efficient, so I'm glad I stopped doing that... (It is absolutely not 1:12 AM as I'm writing this.))

Eventually though, I knew I had to focus on fixing my collisions. There were still some small bugs regarding player movement, and the whole thing was too slow. The player bugs seemed rather random, such as falling through the world at certain angles or speeds. These were difficult to track down, but after some tweaking they were mostly fixed.

Performance was another big issue. Since I was using a single, really large model for my levels at the time, and my collision detection relied on brute-forcing through vertices, the simple early-rejection optimization of skipping a collision test when two bounding spheres do not overlap didn't work well; the bounding sphere for the world model would be huge. I thought of splitting the world model into smaller parts at runtime, but eventually I just gave up on it.

One of the huge world models displayed in Wings3D

One major improvement to the collision response handling came from model inflation. Instead of giving the player some sort of radius to test against vertices, I would just inflate all models during startup and test those against only the player's position; a point rather than an actual shape. This removed the need for the hacky method of slowly pushing the player away if it penetrated a surface too far.

Model inflation code looked somewhat like this:

private void CalculateHitbox(float radius)
{
	// Deep-copy the model's triangles and vertices into a separate "hitbox" set.
	triangleSetHitbox.triangles = new List<Triangle>();
	triangleSetHitbox.vertices = new List<Vector3>();
	for (int i = 0; i < triangleSet.triangles.Count; i++)
		triangleSetHitbox.triangles.Add(triangleSet.triangles[i].DeepCopy());
	for (int i = 0; i < triangleSet.vertices.Count; i++)
		triangleSetHitbox.vertices.Add(triangleSet.vertices[i]);

	for (int i = 0; i < triangleSet.vertices.Count; i++)
	{
		// Collect the distinct plane normals of all triangles that share this vertex.
		List<Vector3> normals = new List<Vector3>();
		for (int j = 0; j < triangleSet.triangles.Count; j++)
		{
			if ((triangleSet.vertices[i] == triangleSet.vertices[triangleSet.triangles[j].a]
			  || triangleSet.vertices[i] == triangleSet.vertices[triangleSet.triangles[j].b]
			  || triangleSet.vertices[i] == triangleSet.vertices[triangleSet.triangles[j].c])
			  && !normals.Contains(triangleSet.triangles[j].planeEquation.N))
			{
				normals.Add(triangleSet.triangles[j].planeEquation.N);
			}
		}

		// Average those normals and offset the hitbox vertex along the result
		// by the player's radius, effectively inflating the model.
		Vector3 avgNormal = Vector3.Zero;
		foreach (Vector3 normal in normals)
			avgNormal += normal;
		avgNormal.Normalize();
		avgNormal *= radius;

		triangleSetHitbox.vertices[i] -= avgNormal;
	}
}

That's some old legacy collision code. Weird to think all of that is now gone; even classes such as Triangle and TriangleSet no longer exist in the current engine.

Anyway, we're now nearing the end of the summer of 2018. This is where I really started developing a whole bunch of new features simultaneously; some of these are described in the sections below.

Delta time

Two of those features, slow motion and frame interpolation, required me to implement a crucial change across the whole engine: using delta time rather than hardcoded per-frame values. Yes, seriously, the whole engine didn't use delta time. Coming from a GameMaker background I was used to doing everything in per-frame steps, but that is actually a rather wacky way of implementing game loops.

I had always had a hard time wrapping my head around the concept of delta time, not understanding when to multiply values by delta time and when not to. What does delta time even represent? The delta time value is simply the number of seconds that passed since the previous update. For instance, if you declare a float called timer with value 0.5f and call timer -= dt; every update cycle, the timer variable represents 0.5 seconds and will reach 0 after 0.5 seconds of real time. This is obviously a lot more robust than relying on frame speed by setting timer to 30 and subtracting 1 from it every update cycle, which only works if the game runs at exactly 60 frames per second. The simple realization that delta time works in seconds snapped me out of the confusion and, just like that, I completely understood how to apply it in games. Stop thinking in frame speeds, start thinking in actual time.
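As a small illustrative sketch (not engine code), here are the two approaches side by side:

public class TimerExample
{
	// Frame-based: only correct if the game updates exactly 60 times per second.
	private int frameTimer = 30; // 30 frames = 0.5 seconds at 60 fps

	public void UpdateFrameBased()
	{
		frameTimer -= 1;
		if (frameTimer <= 0) { /* timer elapsed */ }
	}

	// Delta time: dt is the elapsed time in seconds, so this works at any update rate.
	private float timer = 0.5f; // 0.5 seconds, regardless of frame rate

	public void UpdateWithDeltaTime(float dt)
	{
		timer -= dt;
		if (timer <= 0f) { /* timer elapsed */ }
	}
}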

So after rewriting some base functionality, as well as the player and some other game objects like enemies and projectiles, it was done. This was a very important step for the engine's development. Using delta time would allow me to implement slow motion, a feature I really wanted in my engine. It also provides the flexibility of running the game loop at any desired speed, which is required for implementing frame interpolation.

Shadows

Around the same time as the whole delta time rewrite, I started working on another big feature: real-time shadows. This had been on my mind for quite some time, but I had no idea how to approach the issue. I didn't want to use the shadow mapping method, which seems relatively easy but produces blocky and imperfect shadows. So I went searching for a nicer method, and eventually I came across something called shadow volumes. This seemed quite complex, but also very elegant, because it is purely mathematical and therefore produces geometrically exact shadows. Another advantage of shadow volumes is that self-shadowing comes for free, whereas shadow mapping needs extra care (bias tweaking) to avoid self-shadowing artifacts.

So, basically, a shadow volume is the region of space behind a model that the model blocks from a light source; everything inside that volume is in shadow. Of course, rendering just the shadow volumes wouldn't result in correctly displayed shadows. You need to render them using a stencil buffer; this method is called stencil shadowing. There are three variants of implementing stencil shadows: depth pass, depth fail (more commonly known as "Carmack's Reverse"), and exclusive-or. The exclusive-or method is the simplest, but it does not work properly when shadow volumes overlap. Depth pass is faster than depth fail, but depth fail has the critical advantage that it produces correct shadows when the camera itself is inside a shadow volume, whereas depth pass essentially produces inverted shadows in that case... So I ended up using both the depth pass and depth fail methods.
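As an aside, a two-sided depth fail stencil state in MonoGame looks roughly like this (a minimal sketch, not the engine's actual code; it follows the textbook convention where a non-zero stencil value marks a shadowed pixel, and which side increments or decrements depends on the winding of your volume mesh):

DepthStencilState shadowVolumeDepthFail = new DepthStencilState
{
	DepthBufferEnable = true,
	DepthBufferWriteEnable = false, // the volumes themselves must not write depth
	StencilEnable = true,
	TwoSidedStencilMode = true, // handle both face directions in a single pass

	// Faces wound one way: decrement when the depth test fails.
	StencilFunction = CompareFunction.Always,
	StencilDepthBufferFail = StencilOperation.DecrementSaturation,
	StencilPass = StencilOperation.Keep,
	StencilFail = StencilOperation.Keep,

	// Faces wound the other way: increment when the depth test fails.
	CounterClockwiseStencilFunction = CompareFunction.Always,
	CounterClockwiseStencilDepthBufferFail = StencilOperation.IncrementSaturation,
	CounterClockwiseStencilPass = StencilOperation.Keep,
	CounterClockwiseStencilFail = StencilOperation.Keep,
};

The volume geometry would be drawn with RasterizerState.CullNone so both sides reach the stencil buffer.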

Stencil shadows turned out to be probably one of the most complicated features I've ever implemented. First up was implementing the shadow volumes themselves. I pieced an implementation of the shadow volume together by browsing the web, without actually understanding the rather complex math behind it. While it may seem obvious, I think it is important to realize that you don't actually need to understand the math behind such a complex structure in order to use it correctly; you just need to understand how to use it and know what it represents. For instance, many people don't know how to compute a square root by hand either, yet they're able to use and understand square roots simply because they understand what they represent. Perhaps a more related example: I have no idea how the math behind quaternions works, but I use quaternions all over the place in the engine.

06 Sep 2018
One of the first attempts at implementing shadows. Note that the shadow volumes are being rendered to the stencil buffer in their entirety; none of the three stencil methods listed above are being used here. This is a good start though; it confirms the validity of the math behind the shadow volume.
17 Oct 2018
Another example of shadow volumes being rendered in their entirety

Now, how do we get the shadow to be projected only onto the surfaces that are in shadow, rather than rendering the whole volume? The answer is quite simple, but the technical implementation is not. The triangles of the shadow volume are always wound clockwise or counter-clockwise, exactly like those of a regular 3D model. This means that the triangles on the back of the volume, from the camera's perspective, are essentially rendered "in reverse" compared to the triangles on the front. The correct way of retrieving an accurate "shadow surface" is to have those triangles cancel each other out on the stencil buffer; when both a front-facing and a back-facing triangle are rendered to the same pixel, it means there is no shadow surface in between. Of course, you need to enable and disable the depth buffer accordingly in order to get this to work. Essentially, the only case where there actually is a shadow surface is when a model face lies somewhere inside the shadow volume. Again, using the depth buffer, we can detect this and set the stencil value for that particular pixel to 0, which indicates that the pixel should be in shadow for this light source.

Next up is actually rendering the scene. First, the entire scene is rendered without any light sources. Then the scene is rendered again, once per light source, with that light enabled, using additive blending and only that light's stencil buffer... You probably want to read that sentence a few times. The stencil buffers use 0 for pixels in shadow and 1 for pixels in light, for their corresponding light source. During such a rendering pass, the pixels with value 0 on the stencil buffer are skipped, as they are in shadow for that light source, so only the lit parts of the scene are rendered on top of the previously rendered scene. This produces accurate shadows with varying intensity, because when multiple light ranges overlap, multiple shadows are cast by a mesh that is within reach of those light sources. Those shadows then end up less intense, as the other light sources still project light onto them; the fewer light sources projecting onto a surface, the darker the surface will be.
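A rough sketch of that per-light pass, using MonoGame's stencil and blend states (the method names, the light collection, and the exact stencil convention here are simplified placeholders, not the engine's actual code):

private void RenderWithStencilShadows()
{
	GraphicsDevice.BlendState = BlendState.Opaque;
	RenderScene(light: null); // base pass: the whole scene without any light source

	foreach (Light light in lights)
	{
		// Fill the stencil buffer with this light's shadow volumes,
		// using a depth fail state such as the one sketched earlier.
		GraphicsDevice.Clear(ClearOptions.Stencil, Color.Black, 1f, 0);
		RenderShadowVolumes(light);

		// Re-render the scene additively, letting only pixels that are
		// not shadowed for this light pass the stencil test.
		GraphicsDevice.BlendState = BlendState.Additive;
		GraphicsDevice.DepthStencilState = new DepthStencilState
		{
			DepthBufferEnable = true,
			DepthBufferWriteEnable = false,
			StencilEnable = true,
			StencilFunction = CompareFunction.Equal,
			ReferenceStencil = 0, // pixels left untouched by the shadow volumes
			StencilPass = StencilOperation.Keep,
		};
		RenderScene(light);
	}
}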

15 Feb 2019
The first proper implementation of stencil shadows

I don't do all of this in the engine yet; I just render the scene twice, once without lights and once with all lights, which works fine until you use multiple overlapping light sources; you won't get the varying shadow intensity then. Rendering the scene per light source can also be demanding for the system, but it is definitely possible to optimize and speed up the rendering process, unless you use a ridiculous number of overlapping light sources, in which case you're probably better off ray-casting the whole thing.

A single light works well.
Multiple overlapping lights do not work perfectly yet; you would expect the shadows to be less intense, with a comparatively darker strip of shadow where the two shadows intersect.

One last thing about shadows... A rather interesting thing to point out is that this method of rendering essentially creates an optical illusion of sorts from the camera's perspective; it is not the model surface that has the shadow applied to it, what you are actually seeing is a part of the shadow volume itself.

Consider this simple scene.
The shadow volume for the projected shadow looks like this.
Let's zoom in a bit... This is the same scene from a different perspective.
If we pretend the camera is still in its old location as in the images above, then this is the "optical illusion" that's actually being rendered. (Note that this image may not be entirely accurate as I just edited it a little in order to reproduce the illusion, but you get the point.)

Understanding this phenomenon was essential to me understanding how shadow volumes are actually being used for this purpose.

I currently have only a few features left to implement, so I'm hoping that I can perfect these shadows one day.

Frame interpolation

As mentioned earlier, I was now using delta time correctly. This meant I was free to run my update (physics) loop at any desired speed without breaking the game, but what if I wanted the game to render at higher speeds for monitors with high refresh rates, without having to call the update loop more often and thereby sacrificing physics consistency and performance? The answer to this question turned out to be frame interpolation. It basically means your update loop runs at a fixed rate, in my case 60 times per second. Your render loop then runs more often, depending on how much the system can handle. My render loop runs at up to 300 times per second, five times as often as the update loop.

At first glance, calling the render loop more often than the update loop makes no sense; you would think the same frame would simply be rendered multiple times, but with frame interpolation this is not the case. What you do is essentially keep track of both the previous and the current state of all values such as translations and rotations, and then, inside the render logic, interpolate between the previous and the current values based on the "sub frame" value. The sub frame is a value between 0 and 1 representing how far we are in time between the previous and the current physics frame. The main loop of the engine takes care of this by measuring the time between update and render loop calls.
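Here's a minimal sketch of how a main loop can derive that sub frame value, assuming a 60 Hz physics rate (Update, PrepareRender, and Render are placeholders for the engine's own methods):

private const double PhysicsStep = 1.0 / 60.0;

private void RunMainLoop()
{
	System.Diagnostics.Stopwatch stopwatch = System.Diagnostics.Stopwatch.StartNew();
	double previousTime = 0;
	double accumulator = 0;

	while (true)
	{
		double currentTime = stopwatch.Elapsed.TotalSeconds;
		accumulator += currentTime - previousTime;
		previousTime = currentTime;

		// Run as many fixed physics steps as the elapsed time allows.
		while (accumulator >= PhysicsStep)
		{
			Update((float)PhysicsStep);
			accumulator -= PhysicsStep;
		}

		// The leftover time, as a fraction of a physics step, is the sub frame.
		float subFrame = (float)(accumulator / PhysicsStep);
		PrepareRender(subFrame); // interpolate all render states
		Render();
	}
}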

I must credit this excellent article called Fix Your Timestep! for introducing me to this concept. I came up with this way to interpolate my render states automatically:

public abstract class AbstractInterpolationState<TState>
	where TState : struct, IEquatable<TState>
{
	/// <summary>
	/// Indicates whether this state differs from its previous value.
	/// </summary>
	public bool IsChanged => !Physics.Equals(PhysicsPrevious);

	public TState Start { get; set; }
	public TState PhysicsPrevious { get; set; }
	public TState Physics { get; set; }
	public TState Render { get; set; }

	protected AbstractInterpolationState(TState start)
	{
		Start = start;
		PhysicsPrevious = start;
		Physics = start;
		Render = start;
	}

	public void PreUpdate() => PhysicsPrevious = Physics;

	public abstract void PrepareRender();
}

Here we have a generic abstract class providing base functionality for a struct that can be interpolated, such as a floating point number, a vector, or a quaternion.

public class Vector3State : AbstractInterpolationState<Vector3>
{
	public Vector3State(Vector3 start)
		: base(start)
	{
	}

	public override void PrepareRender() => Render = Vector3.Lerp(PhysicsPrevious, Physics, Game.SubFrame);
}

Vector3 is probably the most common struct you would be interpolating, as it is used to represent 3D positions and scaling.

Ideally though, you would want to implement PrepareRender() directly in AbstractInterpolationState<TState>, like so:

public void PrepareRender() => Render = PhysicsPrevious.Lerp(Physics, Game.SubFrame);

And then simply implement an ILerpable<T> interface and add it as a type constraint on TState in AbstractInterpolationState<TState>:

public interface ILerpable<T>
	where T : struct
{
	public T Lerp(T target, float amount);
}

This would remove the need to make the base class abstract and derive from it. Unfortunately though, none of the built-in types like float and the vector types implement such an interface, and it's not worth creating custom types for this purpose alone.

I currently use this system for a handful of types (vectors, quaternions, colors, and so on).

Here's how you would use it inside a game object class. You would only need to assign to PositionState.Physics to change the object's position; the rest is then done automatically.

public abstract class TranslatableGameObject : GameObject
{
	public Vector3State PositionState { get; set; }

	protected TranslatableGameObject(Vector3 positionStart)
	{
		PositionState = new Vector3State(positionStart);
	}

	public override void PreUpdate()
	{
		base.PreUpdate();

		PositionState.PreUpdate();
	}

	public override void PrepareRender()
	{
		base.PrepareRender();

		PositionState.PrepareRender();
	}
}

Now the game will look much smoother on monitors with high refresh rates, provided that you implement this for every visible change that happens on the screen. I use it for translations, rotations, scaling, colors, shaders, particles, lighting, and UI elements.
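For example, a rotation state following the same pattern could look something like this (simplified; spherical linear interpolation is the obvious candidate for rotations):

public class QuaternionState : AbstractInterpolationState<Quaternion>
{
	public QuaternionState(Quaternion start)
		: base(start)
	{
	}

	// Slerp rather than Lerp, so the rotation follows the shortest arc at constant speed.
	public override void PrepareRender() => Render = Quaternion.Slerp(PhysicsPrevious, Physics, Game.SubFrame);
}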

Transformable effect emitters

This is a simple system I came up with myself. Transformable effect emitters are invisible game objects in a 3D scene that apply effects to the game based on the camera's position. The emitters can have different shapes; they can be spheres, cylinders, or cuboids. An emitter has an intensity value ranging between 0 and 1, which can either be applied to the effect in full whenever the camera is in range, or scaled linearly based on the distance between the emitter and the camera. Perhaps in the future this could be expanded to include exponential falloff as well, and probably more shapes too. The system is currently used for applying post-processing shaders, screen shake, and game time modifiers (slow motion). Emitters can be attached to game objects, but can also be placed individually in the level editor.
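As a rough sketch of the idea (simplified, not the engine's actual classes), a spherical emitter's intensity boils down to something like this:

public class SphereEffectEmitter
{
	public Vector3 Position { get; set; }
	public float Radius { get; set; }
	public float MaxIntensity { get; set; } // between 0 and 1
	public bool LinearFalloff { get; set; }

	public float GetIntensity(Vector3 cameraPosition)
	{
		float distance = Vector3.Distance(Position, cameraPosition);
		if (distance >= Radius)
			return 0f; // camera out of range; the effect is not applied at all

		if (!LinearFalloff)
			return MaxIntensity; // full intensity anywhere inside the sphere

		return MaxIntensity * (1f - distance / Radius); // fade out linearly toward the edge
	}
}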

19 Jan 2020
Demonstration of spherical effect emitters attached to arrows shot by an enemy. The emitters apply slow motion and additional post-processing shading (red screen), and the effect intensity is based on their distance from the camera.

Writing a dissolve shader

20 Apr 2019
A dissolve shader applied to a spawning enemy

This was surprisingly easy. To achieve this effect, you basically sample a randomized cloud (noise) texture and clip pixels whose noise value falls below a certain threshold. This can be done using a single shader parameter (DissolveAmount in the example) ranging from 0 to 1 which indicates the dissolve intensity (0 is no dissolve (fully visible), and 1 is fully dissolved). You can also modify the color of the pixels when they fall within a certain range of the threshold to apply an "edge effect". Here's my implementation:

float GetDissolveValue(float2 textureCoordinate)
{
	float dissolveValue = tex2D(dissolveTextureSampler, textureCoordinate).r;
	clip(dissolveValue - DissolveAmount);

	return dissolveValue;
}

float4 ApplyEdgeEffect(float dissolveValue)
{
	if (DissolveAmount == 0)
		return float4(0, 0, 0, 0);
	return step(dissolveValue - DissolveAmount, 0.05f) * DissolveEdgeColor;
}

And here's a simplified example of how you would apply it to a pixel shader:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
	float dissolveValue = GetDissolveValue(input.TextureCoordinate);

	return tex2D(modelTextureSampler, input.TextureCoordinate) + ApplyEdgeEffect(dissolveValue);
}

And here's the randomized texture used for dissolving:

08 Jul 2019
Another demonstration of the dissolve model shader

Expanding the engine using external libraries

Eventually I realized I couldn't possibly implement every feature in my engine myself, so I introduced two external libraries to the engine: irrKlang and BEPUphysics.

irrKlang

When the game slows down, all the audio slows down with it, playing at a lower speed. This is a really cool effect in my opinion. The original audio system in MonoGame supports this, but it can only play audio at speeds ranging between 0.5 and 2 times the original. It also had some minor problems that caused audible glitches. Anyway, I went looking for an external audio library and found irrKlang. irrKlang does support slowing down audio pseudo-infinitely, and it is pleasant to work with. It contains everything you need in a 3D game, and it even comes with special effects that you can apply to audio in real time, which is a very cool feature.

BEPUphysics

Because my own legacy collision detection and handling was becoming too slow, buggy, and limited, I decided to integrate BEPUphysics into the engine. BEPUphysics is a 3D physics engine; it is really fast and comes with a lot of cool features that I could never implement myself. I had all my physics handling code converted to use BEPUphysics by April 2019.

19 Jan 2020
The first implementation of BEPUphysics happened in March 2019 and looked somewhat like this re-recording.

No more rewriting

...At least not of the entire engine. I'm still regularly rewriting specific components of it. I think code quality and consistency are very important in large projects, and I regularly maintain older code in order to keep everything together nicely.

So I now had a solid engine that wouldn't end up in the previous two scenarios where I had to rewrite everything. The main thing I learned from this is: setting up generic and reusable systems is good, but don't overdo it. Keep it maintainable and usable, and create a clear boundary between an abstraction layer and a more functional implementation.

Other features I implemented around this time include a completely rewritten level editor, a UI system, a particle system supporting billboard sprites and regular colored triangles, and an improved render engine with new shading and various post-processing effects.

Using multiple engines

I've thought about using MonoGame for 2D games as well. For quite a while I had already realized I probably wasn't going to use GameMaker for any large projects anymore. I still do love GameMaker, and I will probably still use it for small games, such as games made specifically for game jams. But knowing the advantages of C#, working with GameMaker and GameMaker Language (GML) just isn't fun for me when I want to create something maintainable and reusable. This is mainly for the following reasons:

Okay, that turned out to be quite a long list.

I did want to keep making 2D games, however, so in mid 2019 I had the following idea. I split the project in two again; an engine and a game. Then I split the engine itself into three parts:

On top of that, I created two test games using the 2D engine, so now I had six projects. The 3D engine is still the largest and most prominent project though.

The end?

Probably not, but this is where I am right now. The next steps on my list mainly consist of rewriting certain components in the engines, such as the UI, the level editor, and the way the 3D engine communicates with BEPUphysics. I'll probably rewrite the physics part entirely, as BEPUphysics v2 recently came out and I really want to give that a try. I also want to fix the shadows to support multiple overlapping lights. After all of that is done, perhaps I should make a game with it... Once the engine is somewhat finished.

However, this article is getting quite long, so I'll probably write separate articles about future development progress.

Here are some more videos:

14 Feb 2019
Particle effects in slow motion
01 Apr 2019
Particles and post-processing shading
18 Apr 2019
Enemies
18 Apr 2019
Post-processing shaders
18 Apr 2019
Level editor
08 Oct 2019
Some level