Wednesday, 18 February 2015

Crush! Breakdown Part 2 - Collision World

In part one of the Crush! breakdown I outlined the scene graph used to transform and render game entities, as well as some of the controller classes used to manipulate nodes within the scene. Entities are created by attaching a combination of components to a node, one of which is the collision component - used to make nodes react to external forces and to resolve collisions between bodies. My instinct for this kind of thing is usually to reach for a library such as Chipmunk or Box2D, but experience has taught me that this is often overkill for a small project, and will probably not yield great results if the game isn't inherently physics based (in the way Angry Birds is, for example). On top of this I'm always willing to investigate new areas of programming, so giving the subject of physics and collision handling some serious study was an idea which appealed to me.
    To avoid a daunting scope I reduced the complexity as much as possible by looking carefully at what would actually be needed. After some cogitation I decided that all I needed were rectangular bodies with no rotation, which dramatically reduced what collision detection would have to handle. Normally I would give each body a mass property so that its acceleration could be calculated from the current force acting upon it, using Newton's second law of motion: force = mass * acceleration. Even this could be reduced, so that a collision body only needed a velocity vector, a position, and a bounding box.

struct CollisionBody
{
    vector2 m_velocity;
    vector2 m_position;
    floatRect m_boundingBox;
};

Each frame the body's position is updated by adding the current velocity to it. The velocity is adjusted by either applying an external force (by adding another vector to the velocity), or as a result of a collision. Collision bodies exist within a CollisionWorld class, which is responsible for creating bodies, detecting collisions between them, and applying any resulting forces. The CollisionWorld class also acts as one of the controller classes, and so the instance lives alongside the other controllers in the GameState class. Apart from the factory functions of the CollisionWorld class, the beef of the code exists inside:

void CollisionWorld::update(float dt)
{
    //test which bodies intersect and mark as collision pair
    m_collisions.clear();
    for(const auto& bodyA : m_bodies)
    {
        for(const auto& bodyB : m_bodies)
        {
            if(bodyA.get() != bodyB.get())
            {
                if(bodyA->m_boundingBox.intersects(bodyB->m_boundingBox))
                {
                    m_collisions.insert(std::minmax(bodyA.get(), bodyB.get()));
                }
            }
        }
    }

    //for each collision pair calculate manifold and resolve collision
    for(const auto& pair : m_collisions)
    {
        auto manifold = getManifold(pair);
        pair.second->resolve(manifold);
        manifold.z = -manifold.z; //negate the penetration so the other body resolves in the opposite direction
        pair.first->resolve(manifold);
    }

    //apply gravity to each body and perform a physics step
    for(auto& body : m_bodies)
    {
        body->applyGravity();
        body->step(dt);
    }
}

The update function is performed in three main steps. First each body is tested against the others for intersection using its bounding box. If there is an intersection the pair of bodies is inserted into a std::set using std::minmax(), which makes sure that each pair is inserted only once. Potentially this step could be optimised with some kind of spatial partitioning to make sure bodies are only tested against other nearby bodies, but for a small game it wasn't needed, and I omitted it for the sake of simplicity.
    The second step is to calculate the collision manifold of each intersecting pair. The manifold contains a normalised vector perpendicular to the intersected surface, along with a value stating the depth of intersection - usually the minimum information needed to resolve a collision between two objects. Rectangle-only collision simplifies this vastly, as the normal can only be one of four possible vectors, one for each side of the rectangle. In my calculation function I took advantage of the fact that SFML's rectangle class returns the intersection area as a new rectangle, and you can see the full implementation here. Handily I could fit the two component normal vector along with the penetration depth into a single sf::Vector3f, which made it an easy value to pass around. If you're interested in manifold generation for more complex interactions, there is an interesting article here which I found worth reading. The last step of the CollisionWorld update function applies a pre-defined gravity force to each body (the gravity value is passed to the CollisionWorld constructor), and executes each body's step() function.
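As an illustration of the second step, the manifold calculation might look something like this (a sketch only, using the pseudocode types from above - the full implementation is linked in the text):

sf::Vector3f CollisionWorld::getManifold(const std::pair<CollisionBody*, CollisionBody*>& pair)
{
    //the overlapping area of the two bounding boxes
    sf::FloatRect overlap;
    pair.first->m_boundingBox.intersects(pair.second->m_boundingBox, overlap);

    //the direction from the first body to the second decides the sign of the normal
    auto direction = pair.second->m_position - pair.first->m_position;

    sf::Vector3f manifold;
    if (overlap.width < overlap.height)
    {
        //the shallower penetration is horizontal, so the normal lies on the X axis
        manifold.x = (direction.x < 0) ? -1.f : 1.f;
        manifold.z = overlap.width;
    }
    else
    {
        //the shallower penetration is vertical, so the normal lies on the Y axis
        manifold.y = (direction.y < 0) ? -1.f : 1.f;
        manifold.z = overlap.height;
    }
    return manifold;
}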

Each body has two important functions, resolve() and step(). The resolve() function is used to decide how the body should react to the collision manifold data. As each body type needs to react slightly differently, this is where behaviour customisation is applied. Each body has a currently active state defining its behaviour at that point in time. I took this idea from the state pattern (again from Game Programming Patterns), and created a BodyBehaviour class from which body type specialisations are inherited. This allows collisions to resolve themselves in specific ways, such as water being absorbent, or the ground being solid - as well as giving bodies the opportunity to raise body specific events. When a player body is destroyed, for example, a PlayerDied event can be raised. Physics values such as gravity and friction can also be intercepted by the active behaviour and modified if necessary - velocity vectors reflected about the manifold normal, or penetration values negated.
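A minimal sketch of that interface might look like this (the method and class names here are assumptions - the real behaviours also raise events and intercept physics values):

class BodyBehaviour
{
public:
    explicit BodyBehaviour(CollisionBody& body) : m_body(body){}
    virtual ~BodyBehaviour() = default;

    //react to the collision manifold in a body specific way
    virtual void resolve(const sf::Vector3f& manifold) = 0;

protected:
    CollisionBody& m_body;
};

//a solid body simply moves out of the intersection along the collision normal
class SolidBehaviour final : public BodyBehaviour
{
public:
    explicit SolidBehaviour(CollisionBody& body) : BodyBehaviour(body){}

    void resolve(const sf::Vector3f& manifold) override
    {
        m_body.m_position += sf::Vector2f(manifold.x, manifold.y) * manifold.z;
    }
};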
    The body's step() function then applies any changes made by resolve(), integrating the velocity over the current time step and moving the body. It also does some simple bounds checking and moves any bodies which may have tunneled out of the play area to a reasonable place at the top of the world. If the body is attached to a node in the scene graph, that node's position is updated in the scene. Here is a video of the initial physics setup:


The red blocks are solid bodies, green are enemies, and blue is the player.

This is a very simplified overview of the collision system in Crush!, admittedly, but it's difficult to go into much detail in a single post. Crush! is open source, however, and the full collision code can be seen here.


Tuesday, 17 February 2015

Crush! Breakdown Part 1 - Scene setup

After spending any amount of time on a project I like to look back over what I did and break it down in a kind of postmortem. I find a little retrospection helps me consider what I've done wrong, so that I can learn from the experience before moving on. The starting point for this particular project was creating a scene and getting it rendering in a modular and extensible way. I'm actually pretty pleased with what I did, although I do wish I had considered networking support early on in the design, as it really needs to be baked in from the start (and hence will probably never be added to Crush!).
    I started with the concept of a scene graph - a series of parented nodes which allow draw calls and transformations to be passed across siblings and down through children. Scene graphs are well documented and there are resources all over the internet for learning about them, so I'll not go too far into detail here. There is even an example in the SFML book, which I used as a starting point as I was using SFML as the main library. The scene graph in Crush! varies from the book's implementation, however, in that instead of relying on inheritance for distinguishing node types it takes a component based approach. The scene graph exists to maintain the transformations of each scene node, and allows rendering of the scene in as compact a way as possible. The actual drawing of node representations and the behaviour of nodes (including how they are transformed in the scene) is left to the components, so each node can behave independently, depending on the collection of components attached to it.
    The entire node graph is kept inside a class which represents a scene. This class is essentially the root node of the graph, with a few key differences to implement ownership semantics. As well as the graph, the Scene class owns any lights which may exist, along with the cameras used for rendering the scene. This means the Scene class can properly set up any views and perform lighting calculations each frame, before moving down the graph and rendering any drawables attached to the nodes. Internally the Scene's draw function looks a bit like this (in pseudo code):

Scene::draw(RenderTarget rt, States states)
{
    states.shader = m_lightingShader;
    for(const auto& l : m_lights)
    {
        m_lightingShader.setParameter(lightParam, l.property);
    }

    rt.setView(m_activeCamera.getView());
    m_sceneGraph.draw(rt, states);
}

I'll cover the actual lighting and camera set up in another post. For now it's enough to know that the scene is responsible for the lights and cameras which have been created (and attached to nodes as part of the component strategy), which it then uses to update the shader system each frame, before drawing the scene graph. This also nicely encapsulates much of the rendering, so that externally the entire scene can be drawn at once. The overall structure of the game uses a state stack, and the current scene is a member of 'GameState' (other states being PauseState, MenuState and so on), so when the GameState is created a new scene is built from information loaded from a map file, and drawn with

m_scene.draw(m_renderWindow, states);

With little else to do to the scene the code in GameState can be kept relatively clean and easy to read, with all the implementation details tucked away inside the Scene class.

Using a component based approach with the scene nodes also helps with this, as each component can live with the class to which its details are relevant, without cluttering up the Node class itself. I already stated that cameras and lights are components, the Camera class being not much more than a wrapper around sf::View, and Light a small struct which contains colour and falloff values. These can be attached to nodes so that they take on any transformations as the node moves around the scene, without having to keep their own transform specific data. Lights and cameras both exist only within the scene, so it makes sense that the Scene class should own all the instances, and pass out references to them should they need to be modified. For example:

auto& light = m_scene.createLight(colour, falloff);
Node::Ptr node = std::make_unique<Node>();
node->attachLight(light);
//do other stuff to node
m_scene.addNode(std::move(node));

The same goes for cameras. Internally the Scene class creates the light (or camera), providing that certain criteria are met, and returns a reference to it so that it can be modified if necessary, and then attached to a node. Node::Ptr is a typedef for std::unique_ptr<Node> as nodes will usually need to be dynamically added and removed from the scene as game play progresses. Adding the node to the scene allows the Scene class (and, internally, the scene graph) to take ownership of the node, which is why I use a unique_ptr as opposed to a shared_ptr.
    The other main components used in the game are drawables - classes which inherit sf::Drawable - and collision bodies. I used sf::Drawable as the class type for rendering nodes rather than sf::Sprite, as most of the drawables I used were custom classes, such as the water effect or animated sprites. Collision bodies make up a very basic physics engine which only supports rectangular collision detection with no rotation. For this game it was enough, and using a full blown system such as Box2D seemed overkill. It also meant that I didn't have to worry about unit conversion and could keep all values within a single domain. Collision bodies all belong to a 'CollisionWorld' class which takes care of all the physics simulation, collision detection and collision resolution. Bodies can be requested from the CollisionWorld in a similar way to how lights/cameras are requested from the Scene class. The returned bodies remain owned by the collision world, and references are attached to the nodes as needed, so that the nodes can be transformed and updated as bodies move around the world and interact with each other. I'll go into full detail of the CollisionWorld class in another post.

This was enough to create a renderable scene, and make the scene nodes interactive whilst remaining reasonably decoupled from the other classes. I pictured it as a set of playing pieces laid out on a game board, ready for the player to command. To be able to manipulate these playing pieces without directly hooking up too much code, I turned to the observer pattern, as described in Game Programming Patterns by Robert Nystrom (although he's far from the first to write about it, of course). Here is, perhaps, where I would make a slight change in hindsight. While the pattern worked very well it did become a little spaghetti-like in places, and it isn't always obvious which classes are observing which - in future I would perhaps replace this pattern with a message bus.
    In this instance, though, I followed through with the idea of the scene nodes being 'observable', watched by a set of 'controller' classes with the ability to manipulate the playing pieces, each responsible for their own part of the game. Controller classes include the player, the scoreboard, the audio system, the physics world and a controller responsible for map data loaded from an external file. This also provided handy encapsulation for drawable objects, such as the player controller looking after an animated sprite and making sure the correct animations are played, or the map controller being responsible for creating the world geometry which makes up the scenery. Each of these controllers can then provide a reference to drawable items which are attached to the corresponding nodes in the scene graph. The scene graph needs to know nothing of the internal implementation of these drawables, only how to draw them. The controllers need to know little, if anything, of each other either; they just watch the scene and wait for events to be raised via the observer pattern. If an event is pertinent to a particular controller, then that controller will act on it. For example if a PlayerDied event is raised then the audio controller will play a specific sound, the scoreboard controller will reduce the number of lives and points, and the player controller will reset the player's position.
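In outline the pattern looks something like this (a rough sketch - Crush!'s actual observer and event types carry more data):

#include <vector>

//events raised by scene nodes when something notable happens
enum class GameEvent { PlayerDied, NpcCrushed };

class Observer
{
public:
    virtual ~Observer() = default;
    virtual void onNotify(const GameEvent& evt) = 0;
};

class Subject
{
public:
    void addObserver(Observer& o) { m_observers.push_back(&o); }

protected:
    //called by the subject when an event occurs
    void notify(const GameEvent& evt)
    {
        for (auto o : m_observers) o->onNotify(evt);
    }

private:
    std::vector<Observer*> m_observers;
};

In this design the scene nodes would inherit Subject, while controllers such as the audio system inherit Observer and register themselves with the nodes they care about.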
    Finally, to enable the controller classes to manipulate the scene, I used a command queue which is more or less identical to that found in the SFML Game Development book. Each controller class keeps a reference to the command stack, so that when it needs to update the scene it can create a command targeted at a specific node or set of nodes, and place it on the stack. At the beginning of each frame the entire command stack is executed so that the scene is updated. After that the collision / physics world is updated and collisions resolved, the controllers respond to any events raised by the updated state, and finally the entire scene is drawn. The flow within the GameState class then looks like this:

while(!m_commandStack.empty())
    m_scene.doCommand(m_commandStack.pop());

m_collisionWorld.update(dt);
m_player.update(dt);
m_scoreboard.update(dt);
m_audio.update(dt);

m_scene.draw(m_renderWindow, states);
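A command in this style is little more than a target filter and a function to apply to matching nodes (a sketch following the SFML book's design - the category flag and node helper here are assumptions):

#include <functional>

struct Command
{
    //bitmask of the node categories this command applies to
    sf::Uint32 category = 0;
    //the action to perform on each matching node
    std::function<void(Node&, float)> action;
};

//example: a controller nudging all player nodes to the right
Command cmd;
cmd.category = Category::Player; //assumed category flag
cmd.action = [](Node& node, float dt)
{
    node.move(50.f * dt, 0.f); //hypothetical helper
};
m_commandStack.push(cmd);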


Crush! is open source, so if you want to take a look at the final implementation, or just have a play you can get it from the Github page.

Part 2: Collision

Monday, 16 February 2015

Crush! A 2D platformer made in SFML

Yet again many months seem to slip by while this blog goes neglected... although this time for good reason! Pretty much ever since my last post (which has since been integrated into the official Gameplay samples :D) I have been hard at work on a two player competitive platform game named Crush. The aim of the game is to crush all the bad guys by dropping or sliding heavy crates into them and, in two player, to crush more than your opponent whilst vying for precious time with the Magic Hat! The longer a player wears the hat the more points they are awarded at the end of the round. Here's a (slightly outdated) video of it in action:



The game itself is far from complete, but I've decided to release it open source in its current state to get some wider opinion on it. Currently the source is available from Github, although there are no binaries yet. The Windows version (assuming you choose to use the included Visual Studio project) also includes source written in C# / .NET 4 for a level editor and a sprite sheet animation data editor. I've made a short video which briefly covers how they work:



In the vein of my previous project Space Racers I plan to write a few blog entries about the code design and how the mechanics of the game work, as well as perhaps reflecting on what I haven't got right, and how I'll address that next time round. As usual all feedback is welcome - there's a thread on the SFML forums here.

The first part of the breakdown is now available here.

Friday, 12 September 2014

Water in OpenGL and GLES 2.0: Part 4 - Blending it all together

If you've been following the previous three parts of this article then by now you must be itching to see how the fruits of your labour are going to look, so let's dive right in. To get a hint of the final outcome we can modify the watersample.frag file so that gl_FragColor is a straight blend of the reflection and refraction images:

gl_FragColor = mix(refractionColour, reflectionColour, 0.5);

This performs a 50/50 mix of the two images, with a final result which looks like a slightly odd frozen lake.



This is nice, but we can do better! For one thing the amount of blending each fragment receives should vary based on the angle of the camera's eye position relative to any given point on the water plane. That is, the more directly we look at the water, the more transparent it should appear, and the more of the refraction image should be shown; conversely, the shallower the angle of observation, the more reflective the surface should be. This is done by approximating the Fresnel term, a floating point number calculated for any given fragment, which replaces the constant 0.5 value in the mix() function. There are a variety of methods of doing this, all of which (as far as I can tell) require a normal vector representing the water's surface normal at any given point - so that we can measure the angle between the camera's viewpoint and the fragment by taking the dot product of the eye position with the normal vector. To start with we could use a single up facing vector which represents the entirety of the plane, but here is a good opportunity to add some extra detail to the water's surface.
    Using a normal map we can store a whole range of normal vectors, mapped across the surface of the plane, each representing a slightly different angle producing a perturbation of the surface. As an added bonus the red and green channels of the normal map can be used to create a slight distortion in both the reflection and refraction images, adding another level of detail.
    To map the normal texture to the water plane we need to make some modifications to the watersample shaders. First we need to add the texture coordinate attribute a_texCoord to the vertex shader, which is automatically passed in by Gameplay. Then in the main() function pass the value directly to a new varying variable v_texCoord so that it is available in the fragment shader. As well as adding the new v_texCoord to the fragment shader, we also need to add a sampler uniform u_normalMap so that we can pass in the normal texture. To bind the actual texture to the uniform we don't need to touch the project's code at all: Gameplay provides a nice auto-binding mechanism, allowing us to pass the texture in simply by editing the watersample.material file. Add

sampler u_normalMap
{
    mipmap = true
    wrapS = REPEAT
    wrapT = REPEAT
    minFilter = LINEAR_MIPMAP_LINEAR
    magFilter = LINEAR
    path = res/images/water_normal.png
}

to the water material definition, or look at the article source code for part four. Assuming the path points to a valid image file the texture will automatically be loaded and bound to the shader when the program starts. Once this is all set up we can return to the fragment shader, and start using the normal data stored in the texture.
    Immediately in the main() function we sample the normal map, and convert it to normalised values:

vec4 normal = texture2D(u_normalMap, v_texCoord * textureRepeat);
normal = normalize(normal * 2.0 - 1.0);

textureRepeat is a constant value which allows tiling of the texture to better fit the water plane. Set it to 2.0 to make the texture repeat twice in both the S and T direction, 12.5 to make it repeat 12.5 times and so on. Before we start calculating any reflection and blend parameters, let's add some distortion to the output.

//distortion offset
vec4 dudv = normal * distortAmount;

//refraction sample
vec2 textureCoord = fromClipSpace(v_vertexRefractionPosition) + dudv.rg;
textureCoord = clamp(textureCoord, 0.001, 0.999);

distortAmount reduces the amount of distortion added, as too much can easily ruin the effect, and is typically a small number such as 0.05. The red and green values of dudv are then added to the texture coordinates, offsetting them slightly, before clamping the coordinates within a reasonable range. The refraction texture is then sampled in the normal way with the newly offset coordinates, and the process repeated for the reflection texture. The output should now be a nice wavy distorted image (assuming you're using the normal map texture supplied with the article source, although you can use any normal map texture you like).
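The reflection look-up mirrors the refraction one, reusing the same offset (a sketch - v_vertexReflectionPosition and u_reflectionTexture are the values added in part three):

//reflection sample, reusing the same distortion offset
textureCoord = fromClipSpace(v_vertexReflectionPosition) + dudv.rg;
textureCoord = clamp(textureCoord, 0.001, 0.999);
vec4 reflectionColour = texture2D(u_reflectionTexture, textureCoord);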



After the reflection and refraction textures have been sampled, we are now ready to approximate the Fresnel value, and use it to blend the textures together. To do this we need the eye position relative to the current vertex, so we can take the dot product of it with the current normal value. The watersample vertex shader needs two new uniform variables

uniform mat4 u_worldMatrix;
uniform vec3 u_cameraPosition;

and a new varying

varying vec3 v_eyePosition;

so that the calculated position can be passed along to the fragment shader. Gameplay provides the worldMatrix and cameraPosition values for us as standard, and we can auto bind these in the material file the same way as we did the normal map, which saves having to modify the project code:

u_worldMatrix = WORLD_MATRIX
u_cameraPosition = CAMERA_WORLD_POSITION

Then, in the main() function of the vertex shader, we can calculate the eye position

v_eyePosition = u_cameraPosition - (u_worldMatrix * a_position).xyz;

With the eye position available in the fragment shader we can begin to use it to calculate the fresnel value. Before we can use it, however, the eye position needs to be converted to the tangent space coordinates used by the normal map (or we could just use an object space normal texture - but that would upset the distortion factor). Due to the fact the water plane is fixed horizontally we can use a set of constant vectors to represent the plane's normal, tangent and bitangent vectors (if the plane was oriented in any other way we'd probably have to pass these values in either as an attribute or a uniform value), and use them to move the eye position into tangent space

const vec4 tangent = vec4(1.0, 0.0, 0.0, 0.0);
const vec4 viewNormal = vec4(0.0, 1.0, 0.0, 0.0);
const vec4 bitangent = vec4(0.0, 0.0, 1.0, 0.0);

vec4 viewDir = normalize(vec4(v_eyePosition, 1.0));
vec4 viewTanSpace = normalize(vec4(dot(viewDir, tangent), dot(viewDir, bitangent), dot(viewDir, viewNormal), 1.0));

then create a reflected vector of the view and dot it with the normal to get our approximated Fresnel term

vec4 viewReflection = normalize(reflect(-1.0 * viewTanSpace, normal));
float fresnel = dot(normal, viewReflection);

we now have our value to feed into the mix function:

gl_FragColor = mix(reflectionColour, refractionColour, fresnel);

Load up the scene and you should see the water really beginning to take shape. Moving around the scene you'll notice the blending of the reflection and refraction maps changes to match your view. One thing still isn't right though: the water appears frozen. We can change this with a simple new uniform in the fragment shader

uniform float u_time;

This is simply going to be a floating point value which increases over time. In the article's source folder there is a small utility class called Timer, which abstracts the Gameplay clock, although you can use getGameTime() directly if you prefer. Create a private const function to return its value, preferably divided by some amount (else the animation will run waaay too fast), and use it to bind the elapsed time to the new shader uniform. In the fragment shader add the time to the coordinates of the normal map look up.

vec4 normal = texture2D(u_normalMap, v_texCoord * textureRepeat + u_time);

This will have the effect of offsetting the normal map texture, scrolling it across the surface of the plane, and creating a simple yet pleasing animation. If you get odd stretched lines across the surface make sure to check that the sampler settings in your water material have wrapS and wrapT set to repeat.
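For reference, the C++ side of this might look something like the following (a sketch - the function name and divisor here are assumptions; see the article's Timer class for the real thing):

//private const function returning the scaled elapsed time
float WaterSample::m_getTimeValue() const
{
    //divide the game time down so the scrolling runs at a sensible speed
    return static_cast<float>(gp::Game::getGameTime()) / 14000.f;
}

//in initialise(), alongside the other water material bindings
waterMaterial->getParameter("u_time")->bindValue(this, &WaterSample::m_getTimeValue);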

That pretty much sums up what I set out to describe in this article, but there is plenty more which could be added to improve the effect. For instance no lighting is taken into account in the fragment shader which, once added, could be used in conjunction with the normal map to calculate specular highlights on the surface of the water. The water also looks very clean. It is entirely possible to calculate the depth of the water and blend it with a colour so that it appears darker and murkier the deeper you go.



Here's a short video of the final version of the project, with the water effect running on my Moto G with Android 4.4.2.


References:
Eric Pacelli
Lauris Kaplinski
Riemer's XNA page

Source Code:
Github page

Previous Parts:
Part One
Part Two
Part Three


Thursday, 11 September 2014

Water in OpenGL and GLES 2.0: Part 3 - Reflection

Continuing from the previous part of this article on creating a water effect in Gameplay3D, in this part we'll cover creating reflections on the surface of the water. It is important that you have read and completed part two, and that you have the refraction buffer drawing, previewed, and projected on to the water plane. This is because, before we can continue, we need to replicate the refraction buffer with a new member *m_reflectionBuffer, as well as a new sprite batch *m_reflectBatch to draw the preview. Add these to the project, initialise them in the initialise() function, release and delete them in the finalise() function, and update the render() function so that the scene is drawn to the new reflection buffer, and the reflection buffer preview is drawn next to the preview window of the refraction buffer - all in the same way as the refraction buffer.
    Once you have the scene set up we can start to modify the process, so that instead of getting a duplicate of the refraction buffer, we actually get a reflection. Firstly modify the clip plane settings in the render function right before drawing the reflection buffer:

m_clipPlane.y = 1.f;
m_clipPlane.w = -m_waterHeight;

By inverting the normal direction and the plane height the plane now faces the opposite direction. When you compile and load the scene you should see in the preview window that the grass is kept, and that the bottom of the pond is clipped away instead. This is because we want to reflect the scene as it appears above the water. Next we need to consider how to invert the image vertically, as a reflection would appear in the water. A reflection isn't simply the camera's image turned upside down, however. What we see is, in fact, what would be seen by a camera below the water plane, targeted at the same point as the scene's camera:


If the scene's main camera is camera A, then the reflection it sees is the same as if the scene were viewed from camera B. If you've been reading the reference articles linked at the bottom of these posts, you'll see each one offers its own implementation of this camera set up. If we were using raw OpenGL the preferable way would be to use a reflection matrix but, as this article is based around the Gameplay framework, the option is not particularly viable. An alternative would be to scale the entire scene in the Y axis by -1 during the reflection pass, which is possible, but has the drawback of not easily being able to store the WorldViewProjection matrix (more on this shortly). Finally we could create a second camera in place of camera B on the diagram, by taking the forward and right vectors of the scene camera, computing the cross product of the two vectors to find the up vector, and using them to compute a new LookAt matrix each frame, to orient camera B in the right direction. The latter seems a little heavy to do each frame so I settled on (perhaps controversially) creating a second camera in the scene, and having it follow the movements of the main camera, only mirrored about the water plane. In the initialise() function directly after creating the scene camera:

//add a second camera to draw the reflections
m_reflectCamNode = gp::Node::create("reflectCamNode");
m_reflectCamNode->setTranslation(camStartPosition.x, -camStartPosition.y, camStartPosition.z);

camPitchNode = gp::Node::create();
gp::Matrix::createLookAt(m_reflectCamNode->getTranslation(), gp::Vector3::zero(), gp::Vector3::unitY(), &m);
camPitchNode->rotate(m);
m_reflectCamNode->addChild(camPitchNode);

camera = gp::Camera::createPerspective(45.f, gp::Game::getInstance()->getAspectRatio(), 0.1f, 150.f);
camPitchNode->setCamera(camera);
SAFE_RELEASE(camera);
SAFE_RELEASE(camPitchNode);

This is pretty much a duplicate of the scene camera creation code although, crucially, the Y component of the start point vector is negated, so that the initial LookAt matrix is a reflection of that of the scene camera. Next we need to modify the mouse move event, so that the new camera's pitch movement is inverse to that of the scene's main camera, while the yaw remains the same.

m_reflectCamNode->rotateY(xMovement);
m_reflectCamNode->getFirstChild()->rotateX(-yMovement);

And, of course, we need to make sure that it follows the translation of the main camera in the update() function

auto position = m_cameraNode->getTranslation();
position.y = -position.y + m_waterHeight * 2.f;
m_reflectCamNode->setTranslation(position);

while making sure the Y position is reflected about the water plane by negating it, and adding the plane height multiplied by two. Now when drawing the scene to the reflection buffer we can switch cameras by making a copy of the active scene camera, making the reflection camera the new active scene camera, rendering the reflected scene and then restoring the original camera before drawing the final pass.
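In the render() function that switch might look something like this (a sketch - the reflection buffer binding and clip plane update described above happen where indicated):

//store the default camera and switch to the reflection camera
auto defaultCamera = m_scene->getActiveCamera();
m_scene->setActiveCamera(m_reflectCamNode->getFirstChild()->getCamera());

//...bind the reflection buffer, set the clip plane and draw the scene here...
m_scene->visit(this, &WaterSample::m_drawScene, false);

//restore the original camera before drawing the final pass
m_scene->setActiveCamera(defaultCamera);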
    The preview window for the reflection buffer displays the edges of the pond as seen from below, from the reflection camera's point of view, and is ready to be projected onto the water plane. In part two of the article the last step was to project the refraction buffer on to the plane via the water shader. We need to do the same thing again, only this time we are using a different camera, so we need to use the corresponding WorldViewProjection matrix to generate the texture coordinates. While the reflection camera is active we can store the plane's WorldViewProjection matrix in a member variable

m_worldViewProjectionReflection = m_scene->findNode("Water")->getWorldViewProjectionMatrix();

It is important that we do this here because *the matrix is only valid while the reflection camera is active* - which is why we store it in a member variable. Adding a private function which returns a const reference to m_worldViewProjectionReflection will then allow us to bind it to the water shader in the same way as the other shader-bound variables which, hopefully, you should now be familiar with.
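A sketch of that accessor and binding (the function name here is an assumption):

const gp::Matrix& WaterSample::m_getReflectedWvpMatrix() const
{
    return m_worldViewProjectionReflection;
}

//in initialise(), alongside the other water material bindings
waterMaterial->getParameter("u_worldViewProjectionReflectionMatrix")->bindValue(this, &WaterSample::m_getReflectedWvpMatrix);

All that's left to do, then, is modify watersample.vert and watersample.frag with uniforms for the new projection matrix and the reflection buffer sampler, in the same way in which we added the refraction buffer previously.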

In the vertex shader:

uniform mat4 u_worldViewProjectionReflectionMatrix;
varying vec4 v_vertexReflectionPosition;

and

v_vertexReflectionPosition = u_worldViewProjectionReflectionMatrix * a_position;

and in the fragment shader we sample the reflection texture with the new coordinates

textureCoord = fromClipSpace(v_vertexReflectionPosition);    
vec4 reflectionColour = texture2D(u_reflectionTexture, textureCoord);

To see the result we can assign reflectionColour directly to gl_FragColor. Notice how, because we projected the texture as if it were from the reflection camera, the image is automatically flipped! You should have something which looks like a flat, glossy mirror, albeit with some slight artifacting due to the lower resolution render buffer.


Now we are most of the way there. The only things left to do are to blend the reflection and refraction maps in the watersample fragment shader, and add some animated waves to make the scene look a bit more natural. I will cover that in the next, and final, part of this article.

Part Four

References:
Eric Pacelli
Lauris Kaplinski
Riemer's XNA page

Source Code:
Github page

Previous Parts:
Part One
Part Two

Wednesday, 10 September 2014

Water in OpenGL and GLES 2.0: Part 2 - Refraction

In the first part of this article I outlined the technique for creating a water effect in OpenGL / GLES which is cheap enough to run on a range of mobile devices. This part of the article looks at the first step toward implementing the effect: rendering the refraction texture.
    This requires everything in the scene below the water level to be rendered to a frame buffer - an off screen render target - whose texture can then be used to feed a sampler uniform in the water material's fragment shader. The fragment shader can then project this texture on to the plane in the scene, while also blending and distorting it to create the illusion of water. I'll assume that you have read the first part of the article, and have the example scene set up in your editor of choice, and that you also have the accompanying source code from github.

First let's set up a render buffer to draw the refraction data to, and a sprite batch so that we can preview the buffer's contents on screen. Gameplay provides a FrameBuffer class, and a SpriteBatch class, which we'll use for the task. Add two member variables *m_refractBuffer and *m_refractBatch, and then initialise them in the initialise() function:

m_refractBuffer = gp::FrameBuffer::create("refractBuffer", bufferSize, bufferSize);

auto refractDepthTarget = gp::DepthStencilTarget::create("refractDepth", gp::DepthStencilTarget::DEPTH, bufferSize, bufferSize);
m_refractBuffer->setDepthStencilTarget(refractDepthTarget);
SAFE_RELEASE(refractDepthTarget);

m_refractBatch = gp::SpriteBatch::create(m_refractBuffer->getRenderTarget()->getTexture());

Don't forget to release the frame buffer with SAFE_RELEASE in the finalise() function, as well as delete the sprite batch with SAFE_DELETE. Notice how we can use the texture member of the frame buffer to create the sprite batch, which is useful for previewing the buffer's contents. If you've been studying the article's source code, you'll have noticed that the frame buffer's size is not in fact the same as that of the main window, nor is it even the same aspect ratio. I deliberately chose 512 x 512 as the buffer size because many mobile devices only support power of two texture dimensions. Having experimented on a few android devices, I've found that there's a good chance the water will just appear as a black, empty hole when using textures or frame buffers with non-power of two dimensions. On the other hand you can probably use any resolution you like if you're targeting modern desktop hardware, which has the advantage that the quality of the effect will be much greater if you use a buffer resolution which matches the resolution of the render window.
    Once the buffer is set up we need to draw the scene to it. In the render() function, before the call to clear() add:

//update the refract buffer
auto defaultBuffer = m_refractBuffer->bind();
auto defaultViewport = getViewport();
setViewport(gp::Rectangle(bufferSize, bufferSize));

clear(CLEAR_COLOR_DEPTH, clearColour, 1.0f, 0);
m_scene->visit(this, &WaterSample::m_drawScene, false);

Calling bind() on the buffer calls the internal OpenGL bind function, meaning that any drawing we do now will happen on the refraction frame buffer, because it is now the currently bound object. We also store the result from bind() as it returns a pointer to the previously active buffer (the main window) which we need so we can restore it immediately after updating the refraction buffer. We also store the previous viewport for the same reason.
    Because this is the refraction pass of the effect, we don't actually want the water plane rendered on the refraction buffer. Adding a boolean parameter to the m_drawScene() function allows us to decide whether or not the water plane is included during the scene visit. Now we can clear() the buffer and visit() the scene, so that the scene is rendered to the refraction buffer, remembering to pass false as a parameter to visit(). When this is done, restore the previous buffer by calling its bind() function, and restore the viewport. Then we can draw the scene to the main window normally, including drawing the water plane by passing true to the scene's visit() function.
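The visit callback itself might look something like this (a sketch, assuming Gameplay's Node::getModel() interface - the boolean is passed through as the visit cookie):

#include <cstring>

bool WaterSample::m_drawScene(gp::Node* node, bool drawWater)
{
    auto model = node->getModel();
    if (model)
    {
        //skip the node named "Water" when the water plane isn't wanted
        if (drawWater || std::strcmp(node->getId(), "Water") != 0)
        {
            model->draw();
        }
    }
    return true;
}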
    After drawing the scene, we can use the sprite batch to draw a small preview of the refraction buffer:

if (m_showBuffers)
{
    m_refractBatch->start();
    m_refractBatch->draw(gp::Vector3(0.f, 4.f, 0.f), gp::Rectangle(bufferSize, bufferSize), gp::Vector2(426.f, 240.f));
    m_refractBatch->finish();
}

The parameters to the sprite batch draw() function allow us to define the source and destination rectangles of the refraction buffer's texture, as well as the scale. This is fortunate because it means that, even though the frame buffer has a resolution of 512 x 512, we can size and stretch the image to anything we like, as well as place it in the top left hand corner of the screen. m_showBuffers is a boolean member which can be toggled via keyboard input, providing the option to hide the preview. In the example source code I've chosen to use the space bar. Compile and run the program and you should see the now familiar scene, with a slightly smaller version drawn in the corner:



Now that the rendering and preview window are set up, it's time to modify the shader used to render the textured part of the scene, so that we can clip everything above the water level. GLES doesn't support glClipPlane, but we can still clip the output in the fragment shader using the equation Ax + By + Cz + D = 0 to represent the plane. To get the height of the water plane, find the Water node using the scene's findNode() function, right after loading the scene in initialise(). The height is the node's Y translation, which we can store in a member variable m_waterHeight. Next add a four component vector (Vector4) member m_clipPlane. This will be used to store the plane description according to our equation, and pass it to the fragment shader. We need to use a member variable here, as the plane needs to be set to zero when rendering the main scene so that it doesn't get clipped (and, later, reversed when rendering the reflection pass). This way we can bind the vector to a uniform in the shader via a private member function which simply returns a const reference to m_clipPlane.
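In code that might look something like this (a sketch - m_getClipPlane() is the function bound later in the post, and getTranslationY() is part of Gameplay's Transform interface):

//in initialise(), after loading the scene
m_waterHeight = m_scene->findNode("Water")->getTranslationY();

//private member function used to feed the u_clipPlane uniform
const gp::Vector4& WaterSample::m_getClipPlane() const
{
    return m_clipPlane;
}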
    In the render function, before drawing the refraction pass add:

m_clipPlane.y = -1.f;
m_clipPlane.w = m_waterHeight;

This describes our water plane as facing downwards (the first three components represent a normal vector pointing from the face of the plane), with the fourth component describing the height in world units. While the vector is set to this value clipping will be performed by the shader using this plane. As we don't want to clip any of the main scene, reset the plane

m_clipPlane = gp::Vector4::zero();

before drawing it.
    You won't see any clipping yet, however, as we need to modify the default Textured vertex and fragment shaders provided by Gameplay. In order to make plane clipping optional I took advantage of the define system Gameplay uses, so that adding CLIP_PLANE to the defines line of the watersample.material file enables clipping on meshes which use the Textured material. In the vertex shader we need to add two new uniforms

#if defined (CLIP_PLANE)
uniform mat4 u_worldMatrix;
uniform vec4 u_clipPlane;
#endif


so we can pass the clip plane into the shader. We also need a new varying so that the calculated clip distance can be passed to the fragment shader:

#if defined(CLIP_PLANE)
varying float v_clipDistance;
#endif


and, finally, in the main function:

#if defined(CLIP_PLANE)
v_clipDistance = dot(u_worldMatrix * position, u_clipPlane);
#endif

Taking the dot product of the current vertex in world space with the clip plane returns the signed distance from the plane to that vertex, which is then interpolated across fragments. This means that after adding the corresponding varying variable to the fragment shader it only takes a quick comparison to check whether or not to discard the current fragment:

#if defined(CLIP_PLANE)
if(v_clipDistance < 0.0) discard;
#endif

The modified shader will now use the plane described in u_clipPlane to decide where the current fragment lies, and discard it if necessary. This test is done right at the beginning of the fragment shader's main function, as there's no point doing any other processing on a fragment if it is to be discarded. Finally we need to bind the value of m_clipPlane in the project's code to the shader's u_clipPlane uniform. In the initialise() function after loading the scene, find the Ground node's model's material

auto groundMaterial = m_scene->findNode("Ground")->getModel()->getMaterial();

and then bind the function we created earlier to the u_clipPlane parameter like so:

groundMaterial->getParameter("u_clipPlane")->bindValue(this, &WaterSample::m_getClipPlane);

Phew. That's a lot to get your head around in one go. If you're a bit lost here's a brief rundown of what we did:

  • Stored the water plane's height by retrieving it from the scene node
  • Created a four component vector m_clipPlane to describe the clipping plane
  • Added a private function m_getClipPlane() which returns a const reference to m_clipPlane
  • Updated the clip plane's parameters during rendering
  • Modified the Textured vertex and fragment shaders to discard fragments based on the plane's value
  • Bound m_clipPlane's value to the shader in initialise() via the new function m_getClipPlane()

If you're still a little lost study the article's source code carefully. We'll be using most of this again later, when rendering the reflection buffer. If all went well compiling and running the project should present you with something similar to this:


    Now that the buffer is ready, we want to draw it on the water plane itself, which we can do by projecting it via the camera's WorldViewProjection matrix. In the article's source folder there are two shaders: watersample.frag and watersample.vert. The water material definition in watersample.material has also been updated to use these new shaders.
    At the moment the watersample vertex shader is pretty simple. It takes the incoming vertex position, multiplies it by the current WorldViewProjection matrix and assigns it to gl_Position, which is standard GLSL. It also assigns the value to a varying, v_vertexRefractionPosition, so that it is passed to the fragment shader. The fragment shader has a single sampler uniform to which we bind the texture of the refraction frame buffer in initialise(), right below where we bound m_clipPlane to the textured material:

auto waterMaterial = m_scene->findNode("Water")->getModel()->getMaterial();
auto refractSampler = gp::Texture::Sampler::create(m_refractBuffer->getRenderTarget()->getTexture());
waterMaterial->getParameter("u_refractionTexture")->setSampler(refractSampler);
SAFE_RELEASE(refractSampler);

The incoming vertex coordinates are in clip space, with a range of -1.0 to 1.0, and first need to be converted into the 0.0 to 1.0 range used for texture look-ups, which is performed by the fromClipSpace() function in the fragment shader.
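As a sketch, fromClipSpace() performs the standard perspective divide followed by a remap into texture space (the article's actual implementation may differ slightly):

vec2 fromClipSpace(vec4 position)
{
    //perspective divide into -1.0 - 1.0, then remap to 0.0 - 1.0
    return (position.xy / position.w) / 2.0 + 0.5;
}

The coordinates can now be used to sample the texture in the normal way, via texture2D(). For now we just output the result directly to gl_FragColor, which results in this: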



The texture from the refraction frame buffer is rendered onto the water plane as if it were projected from the camera (if you've done shadow mapping before then you've done projective texture mapping - this is the same principle, only we're projecting the image from the camera rather than a light source). This doesn't look very impressive, possibly even a little worse, as the lower resolution of the refraction buffer has blurred the output slightly, but we've made an important step towards the water effect. More importantly we've updated the Textured shader, and begun a new water shader, learning about texture projection along the way. These techniques are integral and will be built upon in the next part: rendering the reflection pass.

Part Three

References:
Eric Pacelli
Lauris Kaplinski
Riemer's XNA page

Source Code:
Github page

Previous Parts:
Part One

Tuesday, 9 September 2014

Water in OpenGL and GLES 2.0: Part 1 - Introduction

In an effort to bolster content on the blog I've been working on a four part article describing a water effect rendered in OpenGL, aimed mainly at games. There are a few sources on this topic already, to many of which I referred while writing the article, but I hope this particular instance will stand out in a couple of ways. Firstly this water effect is compatible with GLES 2.0, meaning it runs on a fairly wide selection of mobile devices. Secondly I wrote this specifically with the Gameplay3D framework in mind, although the techniques should be portable to other software / libraries. For those of you unaware, Gameplay3D is a cross-platform framework written in C++ with the aim of supporting the creation of games, particularly on mobile devices. It's quite mature now, actively developed by a group of professional developers at Blackberry, and is open source under the Apache 2.0 license. The main site is here, and the repository can be found on github. The 'next' branch is, in my opinion, stable enough to use and to take advantage of all the bug fixes over the current 2.0 release.
    In this first part I won't go into setting up a new Gameplay project, as everything you need to know can be found on the official wiki. Instead I'll skip ahead to setting up a 3D scene within which the water effect can be developed, before outlining the general theory behind what this article is trying to achieve. I have made the full source code for the project (bar the Gameplay library itself) available on github, which is also linked at the bottom of the page. Actually implementing the effect is covered over the next three parts.

 Assuming you have a new Gameplay project set up, you'll need to grab the article's source code from the repository. If you're developing on Windows, Gameplay uses Visual Studio 2013 by default, which offers pretty decent C++11 support. I have chosen to take advantage of this in the article source code, so if you plan to compile any of the source on another platform you will need to make sure C++11 is available. I have tested GCC 4.8 on Linux and clang on Android, and have found that they both work. If you want to use OSX or iOS then you'll have to experiment yourself, but I am led to believe C++11 is supported. Once your environment is set up and you have an empty template project, you can either replace it with the WaterSample.h and WaterSample.cpp from the source, or start developing your own alongside the article, using the source as a reference. I'll not be going through all of the code line by line, so if something appears to be missing from the explanation it is worth looking at the article source code. The repository also contains all the assets and resources used in this article, if you don't want to create your own.

Gameplay uses manual reference counting of shared objects so, in order to prevent memory leaks, it is important to make sure all of the objects in the code are allocated and released properly. The default Gameplay project for Visual Studio includes a DebugMem build which I highly recommend using. If any resources are not properly freed on program exit then, when using this build configuration, any reference counted objects still in memory will be reported in the debug window. The general rule of thumb is that any objects created with a Class::create() function must be freed or have their reference count updated with SAFE_DELETE or SAFE_RELEASE. Any pointers retrieved via find or get functions do not need to be updated, however.
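As a quick sketch of that rule of thumb (not taken from the article source):

//created with create() - we hold a reference which must be released
auto node = gp::Node::create("example");
//...use the node...
SAFE_RELEASE(node);

//pointers retrieved via find / get functions need no release
auto found = m_scene->findNode("someNode");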
    Probably the most crucial functions in any Game derived Gameplay class are the initialise() and finalise() functions. These are where resources which live as members of the class should be created and destroyed, and are where we'll load our scene. To set up the article scene make sure that the pond.gpb and water_sample.png files from the article source are copied to somewhere in the res/ folder of your project's working directory. The pond.gpb is an optimised binary file containing the two nodes making up the scene, and water_sample.png is used to texture them. You also need to make sure to copy the watersample.scene file and watersample.material file to the res/ directory, as these are used to tell the framework how to load the nodes from the binary file, and how to apply the texture. Finally, copy the default set of shader files provided with Gameplay (if the setup hasn't already), or edit the material file to point to the correct directory. You can read more about configuration files on the Gameplay wiki.

The scene lives as a member of the class, m_scene, and is loaded in the initialise() function with

m_scene = gp::Scene::load("path/to/watersample.scene");

To avoid leaks remember to add

SAFE_RELEASE(m_scene);

to finalise(). Get used to this, as any class members created in initialise() will need to be released in finalise(). Objects local to initialise(), however, should be released with SAFE_RELEASE as soon as they are no longer needed. To be able to view the scene we will need a camera node with a camera attached to it. This camera will also allow navigation within the scene. The camera and its nodes are created in initialise():

m_cameraNode = gp::Node::create("cameraNode");
m_cameraNode->setTranslation(camStartPosition);

auto camPitchNode = gp::Node::create();
gp::Matrix m;
gp::Matrix::createLookAt(m_cameraNode->getTranslation(), gp::Vector3::zero(), gp::Vector3::unitY(), &m);
camPitchNode->rotate(m);
m_cameraNode->addChild(camPitchNode);
m_scene->addNode(m_cameraNode);

auto camera = gp::Camera::createPerspective(45.f, gp::Game::getInstance()->getAspectRatio(), 0.1f, 150.f);
camPitchNode->setCamera(camera);
m_scene->setActiveCamera(camera);
SAFE_RELEASE(camera);
SAFE_RELEASE(camPitchNode);


Notice m_cameraNode is a member variable, and so will need to be released in finalise(). camera and camPitchNode, however, only exist locally and so are released as soon as we are done modifying them. The camera makes use of two nodes: m_cameraNode allows the camera to be yawed (that is, rotated around the Y axis) as well as translated in the scene. Its child node camPitchNode is used to pitch the camera up and down.
    To be able to actually see the scene on screen we need to add a call to clear() to the render() function, before visiting the scene with

m_scene->visit(this, &WaterSample::m_drawScene);

This function takes a reference to the m_drawScene() function, which it then calls on each node in the scene, the implementation of which is taken directly from the Gameplay wiki. With this added we should have the bare minimum to compile and run the project, which should display something like this (the view angle may differ depending on how your camera was initialised):

The scene contains a textured mesh, which is attached to one of the scene nodes, and a flat blue plane attached to a second node, which will eventually become the water. Unfortunately we can't yet move the camera, which would be nice to have when we look at the water later on, so let's add that first.
    The Game class (from which our project is derived) also has a set of virtual functions used to handle input events. We'll override two of them, mouseEvent() and keyEvent(), and use them to rotate the camera and to modify a bitmask, m_inputMask. Then, in the update() function, we use the state of this bitmask to apply a force to m_cameraNode, moving it around the scene.

First event handling:
bool WaterSample::mouseEvent(gp::Mouse::MouseEvent evt, int x, int y, int wheelDelta)
{
    switch (evt)
    {
    case gp::Mouse::MOUSE_MOVE:
    {
        auto xMovement = MATH_DEG_TO_RAD(-x * mouseSpeed);
        auto yMovement = MATH_DEG_TO_RAD(-y * mouseSpeed);

        m_cameraNode->rotateY(xMovement);
        m_cameraNode->getFirstChild()->rotateX(yMovement);
    }
        return true;
    case gp::Mouse::MOUSE_PRESS_LEFT_BUTTON:
        m_inputMask |= Button::Forward;
        return true;
    case gp::Mouse::MOUSE_RELEASE_LEFT_BUTTON:
        m_inputMask &= ~Button::Forward;
        return true;
    case gp::Mouse::MOUSE_PRESS_RIGHT_BUTTON:
        m_inputMask |= Button::Back;
        return true;
    case gp::Mouse::MOUSE_RELEASE_RIGHT_BUTTON:
        m_inputMask &= ~Button::Back;
        return true;
    default: return false;
    }

    return false;
}


void WaterSample::keyEvent(gp::Keyboard::KeyEvent evt, int key)
{
    if (evt == gp::Keyboard::KEY_PRESS)
    {
        switch (key)
        {
        case gp::Keyboard::KEY_ESCAPE:
            exit();
            break;
        case gp::Keyboard::KEY_W:
        case gp::Keyboard::KEY_UP_ARROW:
            m_inputMask |= Button::Forward;
            break;
        case gp::Keyboard::KEY_S:
        case gp::Keyboard::KEY_DOWN_ARROW:
            m_inputMask |= Button::Back;
            break;
        case gp::Keyboard::KEY_A:
        case gp::Keyboard::KEY_LEFT_ARROW:
            m_inputMask |= Button::Left;
            break;
        case gp::Keyboard::KEY_D:
        case gp::Keyboard::KEY_RIGHT_ARROW:
            m_inputMask |= Button::Right;
            break;
        }
    }
    else if (evt == gp::Keyboard::KEY_RELEASE)
    {
        switch (key)
        {
        case gp::Keyboard::KEY_W:
        case gp::Keyboard::KEY_UP_ARROW:
            m_inputMask &= ~Button::Forward;
            break;
        case gp::Keyboard::KEY_S:
        case gp::Keyboard::KEY_DOWN_ARROW:
            m_inputMask &= ~Button::Back;
            break;
        case gp::Keyboard::KEY_A:
        case gp::Keyboard::KEY_LEFT_ARROW:
            m_inputMask &= ~Button::Left;
            break;
        case gp::Keyboard::KEY_D:
        case gp::Keyboard::KEY_RIGHT_ARROW:
            m_inputMask &= ~Button::Right;
            break;
        }
    }
}


And then the update() function:
void WaterSample::update(float dt)
{
    //move the camera by applying a force
    gp::Vector3 force;
    if (m_inputMask & Button::Forward)
        force += m_cameraNode->getFirstChild()->getForwardVectorWorld();
    if (m_inputMask & Button::Back)
        force -= m_cameraNode->getFirstChild()->getForwardVectorWorld();
    if (m_inputMask & Button::Left)
        force += m_cameraNode->getRightVectorWorld();
    if (m_inputMask & Button::Right)
        force -= m_cameraNode->getRightVectorWorld();

    if (force.lengthSquared() > 1.f) force.normalize();

    m_cameraAcceleration += force / mass;
    m_cameraAcceleration *= friction;
    if (m_cameraAcceleration.lengthSquared() < 0.01f)
        m_cameraAcceleration = gp::Vector3::zero();

    m_cameraNode->translate(m_cameraAcceleration * camSpeed * (dt / 1000.f));

}

Using the forward and right vectors of the camera nodes we can calculate a direction vector in world coordinates, which is then applied as a force using Newton's second law of motion: f = ma, or force = mass * acceleration. The constant values mass and friction can be found at the top of the .cpp file in an anonymous namespace, where I prefer to group any constant values. Compile and run the scene and you should find that you can now move the camera similarly to a first person shooter, looking around with the mouse and moving using either the cursor keys or W, A, S and D.
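For illustration, those constants might be grouped like this (the values here are placeholders - see the article source for the real ones):

namespace
{
    //hypothetical values - tune to taste
    const float mouseSpeed = 0.1f;
    const float camSpeed = 10.f;
    const float mass = 5.f;
    const float friction = 0.9f;
    const gp::Vector3 camStartPosition(0.f, 10.f, 25.f);
}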

 So now that we have a scene set up and ready to get wet, let's take a moment to look at the theory behind the water effect. Firstly, this is purely a visual effect - no physics are involved - and secondly, it is meant to supplement the atmosphere of a game, particularly with mobile development in mind, so there'll be no interaction with the water. It is a relatively cheap effect, and can be taken much further beyond this article, as it provides the basis for more advanced effects such as those seen in game engines like Source.

The effect is a multi-pass effect, composed of three scene renders per frame, two of which are done to off-screen buffers. The first render is used to create the refraction of the water. Everything in the scene above the water line is clipped and the remaining fragments rendered to a frame buffer.

The second render is used to create the reflection. The scene is this time clipped below the water line, and also inverted vertically.

 Finally the third pass renders the two images to the screen, blending them via a Fresnel term calculated from the current view position, and distorting them using a normal map to give the appearance of waves.

While this theory is pretty general, the implementation varies between platforms, languages and even libraries. In the second part of this article I'll explain how to set up a frame buffer in Gameplay, and use it to render the refraction pass. I'll also explain about how the image is projected onto the water plane of the scene via GLES compatible shaders. In part three I'll extend this technique to render the reflection pass, and in the final part cover blending the passes, as well as improving the overall effect with some basic animation.

Part Two

References:
Eric Pacelli
Lauris Kaplinski
Riemer's XNA page

Source Code:
Github page