Tuesday, 30 June 2015

A new game - Pseuthe

UPDATE: You can now download the latest binaries for Windows and Linux from itch.io!

So I've been busy beavering away behind the scenes (and neglecting this blog) working on a casual game written using everyone's favourite C++ library: SFML. Pseuthe (pronounced 'soothe') started out as an experiment in Newtonian physics, but I was soon elbow deep adding the prettiest graphics I could muster, as well as some semblance of gameplay. You take on the role of a deepwater plankton, feeding on all the happy microbes you can find before your being winks out of existence. Last as long as you can, and don't eat any of the bad microbes (highlighted by their red markings). Here's a video to give you some idea of what it's all about:

I'm still tweaking things here and there, so there are no binaries available, but it is open source so you can compile it yourself. The repository contains a Visual Studio 2013 solution to get you started on Windows, and I've tested the CMake / make files with GCC on Linux (mainly Ubuntu and Arch). It should compile on OSX too, but I don't have a machine to test it on. If anyone wants to contribute OSX support to the CMake file, I'll gratefully accept any pull requests ;)

Tuesday, 21 April 2015

Tmx loader for SFML - an update!

It's been a while since I last updated my first and, apparently, most popular project but I've finally found some time to sit down and tackle the ever increasing list of issues. I've not fixed everything, but have made some significant improvements, so here they are in brief:

New debug levels. It was brought to my attention that the loader doesn't need to be so verbose all the time, so I've made debug output more flexible. Firstly, the destination of debug messages can be controlled with three defines:


These allow messages to be directed to either the console, a log file on disk, or both. Logging is skipped altogether if none of these are defined, which allows flexible logging between different builds, such as Debug and Release. The log class also has a function which allows setting a series of flags to define the level of logging, with Information, Warning and Error levels available. These are set via

Logger::SetLogLevel(Logger::Warning | Logger::Error);

and can be set at any point in code.
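As an illustration, a flag set like this can be implemented with a simple bitmask. This is a hedged sketch: only SetLogLevel() and the three level names come from the actual loader, the rest of the class is illustrative.

```cpp
#include <cassert>
#include <cstdint>

// Minimal sketch of a bitmask-based log level, in the spirit of
// Logger::SetLogLevel(Logger::Warning | Logger::Error). ShouldLog() is an
// illustrative helper, not part of the real loader.
class Logger
{
public:
    enum Level : std::uint32_t
    {
        Information = 1 << 0,
        Warning     = 1 << 1,
        Error       = 1 << 2
    };

    static void SetLogLevel(std::uint32_t flags) { m_flags = flags; }

    //returns true if a message of the given level would be emitted
    static bool ShouldLog(Level level) { return (m_flags & level) != 0; }

private:
    static std::uint32_t m_flags;
};

std::uint32_t Logger::m_flags = Logger::Information | Logger::Warning | Logger::Error;
```

Because each level is a distinct bit, any combination of levels can be enabled with a single OR expression.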

Texture fixes. There was a small but rather daft error which meant adding new textures to a loaded map would invalidate existing texture references. This has now been fixed.

Rendering optimisation. Large maps which contained a lot of TileQuad data would run abysmally, because the renderer checked *every* TileQuad each frame for updates. This was a silly mistake, and I've improved performance twenty-fold in some cases by keeping a list of dirty quads and only updating those. Maps with a lot of moving tiles, such as map objects with attached textures, will still see performance drop in proportion to the number of moving objects; in extreme cases it's probably better to handle such game entities outside of the map loader, in a scene graph or physics world for example (consider that all tiles are updated even when off screen, so this would be much better optimised if handled elsewhere). Maps which are mostly static will draw much faster now, however. Vertex arrays which cover only a small portion of a map are also culled when not visible on screen. This gives a small performance boost, particularly to maps with a lot of layers or tile set textures.
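The dirty-quad idea can be sketched in miniature like this (illustrative names, not the loader's actual classes): quads register themselves in a dirty set when moved, and the per-frame update only touches those.

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_set>
#include <vector>

//illustrative stand-in for the loader's tile quad
struct TileQuad
{
    float x = 0.f, y = 0.f;
};

class Renderer
{
public:
    std::size_t addQuad() { m_quads.emplace_back(); return m_quads.size() - 1; }

    void moveQuad(std::size_t index, float dx, float dy)
    {
        m_quads[index].x += dx;
        m_quads[index].y += dy;
        m_dirtyQuads.insert(index); //mark for update rather than touching all quads
    }

    //called once per frame: only dirty quads are processed;
    //returns how many were updated
    std::size_t update()
    {
        auto count = m_dirtyQuads.size();
        //(real code would rewrite the vertex data for each dirty quad here)
        m_dirtyQuads.clear();
        return count;
    }

private:
    std::vector<TileQuad> m_quads;
    std::unordered_set<std::size_t> m_dirtyQuads;
};
```

With a mostly static map the dirty set stays empty, so the per-frame cost is close to zero regardless of map size.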

The updates are all available via the source on Github, and the Readme contains more information on using the debug output.

Wednesday, 18 February 2015

Crush! Breakdown Part 2 - Collision World

In part one of the Crush! breakdown I outlined the scene graph used to transform and render game entities, as well as some of the controller classes used to manipulate nodes within the scene. Entities are created by attaching a combination of components to a node, one of which is the collision component - used to make nodes react to external forces and to resolve collisions between bodies. My instinct for this kind of thing is usually to reach for a library such as Chipmunk or Box2D, but experience has taught me that this is often overkill for a small project, and will probably not yield great results if the game isn't inherently physics based (Angry Birds, for example). On top of this I'm always willing to investigate new areas of programming, so giving the subject of physics and collision handling some serious study appealed to me.
    To keep the scope from becoming daunting I reduced the complexity as much as possible by looking carefully at what would actually be needed. After some cogitation I decided that all I needed were rectangular bodies with no rotation, which dramatically simplifies collision detection. Normally I would give bodies a mass property so that each body's acceleration could be calculated from the current force acting upon it, using Newton's second law of motion: force = mass * acceleration. Even this could be reduced, so that a collision body only needed a velocity vector, a position, and a bounding box.

struct CollisionBody
{
    sf::Vector2f m_velocity;
    sf::Vector2f m_position;
    sf::FloatRect m_boundingBox;
};

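As a runnable aside, here's how such a body can be stepped each frame with simple Euler integration - a minimal sketch with a plain struct standing in for the SFML types, and applyForce()/step() as illustrative names:

```cpp
#include <cassert>

//stand-in for sf::Vector2f
struct Vec2 { float x = 0.f, y = 0.f; };

struct CollisionBody
{
    Vec2 m_velocity;
    Vec2 m_position;

    //external forces simply add to the velocity
    void applyForce(Vec2 f)
    {
        m_velocity.x += f.x;
        m_velocity.y += f.y;
    }

    //Euler integration: position += velocity * dt
    void step(float dt)
    {
        m_position.x += m_velocity.x * dt;
        m_position.y += m_velocity.y * dt;
    }
};
```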
Each frame the body's position is updated by adding the current velocity to it. The velocity is adjusted by either applying an external force (by adding another vector to the velocity), or as a result of a collision. Collision bodies exist within a CollisionWorld class, which is responsible for creating bodies, detecting collisions between them, and applying any resulting forces. The CollisionWorld class also acts as one of the controller classes, and so the instance lives alongside the other controllers in the GameState class. Apart from the factory functions of the CollisionWorld class, the beef of the code exists inside:

CollisionWorld::update(float dt)
    //test which bodies intersect and mark as a collision pair
    for(const auto& bodyA : m_bodies)
        for(const auto& bodyB : m_bodies)
            if(bodyA.get() != bodyB.get()
            && bodyA->getBoundingBox().intersects(bodyB->getBoundingBox()))
                m_collisions.insert(std::minmax(bodyA.get(), bodyB.get()));

    //for each collision pair calculate the manifold and resolve the collision
    for(const auto& pair : m_collisions)
        auto manifold = getManifold(pair);
        pair.first->resolve(manifold, pair.second);
        manifold.z = -manifold.z; //negate the penetration for the second body
        pair.second->resolve(manifold, pair.first);

    //apply gravity to each body and perform a physics step
    for(auto& body : m_bodies)
        body->applyGravity(m_gravity);
        body->step(dt);

The update function is performed in three main steps. First each body is tested against the others for intersection using its bounding box. If there is an intersection the pair of bodies is inserted into a std::set using std::minmax(), which makes sure that each pair is inserted only once. Potentially this step could be optimised with some kind of spatial partitioning, so that bodies are only tested against other nearby bodies, but for a small game it wasn't needed, and I omitted it for the sake of simplicity.
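As a quick illustration of the std::minmax() trick, using plain ints in place of body pointers: both orderings of a pair normalise to the same set entry, so each collision is only stored once.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <set>
#include <utility>

//demonstrates pair de-duplication: (a, b) and (b, a) produce the same
//ordered pair via std::minmax, so the set ends up with a single entry
std::size_t countUniquePairs()
{
    int bodyA = 1, bodyB = 2;
    std::set<std::pair<int, int>> collisions;
    collisions.insert(std::minmax(bodyA, bodyB)); //A overlaps B
    collisions.insert(std::minmax(bodyB, bodyA)); //B overlaps A - same entry
    return collisions.size();
}
```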
    The second step is to calculate the collision manifold of each intersecting pair. The manifold contains a normalised vector perpendicular to the intersected surface, and a value stating the depth of intersection. This is usually the minimum information needed to resolve a collision between two objects - and rectangle-only collision is vastly simplified by the fact that there are only four possible normal vectors, one for each side of the rectangle, reducing the complexity of manifold calculation dramatically. In my calculation function I took advantage of the fact SFML's rectangle class returns the intersection area as a new rectangle, and you can see the full implementation here. Handily I could fit the two component normal vector along with the penetration depth into a single sf::Vector3, which made it an easy value to pass around. If you're interested in manifold generation for more complex interactions, there is an article here which I found worth reading. The last step of the CollisionWorld update function applies a pre-defined gravity force to each body (the gravity value is passed to the CollisionWorld constructor), and executes each body's step() function.
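For illustration, here's a hedged sketch of how a manifold can be derived from the overlap rectangle. The stand-in types and the getManifold() signature here are mine, not the exact Crush! implementation, but the idea is the same: the normal is whichever axis has the smaller overlap, and the penetration is that overlap's size.

```cpp
#include <cassert>

//stand-ins for sf::FloatRect and sf::Vector3f
struct Rect { float left, top, width, height; };
struct Vec3 { float x, y, z; }; //x/y hold the collision normal, z the penetration depth

//relX/relY: direction from body A's centre to body B's centre
Vec3 getManifold(const Rect& overlap, float relX, float relY)
{
    Vec3 manifold{0.f, 0.f, 0.f};
    if(overlap.width < overlap.height)
    {
        //push out along x - one of only two possible horizontal normals
        manifold.x = (relX < 0.f) ? 1.f : -1.f;
        manifold.z = overlap.width;
    }
    else
    {
        //push out along y
        manifold.y = (relY < 0.f) ? 1.f : -1.f;
        manifold.z = overlap.height;
    }
    return manifold;
}
```

Resolving the collision then amounts to moving the body along the normal by the penetration depth.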

Each body has two important functions, resolve() and step(). The resolve() function is used to decide how the body should react to the collision manifold data. As each body type needs to react slightly differently this is where behaviour customisation is applied. Each body has a currently active state defining its behaviour at that point in time. I took this idea from the state pattern (again from Game Programming Patterns), and created a BodyBehaviour class from which body type specialisations are inherited. This allows collisions to resolve themselves in specific ways, such as water being absorbent, or the ground being solid - as well as giving bodies the opportunity to raise body specific events. When a player body is destroyed a PlayerDied event can be raised, and so on. Physics values such as gravity and friction can also be intercepted by the active behaviour and modified if necessary, velocity vectors reflected about the manifold normal, or penetration values negated.
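To illustrate the behaviour idea, here's a minimal sketch of the state pattern applied to collision resolution. The class names, the one-dimensional velocity and the damping factor are purely illustrative, not the Crush! types:

```cpp
#include <cassert>
#include <memory>

struct Manifold { float x, y, z; }; //normal in x/y, penetration in z

//base class from which body type specialisations inherit
struct BodyBehaviour
{
    virtual ~BodyBehaviour() = default;
    virtual float resolve(const Manifold& m, float velocity) = 0;
};

struct SolidBehaviour : BodyBehaviour
{
    //solid ground reflects the velocity
    float resolve(const Manifold&, float velocity) override { return -velocity; }
};

struct WaterBehaviour : BodyBehaviour
{
    //water absorbs: damp the velocity rather than reflect it
    float resolve(const Manifold&, float velocity) override { return velocity * 0.5f; }
};

struct Body
{
    std::unique_ptr<BodyBehaviour> behaviour;
    float velocity = 0.f;

    //delegate resolution to whichever behaviour is currently active
    void resolve(const Manifold& m) { velocity = behaviour->resolve(m, velocity); }
};
```

Swapping the active behaviour at runtime changes how the same body reacts to the same manifold, which is exactly what makes the pattern useful here.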
    The body step() function then applies any changes made by the resolve() function by integrating the current time step with the body's velocity, and then moving the body. It also does some simple bounds checking and moves any bodies which may have tunneled out of the play area to a reasonable place at the top of the world. If the body is attached to a node in the scene graph then that node's position is then updated in the scene. Here is a video of the initial physics setup:

The red blocks are solid bodies, green are enemies, and blue is the player.

This is a very simplified overview of the collision system in Crush!, admittedly, but it's very difficult to go very far into detail in a single post. Crush! is open source, however, and the full collision code can be seen here.

Tuesday, 17 February 2015

Crush! Breakdown Part 1 - Scene setup

After spending any amount of time on a project I like to look back over what I did and break it down in a kind of postmortem. I find some retrospection helps me to consider what it is I've done wrong, so that I can learn from the experience before moving on. The starting point for this particular project was creating a scene and getting it rendering in a modular and extensible way. I'm actually pretty pleased with what I did, although I do wish I had considered networking support early on in design, as it really needs to be baked in from the start (and hence will probably never be added to Crush!).
    I started with the concept of a scene graph - a series of parented nodes which allow draw calls and transformations to be passed across siblings and down through children. Scene graphs are well documented and there are resources all over the internet for learning about them, so I'll not go too far into detail here. There is even an example in the SFML book, which I used as a starting point as I was using SFML as the main library. The scene graph in Crush! varies from the book's implementation, however, in that instead of relying on inheritance for distinguishing node types it takes a component based approach. The scene graph exists to maintain the transformations of each scene node, and allows rendering of the scene in as compact a way as possible. The actual drawing of node representations and the behaviour of nodes (including how they are transformed in the scene) is left to those components, so each node can behave independently, depending on the collection of components attached to it.
    The entire node graph is kept inside a class which represents a scene. This class is essentially the root node of the graph, with a few key differences to implement ownership semantics. As well as the graph the Scene class owns any lights which may exist, as well as cameras used for rendering the scene. This means the Scene class can properly set up any views and perform lighting calculations each frame, before moving down the graph and rendering any drawables attached to the nodes. Internally the Scene's draw function looks a bit like this (in pseudo code):

Scene::draw(RenderTarget rt, States states)
    states.shader = m_lightingShader;
    for(const auto& l : m_lights)
        m_lightingShader.setParameter(lightParam, l.property);

    m_sceneGraph.draw(rt, states);

I'll cover the actual lighting and camera set up in another post. For now it's enough to know that the scene is responsible for the lights and cameras which have been created (and attached to nodes as part of the component strategy), which it then uses to update the shader system each frame, before drawing the scene graph. This also nicely encapsulates much of the rendering, so that externally the entire scene can be drawn at once. The overall structure of the game uses a state stack, and the current scene is a member of 'GameState' (other states being PauseState, MenuState and so on), so when the GameState is created a new scene is built from information loaded from a map file, and drawn with

m_scene.draw(m_renderWindow, states);

With little else to do to the scene the code in GameState can be kept relatively clean and easy to read, with all the implementation details tucked away inside the Scene class.

Using a component based approach with the scene nodes also helps with this, as each component can belong to a parent class where its details are relevant, without cluttering up the Node class itself. I already stated that cameras and lights are components, the Camera class being not much more than a wrapper around sf::View, and Light a small struct which contains colour and falloff values. These can be attached to nodes so that they take on any transformations as the node moves around the scene, without having to keep their own transform specific data. Lights and cameras both only exist within the scene so it makes sense that the Scene class should own all the instances, and pass out references to them should they need to be modified. For example:

auto& light = m_scene.createLight(colour, falloff);
Node::Ptr node;
//do other stuff to node

The same goes for cameras. Internally the Scene class creates the light (or camera), providing that certain criteria are met, and returns a reference to it so that it can be modified if necessary, and then attached to a node. Node::Ptr is a typedef for std::unique_ptr<Node> as nodes will usually need to be dynamically added and removed from the scene as game play progresses. Adding the node to the scene allows the Scene class (and, internally, the scene graph) to take ownership of the node, which is why I use a unique_ptr as opposed to a shared_ptr.
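As a small illustration of the ownership semantics (with stand-in names, not the actual Crush! classes): passing the unique_ptr into the scene leaves the caller's pointer empty, so there is always exactly one owner.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

//stand-ins for the Crush! scene node types
struct Node { float x = 0.f; };
using NodePtr = std::unique_ptr<Node>;

class Scene
{
public:
    //taking the unique_ptr by value transfers ownership to the scene
    void addNode(NodePtr node) { m_children.push_back(std::move(node)); }
    std::size_t nodeCount() const { return m_children.size(); }

private:
    std::vector<NodePtr> m_children;
};
```

A shared_ptr would allow stale external owners to keep nodes alive after removal from the scene, which is exactly what the unique_ptr design rules out.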
    The other main components used in the game are drawables - classes which inherit sf::Drawable, and collision bodies. I used sf::Drawable as the class type for rendering nodes rather than sf::Sprite, as most of the drawables I used were custom classes, such as the water effect or animated sprites. Collision bodies represent a very basic physics engine which only supports rectangular collision detection with no rotation. For this game it was enough, and using a full blown system such as Box2D seemed overkill. It also meant that I didn't have to worry about unit conversion and could keep all values within a single domain. Collision bodies all belong to a 'CollisionWorld' class which takes care of all the physics simulation, collision detection and collision resolution. Bodies can be requested from the CollisionWorld in a similar way to how lights/cameras are requested from the Scene class. The returned bodies remain owned by the collision world, and references are attached to the nodes as needed, so that the nodes can be transformed and updated as bodies move around the world and interact with each other. I'll go into full detail of the CollisionWorld class in another post.

This was enough to create a renderable scene, and to make the scene nodes interactive whilst remaining reasonably decoupled from the other classes. I pictured it as a set of playing pieces laid out on a game board, ready for the player to command. To be able to manipulate these playing pieces without directly hooking up too much code, I turned to the observer pattern, as described in Game Programming Patterns by Robert Nystrom (although he's far from the first to write about it, of course). Here, perhaps, is where I would make a slight change in hindsight: while the pattern worked very well it did become a little spaghetti-like in places, and it isn't always obvious which classes are observing which - in future I would perhaps replace this pattern with a message bus.
    In this instance, though, I followed through with the idea of the scene nodes being 'observable', watched by a set of 'controller' classes with the ability to manipulate the playing pieces, each responsible for its own part of the game. Controller classes include the player, the scoreboard, the audio system, the physics world and a controller responsible for map data loaded from an external file. This also provides handy encapsulation for drawable objects, such as the player controller looking after an animated sprite and making sure the correct animations are played, or the map controller being responsible for creating the world geometry which makes up the scenery. Each of these controllers can then provide a reference to drawable items which are attached to the corresponding nodes in the scene graph. The scene graph needs to know nothing of the internal implementation of these drawables, only how to draw them. The controllers need to know little if anything of each other either; they just watch the scene and wait for events to be raised via the observer pattern. If an event is pertinent to a particular controller, then that controller will act on it.
For example if a PlayerDied event is raised then the audio controller will play a specific sound, the scoreboard controller will reduce the number of lives and points, and the player controller will reset the player's position.
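A minimal sketch of this observer arrangement might look like the following - the event and controller names are illustrative rather than the actual Crush! types, but the shape is the same: every controller sees every event, and each acts only on the ones it cares about.

```cpp
#include <cassert>
#include <vector>

enum class Event { PlayerDied, NpcDied };

struct Observer
{
    virtual ~Observer() = default;
    virtual void onNotify(Event e) = 0;
};

struct ScoreController : Observer
{
    int lives = 3;
    void onNotify(Event e) override { if(e == Event::PlayerDied) --lives; }
};

struct AudioController : Observer
{
    int soundsPlayed = 0;
    void onNotify(Event e) override { if(e == Event::PlayerDied) ++soundsPlayed; }
};

//the observable subject - in Crush! this role is played by the scene nodes
struct Subject
{
    std::vector<Observer*> observers;
    void notify(Event e) { for(auto o : observers) o->onNotify(e); }
};
```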
    Finally, to enable the controller classes to manipulate the scene, I used a command queue which is more or less identical to that found in the SFML Game Development book. Each controller class keeps a reference to the command stack, so that when it needs to update the scene it can create a command targeted at a specific node or set of nodes, and place it on the stack. At the beginning of each frame the entire command stack is executed so that the scene is updated. After which the collision / physics world is updated and collisions resolved, the controllers respond to any events raised by the updated state, before the entire scene is then drawn. The flow within the GameState class then looks like this:

//each frame, in pseudo code:
//1. execute the command stack so the controllers' queued commands update the scene
//2. update the collision world - physics step and collision resolution
//3. controllers respond to any events raised by the updated state
//4. draw the scene:
m_scene.draw(m_renderWindow, states);

Crush! is open source, so if you want to take a look at the final implementation, or just have a play you can get it from the Github page.

Part 2: Collision

Monday, 16 February 2015

Crush! A 2D platformer made in SFML

Yet again many months seem to slip by while this blog goes neglected.. although this time for good reason! Pretty much ever since my last post (which has since been integrated into the official Gameplay samples :D) I have been hard at work on a two player competitive platform game named Crush. The aim of the game is to crush all the bad guys by dropping or sliding heavy crates into them and, in two player, crush more than your opponent whilst vying for precious time with the Magic Hat! The longer a player wears the hat the more points they are awarded at the end of the round. Here's a (slightly outdated) video of it in action:

The game itself is far from complete, but I've decided to release it open source in its current state to get some wider opinion on it. Currently the source is available from Github, although there are no binaries yet. The Windows version (assuming you choose to use the included Visual Studio project) also includes source written in C# / .NET 4 for a level editor and a sprite sheet animation data editor. I've made a short video which briefly covers how they work:

In the vein of my previous project Space Racers I plan to write a few blog entries about the code design and how the mechanics of the game work, as well as perhaps reflecting on what I haven't got right, and how I'll address that next time round. As usual all feedback is welcomed, there's a thread on the SFML forums here.

The first part of the breakdown is now available here.

Friday, 12 September 2014

Water in OpenGL and GLES 2.0: Part 4 - Blending it all together

If you've been following the previous three parts of this article then by now you must be itching to see how the fruits of your labour are going to look, so let's dive right in. To get a hint of the final outcome we can modify the watersample.frag file so that gl_FragColor is a straight blend of the reflection and refraction images:

gl_FragColor = mix(refractionColour, reflectionColour, 0.5);

This performs a 50/50 mix of the two images, with a final result which looks like a slightly odd frozen lake.

This is nice, but we can do better! For one thing the amount of blending each fragment receives should vary based on the perceived angle of the camera's eye position relative to any given point on the water plane. That is, the more directly we look at the water the more transparent it should appear, with more of the refraction image shown; conversely, the shallower the angle of observation the more reflective the surface should be. This is done by approximating the Fresnel term, a floating point number calculated for any given fragment, which replaces the constant 0.5 value in the mix() function. There are a variety of methods of doing this, all of which (as far as I can tell) require a normal vector representing the water's surface normal at any given point - so that we can measure the angle between the camera's viewpoint and the fragment by taking the dot product of the eye position with the normal vector. To start with we could use a single up facing vector which represents the entirety of the plane, but here is a good opportunity to add some extra detail to the water's surface.
    Using a normal map we can store a whole range of normal vectors, mapped across the surface of the plane, each representing a slightly different angle producing a perturbation of the surface. As an added bonus the red and green channels of the normal map can be used to create a slight distortion in both the reflection and refraction images, adding another level of detail.
    To map the normal texture to the water plane we need to do some modifications to the watersample shaders. First we need to add the texture coordinate attribute a_texCoord to the vertex shader, which is automatically passed in by Gameplay. Then in the main() function pass the value directly to a new varying variable v_texCoord so that it is available in the fragment shader. As well as adding the new v_texCoord to the fragment shader, we also need to add a sampler uniform u_normalMap so that we can pass in the normal texture. To bind the actual texture to the uniform we don't actually need to do anything in the project's code. Gameplay provides a nice auto-binding mechanism, allowing us to pass the texture in simply by editing the watersample.material file. Add

sampler u_normalMap
        mipmap = true
        wrapS = REPEAT
        wrapT = REPEAT
        minFilter = LINEAR_MIPMAP_LINEAR
        magFilter = LINEAR

        path = res/images/water_normal.png


to the material water definition, or look at the article source code for part four. Assuming the path points to a valid image file the texture will automatically be loaded and bound to the shader when the program starts. Once this is all set up we can return to the fragment shader, and start using the normal data stored in the texture.
    Immediately in the main() function we sample the normal map, and convert it to normalised values:

vec4 normal = texture2D(u_normalMap, v_texCoord * textureRepeat);
normal = normalize(normal * 2.0 - 1.0);

textureRepeat is a constant value which allows tiling of the texture to better fit the water plane. Set it to 2.0 to make the texture repeat twice in both the S and T direction, 12.5 to make it repeat 12.5 times and so on. Before we start calculating any reflection and blend parameters, let's add some distortion to the output.

//distortion offset
vec4 dudv = normal * distortAmount;

//refraction sample
vec2 textureCoord = fromClipSpace(v_vertexRefractionPosition) + dudv.rg;
textureCoord = clamp(textureCoord, 0.001, 0.999);

distortAmount reduces the amount of distortion added, as too much can easily ruin the effect, and is typically a small number such as 0.05. The red and green values of dudv are then added to the texture coordinates, offsetting them slightly, before clamping the coordinates within a reasonable range. The refraction texture is then sampled in the normal way with the newly offset coordinates, and the process repeated for the reflection texture. The output should now be a nice wavy distorted image (assuming you're using the normal map texture supplied with the article source, although you can use any normal map texture you like).

After the reflection and refraction textures have been sampled, we are now ready to approximate the Fresnel value, and use it to blend the textures together. To do this we need the eye position relative to the current vertex, so we can take the dot product of it with the current normal value. The watersample vertex shader needs two new uniform variables

uniform mat4 u_worldMatrix;
uniform vec3 u_cameraPosition;

and a new varying

varying vec3 v_eyePosition;

so that the calculated position can be passed along to the fragment shader. Gameplay provides the worldMatrix and cameraPosition values for us as standard, and we can auto bind these in the material file the same way as we did the normal map, which saves having to modify the project code:

u_worldMatrix = WORLD_MATRIX
u_cameraPosition = CAMERA_WORLD_POSITION

Then, in the main() function of the vertex shader, we can calculate the eye position

v_eyePosition = u_cameraPosition - (u_worldMatrix * a_position).xyz;

With the eye position available in the fragment shader we can begin to use it to calculate the Fresnel value. Before we can use it, however, the eye position needs to be converted to the tangent space coordinates used by the normal map (or we could just use an object space normal texture - but that would upset the distortion factor). Due to the fact the water plane is fixed horizontally we can use a set of constant vectors to represent the plane's normal, tangent and bitangent vectors (if the plane was oriented in any other way we'd probably have to pass these values in either as an attribute or a uniform value), and use them to move the eye position into tangent space

const vec4 tangent = vec4(1.0, 0.0, 0.0, 0.0);
const vec4 viewNormal = vec4(0.0, 1.0, 0.0, 0.0);
const vec4 bitangent = vec4(0.0, 0.0, 1.0, 0.0);

vec4 viewDir = normalize(vec4(v_eyePosition, 1.0));
vec4 viewTanSpace = normalize(vec4(dot(viewDir, tangent), dot(viewDir, bitangent), dot(viewDir, viewNormal), 1.0));

then create a reflected vector of the view and dot it with the normal to get our approximated Fresnel term

vec4 viewReflection = normalize(reflect(-1.0 * viewTanSpace, normal));
float fresnel = dot(normal, viewReflection);

we now have our value to feed into the mix function:

gl_FragColor = mix(reflectionColour, refractionColour, fresnel);

Load up the scene and you should see the water really beginning to take shape. Moving around the scene you'll notice the blending of the reflection and refraction map change to match your view. One thing is still not right though, and that is the fact that the water is still apparently frozen. We can change this with a simple new uniform in the fragment shader

uniform float u_time;

This is simply going to be a floating point value which increases over time. In the article's source folder there is a small utility class called Timer, which abstracts the Gameplay clock, although you can use getGameTime() directly if you prefer. Create a private const function to return its value, preferably divided by some amount (else the animation will run waaay too fast), and use it to bind the elapsed time to the new shader uniform. In the fragment shader add the time to the coordinates of the normal map look up.

vec4 normal = texture2D(u_normalMap, v_texCoord * textureRepeat + u_time);

This will have the effect of offsetting the normal map texture, scrolling it across the surface of the plane, and creating a simple yet pleasing animation. If you get odd stretched lines across the surface make sure to check that the sampler settings in your water material have wrapS and wrapT set to repeat.

That pretty much sums up what I set out to describe in this article, but there is plenty more which could be added to improve the effect. For instance no lighting is taken into account in the fragment shader, which, once added, could also be used in conjunction with the normal map to calculate specular highlights on the surface of the water. The water also looks very clean; it is entirely possible to calculate the depth of the water and blend it with a colour so that it appears darker and murkier the deeper you go.

Here's a short video of the final version of the project, and the water effect running on my Moto G with Android 4.4.2

Eric Pacelli
Lauris Kaplinski
Riemer's XNA page

Source Code:
Github page

Previous Parts:
Part One
Part Two
Part Three

Thursday, 11 September 2014

Water in OpenGL and GLES 2.0: Part 3 - Reflection

Continuing from the previous part of this article on creating a water effect in Gameplay3D, in this part we'll cover creating reflections on the surface of the water. It is important that you have read and completed part two, and that you have the refraction buffer drawing, previewed, and projected onto the water plane. This is because before we can continue we need to replicate the refraction buffer with a new member *m_reflectionBuffer, as well as a new sprite batch *m_reflectBatch to draw the preview. Add these to the project, initialise them in the initialise() function, release and delete them in the finalise() function, and update the render() function so that the scene is drawn to the new reflection buffer, and the reflection buffer preview is drawn next to the preview window of the refraction buffer - all in the same way as the refraction buffer.
    Once you have the scene set up we can start to modify the process, so that instead of getting a duplicate of the refraction buffer, we actually get a reflection. Firstly modify the clip plane settings in the render function right before drawing the reflection buffer:

m_clipPlane.y = 1.f;
m_clipPlane.w = -m_waterHeight;

By inverting the normal direction and the plane height the plane now faces the opposite direction. When you compile and load the scene you should see in the preview window that the grass is kept, and that the bottom of the pond is clipped away instead. This is because we want to reflect the scene as it appears above the water. Next we need to consider how to invert the image vertically, as a reflection would appear in the water. A reflection isn't simply the camera's image turned upside down, however. What we see is, in fact, what would be seen by a camera below the water plane, targeted at the same point as the scene's camera:

If the scene's main camera is camera A, then the reflection it sees is the same as if the scene were viewed from camera B. If you've been reading the reference articles linked at the bottom of these posts, you'll see each one offers its own implementation of this camera set up. If we were using raw OpenGL the preferable way would be to use a reflection matrix but, as this article is based around the Gameplay framework, the option is not particularly viable. An alternative would be to scale the entire scene in the Y axis by -1 during the reflection pass, which is possible, but has the drawback of not easily being able to store the WorldViewProjection matrix (more on this shortly). Finally we could create a second camera in place of camera B on the diagram, by taking the forward and right vectors of the scene camera, computing the cross product of the two vectors to find the up vector, and using them to compute a new LookAt matrix each frame, to orient camera B in the right direction. The latter seems a little heavy to do each frame so I settled on (perhaps controversially) creating a second camera in the scene, and having it follow the movements of the main camera, only mirrored about the water plane. In the initialise() function directly after creating the scene camera:

//add a second camera to draw the reflections
m_reflectCamNode = gp::Node::create("reflectCamNode");
m_reflectCamNode->setTranslation(camStartPosition.x, -camStartPosition.y, camStartPosition.z);

camPitchNode = gp::Node::create();
gp::Matrix::createLookAt(m_reflectCamNode->getTranslation(), gp::Vector3::zero(), gp::Vector3::unitY(), &m);

camera = gp::Camera::createPerspective(45.f, gp::Game::getInstance()->getAspectRatio(), 0.1f, 150.f);

This is pretty much a duplicate of the scene camera creation code although, crucially, the Y component of the start position vector is negated, so that the initial LookAt matrix is a reflection of the scene camera's. Next we need to modify the mouse move event so that the new camera's pitch movement is the inverse of the main camera's, while the yaw remains the same.
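The post doesn't include the handler listing at this point, so here is a minimal sketch of the idea only, assuming the node names from the snippet above plus a hypothetical m_reflectCamPitchNode member for the reflection camera's pitch node, and a hypothetical rotate speed (use whatever values the scene camera already uses):

```cpp
//sketch only: function name, member names and speed are assumptions,
//not the article's exact code
void WaterSample::updateCameraRotation(float deltaX, float deltaY)
{
    const float rotateSpeed = 0.005f; //hypothetical value
    float yaw = -deltaX * rotateSpeed;
    float pitch = -deltaY * rotateSpeed;

    //both cameras yaw in the same direction
    m_cameraNode->rotateY(yaw);
    m_reflectCamNode->rotateY(yaw);

    //the reflection camera pitches in the opposite direction
    m_camPitchNode->rotateX(pitch);
    m_reflectCamPitchNode->rotateX(-pitch);
}
```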


And, of course, we need to make sure that it follows the translation of the main camera in the update() function:

auto position = m_cameraNode->getTranslation();
position.y = -position.y + m_waterHeight * 2.f;
m_reflectCamNode->setTranslation(position);

while making sure the Y position is reflected about the water plane by negating it and adding the plane height multiplied by two. Now when drawing the scene to the reflection buffer we can switch cameras: make a copy of the active scene camera, set the reflection camera as the new active scene camera, render the reflected scene, then restore the original camera before drawing the final pass.
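As a sketch (m_reflectBuffer and the drawScene visit callback are assumed names carried over from the earlier parts), the camera swap in the render function might look like:

```cpp
//keep hold of the scene's default camera while we swap it out
auto defaultCamera = m_scene->getActiveCamera();
defaultCamera->addRef();
m_scene->setActiveCamera(m_reflectCamNode->getCamera());

//draw the scene into the reflection buffer with the mirrored camera
m_reflectBuffer->bind();
gp::Game::getInstance()->clear(gp::Game::CLEAR_COLOR_DEPTH, gp::Vector4::zero(), 1.f, 0);
m_scene->visit(this, &WaterSample::drawScene);

//restore the original camera before the final pass
m_scene->setActiveCamera(defaultCamera);
defaultCamera->release();
```

The addRef() / release() pair guards against the scene dropping its only reference to the default camera while the reflection camera is active.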
The preview window for the reflection buffer now displays the edges of the pond as seen from below by the reflection camera, ready to be projected onto the water plane. In part two of the article the last step was to project the refraction buffer onto the plane via the water shader. We need to do the same thing again, only this time we are using a different camera, so we need the corresponding WorldViewProjection matrix to generate the texture coordinates. While the reflection camera is active we can store the plane's WorldViewProjection matrix in a member variable:

m_worldViewProjectionReflection = m_scene->findNode("Water")->getWorldViewProjectionMatrix();

It's important that we do this here because *the matrix is only valid while the reflection camera is active*, which is why we store it in a member variable. Adding a private function which returns a const reference to m_worldViewProjectionReflection will then allow us to bind it to the water shader in the same way as the other shader-bound variables which, hopefully, you are now familiar with. All that's left to do, then, is to modify watersample.vert and watersample.frag with uniforms for the new projection matrix and the reflection buffer sampler, in the same way we added the refraction buffer previously.
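For example, assuming the getter is named getReflectionMatrix() (a hypothetical name), the binding could be sketched with Gameplay's bindValue() mechanism when the water material is set up:

```cpp
//private member function returning the stored matrix (hypothetical name)
const gp::Matrix& WaterSample::getReflectionMatrix() const
{
    return m_worldViewProjectionReflection;
}

//when initialising the water plane's material: bind the getter so the
//uniform is refreshed from the member variable every frame
waterMaterial->getParameter("u_worldViewProjectionReflectionMatrix")->bindValue(
    this, &WaterSample::getReflectionMatrix);
```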

In the vertex shader:

uniform mat4 u_worldViewProjectionReflectionMatrix;
varying vec4 v_vertexReflectionPosition;


v_vertexReflectionPosition = u_worldViewProjectionReflectionMatrix * a_position;

and in the fragment shader we declare the sampler uniform and sample the reflection texture with the new coordinates:

uniform sampler2D u_reflectionTexture;

textureCoord = fromClipSpace(v_vertexReflectionPosition);
vec4 reflectionColour = texture2D(u_reflectionTexture, textureCoord);

To see the result we can assign reflectionColour directly to gl_FragColor. Notice how, because we projected the texture as if it were seen from the reflection camera, the image is automatically flipped! You should have something which looks like a flat, glossy mirror, albeit with some slight artifacting due to the lower-resolution render buffer.

Now we are most of the way there. The only things left to do are to blend the reflection and refraction maps in the watersample fragment shader, and add some animated waves to make the scene look a bit more natural. I will cover that in the next, and final, part of this article.

Part Four

Eric Pacelli
Lauris Kaplinski
Riemer's XNA page

Source Code:
Github page

Previous Parts:
Part One
Part Two