Friday, 12 September 2014

Water in OpenGL and GLES 2.0: Part 4 - Blending it all together

If you've been following the previous three parts of this article then by now you must be itching to see how the fruits of your labour are going to look, so let's dive right in. To get a hint of the final outcome we can modify the watersample.frag file so that gl_FragColor is a straight blend of the reflection and refraction images:

gl_FragColor = mix(refractionColour, reflectionColour, 0.5);

This performs a 50/50 mix of the two images, with a final result which looks like a slightly odd frozen lake.



This is nice, but we can do better! For one thing, the amount of blending each fragment receives should vary with the angle between the camera's eye position and any given point on the water plane. The more directly we look at the water, the more transparent it should appear and the more of the refraction image should show through; conversely, the shallower the angle of observation, the more reflective the surface should be. This is done by approximating the Fresnel term, a floating point value calculated per fragment, which replaces the constant 0.5 in the mix() function. There are a variety of methods for doing this, all of which (as far as I can tell) require a vector representing the water's surface normal at any given point - so that we can measure the angle between the camera's viewpoint and the fragment by taking the dot product of the eye vector with the normal. To start with we could use a single up-facing vector to represent the entire plane, but this is a good opportunity to add some extra detail to the water's surface.
    Using a normal map we can store a whole range of normal vectors, mapped across the surface of the plane, each representing a slightly different angle and producing a perturbation of the surface. As an added bonus the red and green channels of the normal map can be used to create a slight distortion in both the reflection and refraction images, adding another level of detail.
    To map the normal texture to the water plane we need to make some modifications to the watersample shaders. First add the texture coordinate attribute a_texCoord to the vertex shader, which is passed in automatically by Gameplay, and in the main() function assign its value directly to a new varying, v_texCoord, so that it is available in the fragment shader. As well as adding the new v_texCoord to the fragment shader, we also need a sampler uniform u_normalMap so that we can pass in the normal texture. To bind the actual texture to the uniform we don't need to touch the project's code at all: Gameplay provides a nice auto-binding mechanism, allowing us to pass the texture in simply by editing the watersample.material file. Add

sampler u_normalMap
{
        mipmap = true
        wrapS = REPEAT
        wrapT = REPEAT
        minFilter = LINEAR_MIPMAP_LINEAR
        magFilter = LINEAR

        path = res/images/water_normal.png

}

to the water material definition, or look at the article source code for part four. Assuming the path points to a valid image file, the texture will be loaded and bound to the shader automatically when the program starts. Once this is all set up we can return to the fragment shader and start using the normal data stored in the texture.
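For reference, the shader-side changes described above amount to only a few lines. Here's a minimal sketch using the names mentioned above (the article source may differ slightly in layout):

//watersample.vert
attribute vec2 a_texCoord;
varying vec2 v_texCoord;

//in main()
v_texCoord = a_texCoord;

//watersample.frag
varying vec2 v_texCoord;
uniform sampler2D u_normalMap;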
    Immediately in the main() function we sample the normal map, expanding the result from the texture's 0.0 - 1.0 range into a signed, normalised vector:

vec4 normal = texture2D(u_normalMap, v_texCoord * textureRepeat);
normal = normalize(normal * 2.0 - 1.0);

textureRepeat is a constant which allows tiling of the texture to better fit the water plane: set it to 2.0 to make the texture repeat twice in both the S and T directions, 12.5 to make it repeat 12.5 times, and so on. Before we start calculating any reflection and blend parameters, let's add some distortion to the output.

//distortion offset
vec4 dudv = normal * distortAmount;
    

//refraction sample
vec2 textureCoord = fromClipSpace(v_vertexRefractionPosition) + dudv.rg;
textureCoord = clamp(textureCoord, 0.001, 0.999);

distortAmount reduces the amount of distortion added, as too much can easily ruin the effect; it is typically a small value such as 0.05. The red and green values of dudv are added to the texture coordinates, offsetting them slightly, before the coordinates are clamped to a reasonable range. The refraction texture is then sampled in the normal way with the newly offset coordinates, and the process is repeated for the reflection texture. The output should now be a nice wavy, distorted image (assuming you're using the normal map texture supplied with the article source - although any normal map texture will do).
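Repeating the process for the reflection texture looks like this - a short sketch using the sampler and varying names introduced in parts two and three:

vec4 refractionColour = texture2D(u_refractionTexture, textureCoord);

//reflection sample, offset by the same distortion
textureCoord = fromClipSpace(v_vertexReflectionPosition) + dudv.rg;
textureCoord = clamp(textureCoord, 0.001, 0.999);
vec4 reflectionColour = texture2D(u_reflectionTexture, textureCoord);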



After the reflection and refraction textures have been sampled we are ready to approximate the Fresnel value and use it to blend the textures together. To do this we need the eye position relative to the current vertex, so that we can take the dot product of it with the current normal value. The watersample vertex shader needs two new uniform variables

uniform mat4 u_worldMatrix;
uniform vec3 u_cameraPosition;

and a new varying

varying vec3 v_eyePosition;

so that the calculated position can be passed along to the fragment shader. Gameplay provides the worldMatrix and cameraPosition values for us as standard, and we can auto bind these in the material file the same way as we did the normal map, which saves having to modify the project code:

u_worldMatrix = WORLD_MATRIX
u_cameraPosition = CAMERA_WORLD_POSITION

Then, in the main() function of the vertex shader, we can calculate the eye position

v_eyePosition = u_cameraPosition - (u_worldMatrix * a_position).xyz;

With the eye position available in the fragment shader we can begin to use it to calculate the Fresnel value. Before we can use it, however, the eye position needs to be converted to the tangent space coordinates used by the normal map (or we could use an object space normal texture - but that would upset the distortion factor). Because the water plane is fixed horizontally we can use a set of constant vectors to represent the plane's normal, tangent and bitangent (if the plane were oriented any other way we'd have to pass these values in, either as an attribute or a uniform), and use them to move the eye position into tangent space

const vec4 tangent = vec4(1.0, 0.0, 0.0, 0.0);
const vec4 viewNormal = vec4(0.0, 1.0, 0.0, 0.0);
const vec4 bitangent = vec4(0.0, 0.0, 1.0, 0.0);


vec4 viewDir = normalize(vec4(v_eyePosition, 1.0));
vec4 viewTanSpace = normalize(vec4(dot(viewDir, tangent), dot(viewDir, bitangent), dot(viewDir, viewNormal), 1.0));

then create a reflected vector of the view and dot it with the normal to get our approximated Fresnel term

vec4 viewReflection = normalize(reflect(-1.0 * viewTanSpace, normal));
float fresnel = dot(normal, viewReflection);

we now have our value to feed into the mix function:

gl_FragColor = mix(reflectionColour, refractionColour, fresnel);

Load up the scene and you should see the water really beginning to take shape. Moving around the scene you'll notice the blending of the reflection and refraction maps changes to match your view. One thing still isn't right, though: the water remains apparently frozen. We can change this with a simple new uniform in the fragment shader

uniform float u_time;

This is simply a floating point value which increases over time. In the article's source folder there is a small utility class called Timer, which abstracts the Gameplay clock, although you can use getGameTime() directly if you prefer. Create a private const function to return its value, preferably divided by some amount (else the animation will run waaay too fast), and use it to bind the elapsed time to the new shader uniform.
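A minimal sketch of that binding - note the accessor name and the divisor are my own assumptions, not necessarily what the article source uses:

float WaterSample::m_getTime() const
{
    //divide the millisecond clock down, else the scrolling is far too fast
    return static_cast<float>(gp::Game::getGameTime()) / 20000.f;
}

//in initialise()
waterMaterial->getParameter("u_time")->bindValue(this, &WaterSample::m_getTime);

In the fragment shader, add the time to the coordinates of the normal map look up: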

vec4 normal = texture2D(u_normalMap, v_texCoord * textureRepeat + u_time);

This will have the effect of offsetting the normal map texture, scrolling it across the surface of the plane, and creating a simple yet pleasing animation. If you get odd stretched lines across the surface make sure to check that the sampler settings in your water material have wrapS and wrapT set to repeat.

That pretty much sums up what I set out to describe in this article, but there is plenty more which could be added to improve the effect. For instance no lighting is taken into account in the fragment shader which, once added, could be used in conjunction with the normal map to calculate specular highlights on the surface of the water. The water also looks very clean: it is entirely possible to calculate the depth of the water and blend it with a colour so that it appears darker and murkier the deeper you go.



Here's a short video of the final version of the project, showing the water effect running on my Moto G with Android 4.4.2.


References:
Eric Pacelli
Lauris Kaplinski
Riemer's XNA page

Source Code:
Github page

Previous Parts:
Part One
Part Two
Part Three


Thursday, 11 September 2014

Water in OpenGL and GLES 2.0: Part 3 - Reflection

Continuing from the previous part of this article on creating a water effect in Gameplay3D, in this part we'll cover creating reflections on the surface of the water. It is important that you have read and completed part two, and that you have the refraction buffer drawing, previewed, and projected on to the water plane, because before we can continue we need to replicate the refraction buffer with a new member *m_reflectionBuffer, as well as a new sprite batch *m_reflectBatch to draw the preview. Add these to the project, initialise them in the initialise() function, release and delete them in the finalise() function, and update the render() function so that the scene is drawn to the new reflection buffer and the reflection buffer preview is drawn next to the refraction buffer's preview window - all in the same way as the refraction buffer.
    Once you have the scene set up we can start to modify the process, so that instead of getting a duplicate of the refraction buffer, we actually get a reflection. Firstly modify the clip plane settings in the render function right before drawing the reflection buffer:

m_clipPlane.y = 1.f;
m_clipPlane.w = -m_waterHeight;

By inverting the normal direction and the plane height, the plane now faces the opposite direction. When you compile and load the scene you should see in the preview window that the grass is kept, and that the bottom of the pond is clipped away instead. This is because we want to reflect the scene as it appears above the water. Next we need to consider how to invert the image vertically, as a reflection would appear in the water. A reflection isn't simply the camera's image turned upside down, however. What we see is, in fact, what would be seen by a camera below the water plane, targeted at the same point as the scene's camera:


If the scene's main camera is camera A, then the reflection it sees is the same as if the scene were viewed from camera B. If you've been reading the reference articles linked at the bottom of these posts, you'll have seen that each one offers its own implementation of this camera setup. If we were using raw OpenGL the preferable way would be to use a reflection matrix but, as this article is based around the Gameplay framework, that option is not particularly viable. An alternative would be to scale the entire scene by -1 in the Y axis during the reflection pass, which is possible, but has the drawback that the WorldViewProjection matrix cannot easily be stored (more on this shortly). Finally we could create a second camera in place of camera B on the diagram, by taking the forward and right vectors of the scene camera, computing the cross product of the two to find the up vector, and using them to compute a new LookAt matrix each frame to orient camera B in the right direction. The latter seems a little heavy to do each frame, so I settled on (perhaps controversially) creating a second camera in the scene and having it follow the movements of the main camera, only mirrored about the water plane. In the initialise() function, directly after creating the scene camera:

//add a second camera to draw the reflections
m_reflectCamNode = gp::Node::create("reflectCamNode");
m_reflectCamNode->setTranslation(camStartPosition.x, -camStartPosition.y, camStartPosition.z);
 

camPitchNode = gp::Node::create();
gp::Matrix::createLookAt(m_reflectCamNode->getTranslation(), gp::Vector3::zero(), gp::Vector3::unitY(), &m);
camPitchNode->rotate(m);
m_reflectCamNode->addChild(camPitchNode);
 

camera = gp::Camera::createPerspective(45.f, gp::Game::getInstance()->getAspectRatio(), 0.1f, 150.f);
camPitchNode->setCamera(camera);
SAFE_RELEASE(camera);
SAFE_RELEASE(camPitchNode);

This is pretty much a duplicate of the scene camera creation code although, crucially, the Y component of the start position vector is negated, so that the initial LookAt matrix is a reflection of the scene camera's. Next we need to modify the mouse move event, so that the new camera's pitch movement is the inverse of the main camera's, while the yaw remains the same.

m_reflectCamNode->rotateY(xMovement);
m_reflectCamNode->getFirstChild()->rotateX(-yMovement);

And, of course, we need to make sure that it follows the translation of the main camera in the update() function

auto position = m_cameraNode->getTranslation();
position.y = -position.y + m_waterHeight * 2.f;
m_reflectCamNode->setTranslation(position);

while making sure the Y position is reflected about the water plane, by negating it and adding twice the plane height. Now, when drawing the scene to the reflection buffer, we can switch cameras: keep a copy of the active scene camera, make the reflection camera the new active camera, render the reflected scene, then restore the original camera before drawing the final pass.
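In render() the swap looks something like this - a minimal sketch; holding an extra reference on the original camera while it is swapped out is my own precaution rather than something the article prescribes:

//swap to the reflection camera for the reflection pass
auto defaultCamera = m_scene->getActiveCamera();
defaultCamera->addRef(); //hold it while it isn't the active camera
m_scene->setActiveCamera(m_reflectCamNode->getFirstChild()->getCamera());

//...draw the scene to the reflection buffer here...

m_scene->setActiveCamera(defaultCamera);
defaultCamera->release();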
    The preview window for the reflection buffer displays the edges of the pond as seen from below, from the viewpoint of the reflection camera, and is ready to be projected onto the water plane. In part two of the article the last step was to project the refraction buffer on to the plane via the water shader. We need to do the same thing again, only this time we are using a different camera, so we need to use the corresponding WorldViewProjection matrix to generate the texture coordinates. While the reflection camera is active we can store the plane's WorldViewProjection matrix in a member variable

m_worldViewProjectionReflection = m_scene->findNode("Water")->getWorldViewProjectionMatrix();

It is important that we do this here because *the matrix is only valid while the reflection camera is active*, which is why we store it in a member variable. Adding a private function which returns a const reference to m_worldViewProjectionReflection will then allow us to bind it to the water shader in the same way as the other shader-bound variables which, hopefully, you should now be familiar with. All that's left to do, then, is modify watersample.vert and watersample.frag with uniforms for the new projection matrix and the reflection buffer sampler, in the same way in which we added the refraction buffer previously.
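The accessor and binding amount to a couple of lines - a sketch, with the function name being my own placeholder:

const gp::Matrix& WaterSample::m_getReflectionMatrix() const
{
    return m_worldViewProjectionReflection;
}

//in initialise()
waterMaterial->getParameter("u_worldViewProjectionReflectionMatrix")->bindValue(this, &WaterSample::m_getReflectionMatrix);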

In the vertex shader:

uniform mat4 u_worldViewProjectionReflectionMatrix;
varying vec4 v_vertexReflectionPosition;

and

v_vertexReflectionPosition = u_worldViewProjectionReflectionMatrix * a_position;

and in the fragment shader we sample the reflection texture with the new coordinates

textureCoord = fromClipSpace(v_vertexReflectionPosition);    
vec4 reflectionColour = texture2D(u_reflectionTexture, textureCoord);

To see the result we can assign reflectionColour directly to gl_FragColor. Notice how, because we projected the texture as if it were from the reflection camera, the image is automatically flipped! You should have something which looks like a flat, glossy mirror, albeit with some slight artifacting due to the lower resolution render buffer.


Now we are most of the way there. The only things left to do are to blend the reflection and refraction maps in the watersample fragment shader, and add some animated waves to make the scene look a bit more natural. I will cover that in the next, and final, part of this article.

Part Four

References:
Eric Pacelli
Lauris Kaplinski
Riemer's XNA page

Source Code:
Github page

Previous Parts:
Part One
Part Two

Wednesday, 10 September 2014

Water in OpenGL and GLES 2.0: Part 2 - Refraction

In the first part of this article I outlined the technique for creating a water effect in OpenGL / GLES which is cheap enough to run on a range of mobile devices. This part of the article looks at the first step toward implementing the effect: rendering the refraction texture.
    This requires everything in the scene below the water level to be rendered to a frame buffer - an off screen render target - whose texture can then be used to feed a sampler uniform in the water material's fragment shader. The fragment shader can then project this texture on to the plane in the scene, while also blending and distorting it to create the illusion of water. I'll assume that you have read the first part of the article, and have the example scene set up in your editor of choice, and that you also have the accompanying source code from github.

First let's set up a render buffer to draw the refraction data to, and a sprite batch so that we can preview the buffer's contents on screen. Gameplay provides a FrameBuffer class, and a SpriteBatch class, which we'll use for the task. Add two member variables *m_refractBuffer and *m_refractBatch, and then initialise them in the initialise() function:

m_refractBuffer = gp::FrameBuffer::create("refractBuffer", bufferSize, bufferSize);
    

auto refractDepthTarget = gp::DepthStencilTarget::create("refractDepth", gp::DepthStencilTarget::DEPTH, bufferSize, bufferSize);
m_refractBuffer->setDepthStencilTarget(refractDepthTarget);
SAFE_RELEASE(refractDepthTarget);

m_refractBatch = gp::SpriteBatch::create(m_refractBuffer->getRenderTarget()->getTexture());

Don't forget to release the frame buffer with SAFE_RELEASE in the finalise() function, as well as delete the sprite batch with SAFE_DELETE. Notice how we can use the frame buffer's texture to create the sprite batch, which is useful for previewing the buffer's contents. If you've been studying the article's source code, you'll have noticed that the frame buffer's size is not in fact the same as that of the main window, nor even the same aspect ratio. I deliberately chose 512 x 512 as the buffer size because many mobile devices only support power of two texture dimensions. Having experimented on a few Android devices, I've found that there's a good chance the water will simply appear as a black, empty hole when using textures or frame buffers with non-power of two dimensions. On the other hand you can probably use any resolution you like if you're targeting modern desktop hardware, with the advantage that the quality of the effect will be much greater if the buffer resolution matches that of the render window.
    Once the buffer is set up we need to draw the scene to it. In the render() function, before the call to clear() add:

//update the refract buffer
auto defaultBuffer = m_refractBuffer->bind();
auto defaultViewport = getViewport();
setViewport(gp::Rectangle(bufferSize, bufferSize));
 

clear(CLEAR_COLOR_DEPTH, clearColour, 1.0f, 0);
m_scene->visit(this, &WaterSample::m_drawScene, false);

Calling bind() on the buffer calls the internal OpenGL bind function, meaning that any drawing we do now will happen on the refraction frame buffer, because it is the currently bound object. We store the result from bind() because it returns a pointer to the previously active buffer (the main window), which we need so we can restore it immediately after updating the refraction buffer; the previous viewport is stored for the same reason.
    Because this is the refraction pass of the effect, we don't actually want the water plane rendered on the refraction buffer. Adding a boolean parameter to the m_drawScene() function allows us to decide whether or not the water plane is included during the scene visit. Now we can clear() the buffer and visit() the scene, so that the scene is rendered to the refraction buffer, remembering to pass false as a parameter to visit(). When this is done, restore the previous buffer by calling its bind() function, and restore the viewport. Then we can draw the scene to the main window normally, including the water plane, by passing true to the scene's visit() function.
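m_drawScene() itself needs only a small change to honour that flag. A sketch, assuming the wiki's visit pattern, the "Water" node id from the scene file, and strcmp from <cstring>:

bool WaterSample::m_drawScene(gp::Node* node, bool drawWater)
{
    auto model = node->getModel();
    if (model)
    {
        //skip the water plane when drawing the off screen passes
        if (!drawWater && strcmp(node->getId(), "Water") == 0)
            return true;
        model->draw();
    }
    return true;
}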
    After drawing the scene, we can use the sprite batch to draw a small preview of the refraction buffer:

if (m_showBuffers)
{
    m_refractBatch->start();
    m_refractBatch->draw(gp::Vector3(0.f, 4.f, 0.f), gp::Rectangle(bufferSize, bufferSize), gp::Vector2(426.f, 240.f));
    m_refractBatch->finish();

}

The parameters to the sprite batch draw() function allow us to define the source and destination rectangles of the refraction buffer's texture, as well as the scale. This is fortunate because it means that, even though the frame buffer has a resolution of 512 x 512, we can size and stretch the image to anything we like, as well as place it in the top left hand corner of the screen. m_showBuffers is a boolean member which can be toggled via keyboard input, providing the option to hide the preview. In the example source code I've chosen to use the space bar. Compile and run the program and you should see the now familiar scene, with a slightly smaller version drawn in the corner:



Now that the rendering and preview window are set up, it's time to modify the shader used to render the textured part of the scene, so that we can clip everything above the water level. GLES doesn't support glClipPlane, but we can still clip the output in the fragment shader, using the equation Ax + By + Cz + D = 0 to represent the plane. To get the height of the water plane, find the Water node using the scene's findNode() function right after loading the scene in initialise(). The height is the node's Y translation, which we can store in a member variable m_waterHeight. Next add a four component vector (Vector4) member m_clipPlane. This will be used to store the plane description according to our equation, and to pass it to the fragment shader. We need a member variable here because the plane needs to be set to zero when rendering the main scene, so that the main scene doesn't get clipped (and, later, reversed when rendering the reflection pass). This way we can bind the vector to a uniform in the shader via a private member function which simply returns a const reference to m_clipPlane.
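That accessor is all of three lines - a sketch (the name m_getClipPlane matches the summary further down):

const gp::Vector4& WaterSample::m_getClipPlane() const
{
    return m_clipPlane;
}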
    In the render function, before drawing the refraction pass add:

m_clipPlane.y = -1.f;
m_clipPlane.w = m_waterHeight;

This describes our water plane as facing downwards (the first three components represent a normal vector pointing from the face of the plane), with the fourth component describing the height in world units. As a quick sanity check: dotting a point at world height y with this plane gives m_waterHeight - y, which is positive below the water line and negative above it - exactly the sign test the shader will perform later. While the vector is set to this value, clipping will be performed by the shader using this plane. As we don't want to clip any of the main scene, reset the plane

m_clipPlane = gp::Vector4::zero();

before drawing it.
    You won't see any clipping yet, however, as we need to modify the default Textured vertex and fragment shaders provided by Gameplay. In order to make plane clipping optional I took advantage of the define system Gameplay uses, so that adding CLIP_PLANE to the defines line of the watersample.material file enables clipping on meshes which use the Textured material. In the vertex shader we need to add two new uniforms

#if defined (CLIP_PLANE)
uniform mat4 u_worldMatrix;
uniform vec4 u_clipPlane;
#endif


so we can pass the clip plane into the shader. We also need a new varying so that the calculated clip distance can be passed to the fragment shader:

#if defined(CLIP_PLANE)
varying float v_clipDistance;
#endif


and, finally, in the main function:

#if defined(CLIP_PLANE)
v_clipDistance = dot(u_worldMatrix * position, u_clipPlane);
#endif

Taking the dot product of the current vertex in world space with the clip plane returns the signed distance from the plane to the vertex, interpolated across the triangle for each fragment. This means that, after adding the corresponding varying variable to the fragment shader, it only takes a quick comparison to check whether or not to discard the current fragment:

#if defined(CLIP_PLANE)
if(v_clipDistance < 0.0) discard;

#endif

The modified shader will now use the plane described in u_clipPlane to decide where the current fragment lies, and discard it if necessary. This test is done right at the beginning of the fragment shader's main function, as there's no point doing any other processing on a fragment if it is to be discarded. Finally we need to bind the value of m_clipPlane in the project's code to the shader's u_clipPlane uniform. In the initialise() function after loading the scene, find the Ground node's model's material

auto groundMaterial = m_scene->findNode("Ground")->getModel()->getMaterial();

and then bind the function we created earlier to the u_clipPlane parameter like so:

groundMaterial->getParameter("u_clipPlane")->bindValue(this, &WaterSample::m_getClipPlane);

Phew. That's a lot to get your head around in one go. If you're a bit lost here's a brief rundown of what we did:

  • Stored the water plane's height by retrieving it from the scene node
  • Created a four component vector m_clipPlane to describe the clipping plane
  • Added a private function m_getClipPlane() which returns a const reference to m_clipPlane
  • Updated the clip plane's parameters during rendering
  • Modified the Textured vertex and fragment shaders to discard fragments based on the plane's value
  • Bound m_clipPlane's value to the shader in initialise() via the new function m_getClipPlane()

If you're still a little lost study the article's source code carefully. We'll be using most of this again later, when rendering the reflection buffer. If all went well compiling and running the project should present you with something similar to this:


    Now that the buffer is ready, we want to draw it on the water plane itself, which we can do by projecting it via the camera's WorldViewProjection matrix. In the article's source folder there are two shaders: watersample.frag and watersample.vert. The water material definition in watersample.material has also been updated to use these new shaders.
    At the moment the watersample vertex shader is pretty simple. It takes the incoming vertex position, multiplies it by the current WorldViewProjection matrix and assigns it to gl_Position, which is standard GLSL. It also assigns the value to a varying, v_vertexRefractionPosition, so that it is passed to the fragment shader. The fragment shader has a single sampler uniform to which we bind the texture of the refraction frame buffer in initialise(), right below where we bound m_clipPlane to the textured material:

auto waterMaterial = m_scene->findNode("Water")->getModel()->getMaterial();
auto refractSampler = gp::Texture::Sampler::create(m_refractBuffer->getRenderTarget()->getTexture());
waterMaterial->getParameter("u_refractionTexture")->setSampler(refractSampler);
SAFE_RELEASE(refractSampler);

The incoming vertex coordinates are in clip space, with a range of -1.0 to 1.0, and first need to be converted into normalised device coordinates (0.0 to 1.0), which is performed by the function fromClipSpace() in the fragment shader.
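The conversion is just a perspective divide followed by a scale and bias. A minimal sketch of what fromClipSpace() boils down to (the article source may differ in detail):

vec2 fromClipSpace(vec4 position)
{
    //perspective divide, then map -1.0 - 1.0 into 0.0 - 1.0
    return (position.xy / position.w) / 2.0 + 0.5;
}

The coordinates can now be used to sample the texture in the normal way, via texture2D(). For now we just output the result directly to gl_FragColor, which results in this: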



The texture from the refraction frame buffer is rendered onto the water plane as if it were projected from the camera (if you've done shadow mapping before then you've done projective mapping - this is the same principle, only we're projecting the image from the camera rather than from a light source). This doesn't look very impressive, possibly even a little worse, as the lower resolution of the refraction buffer has blurred the output slightly, but we've made an important step towards the water effect. More importantly we've updated the Textured shader and begun a new water shader, learning about texture projection along the way. These techniques are integral, and will be built upon in the next part: rendering the reflection pass.

Part Three

References:
Eric Pacelli
Lauris Kaplinski
Riemer's XNA page

Source Code:
Github page

Previous Parts:
Part One

Tuesday, 9 September 2014

Water in OpenGL and GLES 2.0: Part 1 - Introduction

In an effort to bolster content on the blog I've been working on a four part article describing a water effect rendered in OpenGL, aimed mainly at games. There are a few sources on this topic already, many of which I referred to while writing the article, but I hope this particular instance will stand out in a couple of ways. Firstly, this water effect is compatible with GLES 2.0, meaning it runs on a fairly wide selection of mobile devices. Secondly, I wrote it specifically with the Gameplay3D framework in mind, although the techniques should be portable to other software / libraries. For those of you unaware, Gameplay3D is a cross-platform framework written in C++ with the aim of supporting the creation of games, particularly on mobile devices. It's quite mature now, actively developed by a group of professional developers at Blackberry, and is open source under the Apache 2.0 license. The main site is here, and the repository can be found on github. The 'next' branch is, in my opinion, stable enough to use, and takes advantage of all the bug fixes since the current 2.0 release.
    In this first part I won't go into setting up a new Gameplay project, as everything you need to know can be found on the official wiki. Instead I'll skip ahead to setting up a 3D scene within which the water effect can be developed, before outlining the general theory behind what this article is trying to achieve. I have made the full source code for the project (bar the Gameplay library itself) available on github, which is also linked at the bottom of the page. Actually implementing the effect is covered over the next three parts.

Assuming you have a new Gameplay project set up, you'll need to grab the article's source code from the repository. If you're developing on Windows, Gameplay uses Visual Studio 2013 by default, which offers pretty decent C++11 support. I have chosen to take advantage of this in the article source code, so if you plan to compile any of it on another platform you will need to make sure C++11 is available. I have tested GCC 4.8 on Linux, and clang on Android, and found that they both work. If you want to use OSX or iOS you'll have to experiment yourself, but I am led to believe C++11 is supported. Once your environment is set up and you have an empty template project, you can either replace it with the WaterSample.h and WaterSample.cpp from the source, or start developing your own alongside the article, using the source as a reference. I'll not be going through all of the code line by line, so if something appears to be missing from the explanation it is worth looking at the article source code. The repository also contains all the assets and resources used in this article, if you don't want to create your own.

Gameplay uses manual reference counting of shared objects so, in order to prevent memory leaks, it is important to make sure all of the objects in the code are allocated and released properly. The default Gameplay project for Visual Studio includes a DebugMem build which I highly recommend using. If any resources are not properly freed on program exit then, when using this build configuration, any reference counted objects still in memory will be reported in the debug window. The general rule of thumb is that any objects created with a Class::create() function must be freed or have their reference count updated with SAFE_DELETE or SAFE_RELEASE. Any pointers retrieved via find or get functions do not need to be updated, however.
    Probably the most crucial functions in any Game derived Gameplay class are the initialise() and finalise() functions. These are where resources which live as members of the class should be created and destroyed, and where we'll load our scene. To set up the article scene make sure that the pond.gpb and water_sample.png files from the article source are copied to somewhere in the res/ folder of your project's working directory. pond.gpb is an optimised binary file containing the two nodes making up the scene, and water_sample.png is used to texture them. You also need to make sure to copy the watersample.scene and watersample.material files to the res/ directory, as these tell the framework how to load the nodes from the binary file and how to apply the texture. You also need to copy the default set of shader files provided with Gameplay (if the setup hasn't already), or edit the material file to point to the correct directory. You can read more about configuration files on the Gameplay wiki.

The scene lives as a member of the class, m_scene, and is loaded in the initialise() function with

m_scene = gp::Scene::load("path/to/watersample.scene");

To avoid leaks remember to add

SAFE_RELEASE(m_scene);

to finalise(). Get used to this, as any class members created in initialise() will need to be released in finalise(). Objects local to initialise(), however, should be released with SAFE_RELEASE as soon as they are no longer needed. To be able to view the scene we need a camera node with a camera attached to it; this camera will also allow navigation within the scene. The camera and its nodes are created in initialise():

m_cameraNode = gp::Node::create("cameraNode");
m_cameraNode->setTranslation(camStartPosition);

auto camPitchNode = gp::Node::create();
gp::Matrix m;
gp::Matrix::createLookAt(m_cameraNode->getTranslation(), gp::Vector3::zero(), gp::Vector3::unitY(), &m);
camPitchNode->rotate(m);
m_cameraNode->addChild(camPitchNode);
m_scene->addNode(m_cameraNode);

auto camera = gp::Camera::createPerspective(45.f, gp::Game::getInstance()->getAspectRatio(), 0.1f, 150.f);
camPitchNode->setCamera(camera);
m_scene->setActiveCamera(camera);
SAFE_RELEASE(camera);
SAFE_RELEASE(camPitchNode);


Notice m_cameraNode is a member variable, and so will need to be released in finalise(). camera and camPitchNode, however, only exist locally and so are released as soon as we are done modifying them. The camera makes use of two nodes: m_cameraNode allows the camera to be yawed (that is, rotated around the Y axis) as well as translated in the scene, while its child node camPitchNode is used to pitch the camera up and down.
    To be able to actually see the scene on screen we need to add a call to clear() to the render() function, before visiting the scene with

m_scene->visit(this, &WaterSample::m_drawScene);

This function takes a pointer to the m_drawScene() member function, which it then calls on each node in the scene; the implementation is taken directly from the Gameplay wiki. With this added we have the bare minimum to compile and run the project, which should display something like this (the view angle may differ depending on how your camera was initialised):

The scene contains a textured mesh, which is attached to one of the scene nodes, and a flat blue plane attached to a second node, which will eventually become the water. Unfortunately we can't yet move the camera, which would be nice to have when we look at the water later on, so let's add that first.
    The Game class (from which our project is derived) also has a set of virtual functions used to handle input events. We'll override two of them, mouseEvent() and keyEvent(), and use them to rotate the camera and to modify a bitmask, m_inputMask. Then, in the update() function, we use the state of this bitmask to apply a force to m_cameraNode, moving it around the scene.
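Button is nothing more than a set of bit flags. A sketch of a plausible layout - the actual values are an assumption on my part:

struct Button
{
    enum
    {
        Forward = 0x1,
        Back    = 0x2,
        Left    = 0x4,
        Right   = 0x8
    };
};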

First event handling:
bool WaterSample::mouseEvent(gp::Mouse::MouseEvent evt, int x, int y, int wheelDelta)
{
    switch (evt)
    {
    case gp::Mouse::MOUSE_MOVE:
    {
        auto xMovement = MATH_DEG_TO_RAD(-x * mouseSpeed);
        auto yMovement = MATH_DEG_TO_RAD(-y * mouseSpeed);

        m_cameraNode->rotateY(xMovement);
        m_cameraNode->getFirstChild()->rotateX(yMovement);
    }
        return true;
    case gp::Mouse::MOUSE_PRESS_LEFT_BUTTON:
        m_inputMask |= Button::Forward;
        return true;
    case gp::Mouse::MOUSE_RELEASE_LEFT_BUTTON:
        m_inputMask &= ~Button::Forward;
        return true;
    case gp::Mouse::MOUSE_PRESS_RIGHT_BUTTON:
        m_inputMask |= Button::Back;
        return true;
    case gp::Mouse::MOUSE_RELEASE_RIGHT_BUTTON:
        m_inputMask &= ~Button::Back;
        return true;
    default: return false;
    }

    return false;
}


void WaterSample::keyEvent(gp::Keyboard::KeyEvent evt, int key)
{
    if (evt == gp::Keyboard::KEY_PRESS)
    {
        switch (key)
        {
        case gp::Keyboard::KEY_ESCAPE:
            exit();
            break;
        case gp::Keyboard::KEY_W:
        case gp::Keyboard::KEY_UP_ARROW:
            m_inputMask |= Button::Forward;
            break;
        case gp::Keyboard::KEY_S:
        case gp::Keyboard::KEY_DOWN_ARROW:
            m_inputMask |= Button::Back;
            break;
        case gp::Keyboard::KEY_A:
        case gp::Keyboard::KEY_LEFT_ARROW:
            m_inputMask |= Button::Left;
            break;
        case gp::Keyboard::KEY_D:
        case gp::Keyboard::KEY_RIGHT_ARROW:
            m_inputMask |= Button::Right;
            break;
        }
    }
    else if (evt == gp::Keyboard::KEY_RELEASE)
    {
        switch (key)
        {
        case gp::Keyboard::KEY_W:
        case gp::Keyboard::KEY_UP_ARROW:
            m_inputMask &= ~Button::Forward;
            break;
        case gp::Keyboard::KEY_S:
        case gp::Keyboard::KEY_DOWN_ARROW:
            m_inputMask &= ~Button::Back;
            break;
        case gp::Keyboard::KEY_A:
        case gp::Keyboard::KEY_LEFT_ARROW:
            m_inputMask &= ~Button::Left;
            break;
        case gp::Keyboard::KEY_D:
        case gp::Keyboard::KEY_RIGHT_ARROW:
            m_inputMask &= ~Button::Right;
            break;
        }
    }
}


And then the update() function:
void WaterSample::update(float dt)
{
    //move the camera by applying a force
    gp::Vector3 force;
    if (m_inputMask & Button::Forward)
        force += m_cameraNode->getFirstChild()->getForwardVectorWorld();
    if (m_inputMask & Button::Back)
        force -= m_cameraNode->getFirstChild()->getForwardVectorWorld();
    if (m_inputMask & Button::Left)
        force += m_cameraNode->getRightVectorWorld();
    if (m_inputMask & Button::Right)
        force -= m_cameraNode->getRightVectorWorld();

    if (force.lengthSquared() > 1.f) force.normalize();

    m_cameraAcceleration += force / mass;
    m_cameraAcceleration *= friction;
    if (m_cameraAcceleration.lengthSquared() < 0.01f)
        m_cameraAcceleration = gp::Vector3::zero();

    m_cameraNode->translate(m_cameraAcceleration * camSpeed * (dt / 1000.f));

}

Using the forward and right vectors of the camera nodes we can calculate a direction vector in world coordinates, which is then applied as a force using Newton's second law of motion: f = ma, or force = mass * acceleration - rearranged in the code above as acceleration = force / mass. The constants mass and friction can be found at the top of the .cpp file, in an anonymous namespace where I prefer to group any constant values. Compile and run the scene and you should find that you can now move the camera much like in a first person shooter, looking around with the mouse and moving with either the cursor keys or W, A, S and D.

So now that we have a scene set up and ready to get wet, let's take a moment to look at the theory behind the water effect. Firstly, this is purely a visual effect: no physics are involved. Secondly, it is meant to supplement the atmosphere of a game, particularly with mobile development in mind, so there'll be no interaction with the water. It is a relatively cheap effect, and can be taken much further beyond this article, as it provides the basis for more advanced effects such as those seen in game engines like Source.

The effect is composed of three scene renders per frame, two of which are drawn to off-screen buffers. The first render creates the refraction: everything in the scene above the water line is clipped, and the remaining fragments are rendered to a frame buffer.

The second render creates the reflection. This time the scene is clipped below the water line, and also inverted vertically.

Finally, the third pass renders the two images to the screen, blending them via a Fresnel-style term calculated from the current view position, and distorting them with a normal map to give the appearance of waves.

While this theory is pretty general, the implementation varies between platforms, languages and even libraries. In the second part of this article I'll explain how to set up a frame buffer in Gameplay, and use it to render the refraction pass. I'll also explain about how the image is projected onto the water plane of the scene via GLES compatible shaders. In part three I'll extend this technique to render the reflection pass, and in the final part cover blending the passes, as well as improving the overall effect with some basic animation.

Part Two

References:
Eric Pacelli
Lauris Kaplinski
Riemer's XNA page

Source Code:
Github page

Sunday, 3 August 2014

Z ordering of sprites in Tiled maps

Wow, it's been a while since I last posted, so this is long overdue. The past few months have flown by while I work on an Android project powered by Gameplay3D, which is a great (although, some may argue, incomplete - definitely worth checking out the 'next' branch) cross-platform framework that has allowed me to really get into mobile development. Oh, and there was ChufJS, an incomplete and buggy scene graph written in Javascript/WebGL, which was fun to make nonetheless. This post is about neither of those, though; what I'd like to do is address a Tiled map question which is often asked. My Tiled map loader for SFML, while producing varied results, is by far my most popular project, so it'd be remiss of me not to try and support it where I can. I should point out, of course, that these are by no means definitive answers, and are hopefully flexible enough to be applied to tile maps in general, not just the SFML Tiled map loader.
    So what's the deal? I've been asked on more than one occasion about the best way to have a player or other sprites move around on an RPG style map and be able to walk both behind and in front of the scenery (aka z-ordering, or z-depth ordering). I can think of two options, one slightly more convoluted than the other, which I'll try to explain. Any code I use will be pseudo-code, although written with C++ in mind (the second example relies on the STL).

    The first example is based on what many professional games do, and actually relies on an optical illusion created with a carefully designed tile set. The map needs to be set out in three layers (although you could probably combine the bottom two), comprising the background:


the scenery of which you want to walk in front:



and finally the top layer, the scenery behind which you'd like the player to appear:



The grey grid is visible only in Tiled, which I left in the screenshots to help illustrate the point. Notice the roof tops and tree tops are quite small. This is key to the effect as they need to be smaller (or shorter, at least) than the player sprite:



(by the way, I'm using a Pokemon tile set sent to me quite a long time back, and I don't have the original source to give credit - if anyone knows please tell me so I can update this post appropriately.)

Next comes the important part: setting up the collision detection. Here I have drawn a set of boxes on a Tiled object layer and marked them in red for better visibility. I made sure the option to snap the objects to the Tiled grid was switched on. The SFML map loader has built in functions for automatically creating solid objects from object layers, which you can read about here. The player has two collision points placed by its feet (also in red):

 



Drawing the sprite above the building layer means that, because the sprite's feet will never enter the collision box, the body will be able to pass over or in front of any solid areas when approaching from below. On the other hand, if the sprite layer is behind the roof top layer, the sprite will go behind the buildings, but only as far as the collision points on the feet allow. This prevents the illusion being broken by the player passing over the body of the house from the top, and also has the benefit of not letting the sprite get completely lost behind the house (I can only imagine losing your player on screen would be rather frustrating). The end result is something like this:


In my opinion this is probably the most effective and computationally cheap way of sorting sprite depth, and personally my preferred method.
   There is an option which can be performed in code, however, which I'll briefly touch upon. Firstly, I'm assuming the use of C++ here (specifically C++11), although I'm sure other languages have similar features available. The only part of the map drawn in Tiled is the background: all of the detail and player sprites are stored in some sort of container - for argument's sake, let's say a std::vector. This is already more computationally expensive because (in SFML at least) we have lost the use of vertex arrays, and using the default SFML sprite class will require an increase in draw calls, incurring a performance hit. This can be reduced a bit by culling all sprites not currently visible on screen and placing the visible sprites in their own temporary vector, as sketched below.
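A sketch of the culling step, assuming SFML-style types and a viewBounds rectangle derived from the current view (none of these names are prescribed):

std::vector<sprite*> getVisibleSprites()
{
    std::vector<sprite*> visible;
    for (auto& s : allSprites)
    {
        //only keep sprites which intersect the visible area
        if (viewBounds.intersects(s.getGlobalBounds()))
            visible.push_back(&s);
    }
    return visible;
}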
    Once a list of sprites to be drawn is available, the STL provides sorting algorithms which can be used on the container, coupled with a functor or lambda expression, which will allow you to perform z-depth ordering on the sprites, before drawing each one. In pseudo-code it would look a bit like this:

...

std::vector<sprite*> spriteList = getVisibleSprites();

...

std::sort(spriteList.begin(), spriteList.end(), [](const sprite* s1, const sprite* s2)->bool
    {
        //sprites lower down the screen (greater y) sort later, so draw on top
        return (s1->getPosition().y < s2->getPosition().y);
    });

...

background.draw();
for(const auto s : spriteList)
    s->draw();

Sprites are culled and placed into a container, sorted via a lambda expression which compares each sprite's vertical position (if a sprite is lower on screen it should be drawn over the top of other sprites), and then each sprite is drawn over the background. The performance hit is usually negligible on modern hardware, but may become an issue on mobile devices.

    That pretty much sums up the two techniques, both of which I have employed in the past, although there are plenty of other techniques available via your favourite search engine. Hopefully this answers, to some extent at least, any questions people may have had about z-depth ordering in Tiled maps.

Thursday, 20 March 2014

AVR

It's been a while since I did what I call 'pioneering' AVR development - something new and interesting, something not within the realms of personal day to day work. Thankfully I've picked up a project interfacing an ATMega8 with a WIZnet W5100, which has me all enthusiastic about AVRs again. This post isn't really about that, however; it's more because I've decided to give my collection of AVR code a review, particularly as I'm one of those perverse types who insists on using C++ with Atmel Studio. The thing I've found about C++ is that it's much easier for me (personally) to organise my code into something clean and reusable this way, so I've been modularising the most often used features of the ATMega8 (although it works with other MCUs such as the 328 and 88 with barely a modification) and building up a library I can use to piece together new projects quickly. Because there don't seem to be many resources for C++ on AVRs I've uploaded what I have to Github. There's not a huge amount yet, but hopefully it'll expand over time. Currently there are some useful utilities for accessing the ADC channels, creating timers, and using the UART / serial communications - including the ability to print debug data via RS232. The long term plan is to make it as widely compatible and as easily configurable as I can, but for now it will probably just grow as and when other projects provide the opportunity.

Saturday, 8 March 2014

Space Racers Binaries

That's Right!

Windows
Linux

I finally pulled my finger out and uploaded the binaries for Space Racers. The Linux version worked for me on Mint 15 32 bit (which is what I compiled it on) - but I have a rocky history with Linux, so I have no idea how well it'll work for anyone else. Some people have reported a bug where the game loads to a white screen on Windows, which I've narrowed down to the AMD 13.12 graphics card drivers. Reverting to 13.9 or updating to the beta drivers seems to fix this. I'm not going to release the source for Space Racers, but I have uploaded the base framework code to github for anyone interested. This is basically what I start with whenever I create a new project, and where any new features end up if I think I might want to use them again in the future. There's no documentation particularly, as I've only released it for anyone who is curious. If you want a decent framework to start your own project I'm sure there are plenty of much better alternatives.