
devblog 2 – light and shadows

Offloading work to GPU

The pixel-art look that I’m aiming for doesn’t really call for much post-processing or heavy shader-based effects, so the GPU has thus far been nearly unused. It’d be a shame to waste all that massive parallel processing power, so why not do something with it?

Light and shadow calculations are a good candidate for handing off to the GPU – it’s a truly parallel problem, something that GPUs excel at. Also, the lighting doesn’t affect the world state as a whole. Unless we want it to, of course – one could easily imagine the light level affecting the growth of plants, for example. But that’s a feature for another day.

So, essentially, we can solve this problem on the GPU and forget about it – we don’t need to feed the lighting information back to the CPU (at least not right now).

The terrain is rendered into a 512×512 bitmap every frame – the RGB channels contain the terrain color as expected, with the terrain height encoded into the alpha channel. This bitmap is plugged into Unity’s shader system, so we can do whatever additional GPU processing we desire before drawing it on screen or on a quad.
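On the shader side, unpacking that data is a couple of lines – a minimal sketch, with _TerrainTex as an assumed name for the texture binding:

    // Sketch: reading the packed terrain data in a fragment shader.
    // _TerrainTex is an assumed name for the 512x512 terrain bitmap.
    sampler2D _TerrainTex;

    struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

    fixed4 frag(v2f i) : SV_Target
    {
        fixed4 t = tex2D(_TerrainTex, i.uv);
        fixed3 terrainColor = t.rgb; // the color rendered on the CPU side
        fixed  height       = t.a;   // terrain height, packed into alpha;
                                     // the lighting passes below build on this
        return fixed4(terrainColor, 1);
    }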

Normal mapping

The first task should be implementing an actual shading model for the terrain. In the real world, objects hit by light aren’t either fully lit or in shadow – they fade smoothly between the two states. Right now we can’t simulate any of these subtler lighting details, because there’s no concept of surface normals yet – as far as the rendering engine is concerned, the world is totally flat, with no slopes. And to be honest, it really is flat – it’s displayed on a single quad, after all.


So, we need to fake the surface normals somehow to give the impression of a smoothly varying height field rather than a pile of square Minecraft blocks. The simplest way to do this would be normal mapping – perturbing the surface normal according to a specially prepared bitmap. As the terrain itself uses a standard Unity surface shader, all we need to do is fill the normal output slot of the surface shader with something.

We could prepare a normal map via code and feed it into a shader as a separate bitmap, but that’s CPU time we could be spending elsewhere, so it’d be better to just synthesize what we need on the GPU – after all, we already have the heightmap.

The simplest way is to use a Sobel operator – a kernel-based method usually seen in edge detection algorithms. Leaving the underlying math as an exercise for the reader: we essentially compute the gradient of the heightfield by convolving each pixel with its 3×3 neighbourhood. I’ve used the shader code provided by apple_motion on the Unity forums:
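That snippet isn’t reproduced here verbatim, but the idea boils down to something like the following sketch – _TerrainTex holds the heightmap as before, while _TexelSize and _BumpStrength are parameter names of my own choosing:

    // Sketch of a Sobel-based normal reconstruction (a paraphrase of the
    // idea, not the verbatim forum snippet).
    sampler2D _TerrainTex;
    float _TexelSize;    // 1.0 / 512.0 for the 512x512 heightmap
    float _BumpStrength; // assumed: how pronounced the slopes appear

    float3 SobelNormal(float2 uv)
    {
        float e = _TexelSize;
        // heights of the 3x3 neighbourhood, from the alpha channel
        float tl = tex2D(_TerrainTex, uv + float2(-e,  e)).a;
        float t  = tex2D(_TerrainTex, uv + float2( 0,  e)).a;
        float tr = tex2D(_TerrainTex, uv + float2( e,  e)).a;
        float l  = tex2D(_TerrainTex, uv + float2(-e,  0)).a;
        float r  = tex2D(_TerrainTex, uv + float2( e,  0)).a;
        float bl = tex2D(_TerrainTex, uv + float2(-e, -e)).a;
        float b  = tex2D(_TerrainTex, uv + float2( 0, -e)).a;
        float br = tex2D(_TerrainTex, uv + float2( e, -e)).a;

        // horizontal and vertical Sobel kernels give the height gradient
        float dx = (tr + 2*r + br) - (tl + 2*l + bl);
        float dy = (tl + 2*t + tr) - (bl + 2*b + br);

        // turn the gradient into a tangent-space normal;
        // a smaller z component makes the bumps more pronounced
        return normalize(float3(-dx, -dy, 1.0 / _BumpStrength));
    }

In the surface shader, the result then just fills the normal slot, along the lines of o.Normal = SobelNormal(IN.uv_MainTex);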

This results in a normal map that emphasizes the edges of the terrain’s “terraces”, depending on the angle of the light.

I’ve increased the strength of the map on the left for visibility – the actual normal map is a lot more subtle and generally more “blue”.

While the above code adds some much-needed definition, it’s not enough. For one, it only affects the terrain itself, not the sprites drawn on top of it. And we can’t use this method to cast true shadows either…


Raytracing Shadows

So, how about drawing another partially transparent quad on top of the terrain and sprites, containing only the light information? This would affect the sprites and terrain below with no additional work required – alpha blending will handle it for us.
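Render-state-wise this is nothing exotic – in ShaderLab terms, the overlay quad just needs standard alpha blending, something like:

    // Sketch: render state for the light overlay quad. Draw it after the
    // opaque terrain and sprites, and let alpha blending darken what's below.
    Tags { "Queue" = "Transparent" }
    Blend SrcAlpha OneMinusSrcAlpha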

I created another quad with a new custom fragment shader and passed the same terrain bitmap to it. I don’t want to simulate several lights or any point sources, only the sun. This simplifies the work considerably, as the sun can be treated as a directional light source with no defined position – it can be described with just two angles (XY, Z) and its intensity.
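In shader terms, the whole sun fits into three uniforms. A quick sketch – the uniform names here are my own, not the ones from the actual build:

    float _SunAngleXY;   // the sun's heading in the ground plane, radians
    float _SunAngleZ;    // the sun's elevation above the horizon, radians
    float _SunIntensity; // how strongly unoccluded pixels are lit

    // derived inside the fragment shader:
    float2 sunDir   = float2(cos(_SunAngleXY), sin(_SunAngleXY)); // XY march direction
    float  sunSlope = tan(_SunAngleZ); // height gained per unit distance towards the sun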

As the resolution of the bitmap isn’t terribly large (512×512), it’s possible to just brute-force the shadow generation with a simple raytracing algorithm (sketched in code after the list):

  • For every pixel A, we start traversing the heightmap in the sun’s XY direction, sampling all the pixels underneath (blue line)
  • If we encounter a pixel that has a higher height than A, we might’ve hit an occluder B. (green line)
    • Cast a ray from A towards the sun, using the sun’s Z angle. (thin orange line)
    • Sample the ray’s height at B
    • If B’s height is greater than the ray’s height at that point, A is occluded
  • If no occluder is found after n steps towards the light, A is considered unoccluded.
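Here’s a minimal fragment-shader sketch of that loop – my own reconstruction rather than the exact shader from the build, reusing _TerrainTex, _TexelSize and the sun uniforms from above, with STEPS standing in for n:

    #define STEPS 64 // n: how far we march before giving up

    float ShadowTerm(float2 uv)
    {
        float2 sunDir   = float2(cos(_SunAngleXY), sin(_SunAngleXY));
        float  sunSlope = tan(_SunAngleZ);
        float  startH   = tex2D(_TerrainTex, uv).a; // height of pixel A

        for (int i = 1; i <= STEPS; i++)
        {
            float dist = i * _TexelSize;
            // sample the heightmap along the sun's XY direction
            float h = tex2D(_TerrainTex, uv + sunDir * dist).a;
            // a pixel higher than A is a potential occluder B...
            if (h > startH)
            {
                // ...so compare it against the ray cast from A towards the sun
                // (heights and UV distances are both treated as 0..1 here; in
                // practice the slope needs scaling to the world's height units)
                if (h > startH + dist * sunSlope)
                    return 0.0; // A is occluded
            }
        }
        return 1.0; // no occluder found within STEPS samples
    }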

I myself always prefer to see actual code too, so I’ve also created this interactive visualization of the shader.

It’s also possible to soften the shadow – we can reduce the shadow’s contribution to a pixel based on the distance traversed before hitting an occluder. This makes the shadow fade out with distance, but what about the sharp edges on the sides?

I decided to take several samples, jittering the light’s XY angle slightly on each iteration and averaging the contributions together afterwards – not cheap, but a reasonable approximation of an area light.
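Sketching both ideas on top of the loop above – _FadeScale and _Jitter are again assumed parameters, and SAMPLES matches the shadow sample counts in the screenshots below:

    #define SAMPLES 3

    float _FadeScale; // assumed: how quickly the shadow fades with occluder distance
    float _Jitter;    // assumed: angular spread between samples, radians

    // same march as before, but with an explicit angle, returning a partial
    // shadow term based on how far we got before hitting an occluder
    float ShadowTermForAngle(float2 uv, float angle)
    {
        float2 dir    = float2(cos(angle), sin(angle));
        float  slope  = tan(_SunAngleZ);
        float  startH = tex2D(_TerrainTex, uv).a;

        for (int i = 1; i <= STEPS; i++)
        {
            float dist = i * _TexelSize;
            float h = tex2D(_TerrainTex, uv + dir * dist).a;
            if (h > startH + dist * slope)
                return saturate(dist * _FadeScale); // farther hit = fainter shadow
        }
        return 1.0;
    }

    float SoftShadowTerm(float2 uv)
    {
        float sum = 0.0;
        for (int s = 0; s < SAMPLES; s++)
        {
            // jitter the sun's XY angle slightly for each sample
            float a = _SunAngleXY + (s - (SAMPLES - 1) * 0.5) * _Jitter;
            sum += ShadowTermForAngle(uv, a);
        }
        return sum / SAMPLES; // the average approximates an area light
    }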

So, after applying all this good stuff on the semitransparent quad, I got something like this:

1 shadow sample / 3 shadow samples

While this already is a noticeable improvement, we can do better by adding an ambient occlusion term, using the same technique.

Ambient Occlusion

So, let’s try to fake some ambient occlusion, using pretty much the same algorithm we used for the direct light (again, a code sketch follows the list):

  • For every pixel A
    • Do n iterations:
      • The current angle D to test towards is (360/n) * i, for iteration i
      • Start traversing the heightmap in the direction of D for m steps, sampling all the pixels underneath
      • Every sampled pixel that is higher than A counts as occluded:
        • Increment the pixel’s occlusion factor O
    • Divide O by (n*m) to get the final occlusion factor for A
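And a matching sketch in shader form – again my own reconstruction, with AO_DIRS and AO_STEPS standing in for n and m:

    #define AO_DIRS  8 // n: directions around the circle
    #define AO_STEPS 4 // m: samples taken along each direction

    float AmbientOcclusion(float2 uv)
    {
        float startH    = tex2D(_TerrainTex, uv).a;
        float occlusion = 0.0;

        for (int i = 0; i < AO_DIRS; i++)
        {
            // directions spread evenly around the circle: (360 / n) * i
            float a = (6.2831853 / AO_DIRS) * i;
            float2 dir = float2(cos(a), sin(a));

            for (int j = 1; j <= AO_STEPS; j++)
            {
                // every sampled pixel higher than A counts towards O
                if (tex2D(_TerrainTex, uv + dir * (j * _TexelSize)).a > startH)
                    occlusion += 1.0;
            }
        }
        // O / (n * m): the fraction of samples that were occluded
        return occlusion / (AO_DIRS * AO_STEPS);
    }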

What I’m doing here is looking for occluders in several directions around a circle and summing up their contributions. Now, when we blend the direct lighting with the ambient term, we end up with this:

ambient occlusion term toggled off / on

A subtle enough effect, but it helps to bring out various nooks and crannies in the terrain and gives a more cohesive look in my opinion.

Results & Demo

Now it’s time to see the results of all this hard work and check out the terrain with sprite rendering enabled, after several thousand iterations of slow erosion and plant growth:


Looks good to me! If you’d like to see this in action, get the public build here:


Download “BioGrid 0.0.2 Windows” (biogrid_0.0.2.zip, 13 MB)


New features since last version:

  • Light and shadows on the terrain
  • Creatures are now slowly eroding the terrain they’re walking on
  • Performance optimizations at higher zoom levels
  • UI controls for toggling rendering and simulation speed
  • Less catastrophic bugs

Controls:

  • Mouse Scroll – Zoom
  • WASD/Arrows – Pan

Behaviour:

Herbivores

  • constantly lose energy
  • move around randomly
  • eat plants in their cell
  • gain the energy of the eaten plant
  • reproduce after gaining enough energy
  • die when touching water
  • die when out of energy
  • slowly erode the terrain they’re walking on

Autotrophs

  • constantly gain energy
  • spread to an empty neighbouring cell after gaining enough energy
  • don’t spread on sand or at high altitudes

What’s happening:

It’s currently a rather simplistic herbivore/autotroph simulation, but the secondary effects of the simulation’s scale are already (kind of) visible. Certain plants or animals can dominate an area and, rarely, populations on small islands are wiped out entirely. Low-lying areas bordered by beaches and water tend to act as nature preserves – most animals in these regions end up wandering into the ocean before doing any real damage.

There are 4 visual varieties of both herbivores and plants, and while they’ll pass their looks on to their offspring, they are functionally identical.
