Pointillism, depicted

To conclude my series of articles describing the tricks I learned and used for the JS1K Spring-2013 challenge, I will now explain how « Pointillism » (js1k / local) was done, since it received such nice reviews.

I will thus cover the lightweight procedural method used to generate the landscape, and the pointillist-like rendering.

I will assume you've already read my previous articles (Mesh Breeding and especially Painter's Algorithm), since they cover the basic notions of 3D generation and rendering.

$ \newcommand{\hvectwo}[2] { \begin{pmatrix} #1 \!\!\! & , & \!\!\! #2 \end{pmatrix} } \newcommand{\hvecthree}[3] { \begin{pmatrix} #1 \!\!\! & , & \!\!\! #2 & \!\!\! , & \!\!\! #3 \end{pmatrix} } \newcommand{\hvecfour}[4] { \begin{pmatrix} #1 \!\!\! & , & \!\!\! #2 & \!\!\! , & \!\!\! #3 & \!\!\! , & \!\!\! #4 \end{pmatrix} } \newcommand{\arraytwo}[2] { \begin{bmatrix} #1 \!\!\! & , & \!\!\! #2 \end{bmatrix} } $

Land-form

The creation of fractal landscapes[1] is a famous and well-covered subject in Computer Graphics. Using stochastic algorithms that favour either realism or performance, the aim is to generate a landform (mountains, lakes, plateaus, ...) and then to give life to it, by painting it following Nature's logic (light-green meadows in flat sunny areas, white snow clinging to peaks, ...) or by populating it with other 3D objects (trees, buildings, ... which can also be procedurally generated, by the way).

A simple landform (a continuous surface without caves or overhangs) can be represented by a 2D heightmap, a matrix containing the elevation of sampled points. Given such a matrix, it is possible to convert it into a 3D landscape by using its indices as the $x$ and $y$ coordinates of our vertices, and the elevation as $z$.

A method to generate such a heightmap must respect some criteria. It must be stochastic, i.e. « pseudo-random » (random enough to allow a wide range of landforms, but predictable enough to give some control over it), exhibit fractal behavior (covering both the global molding and the detailed surface), and be coherent, i.e. without any discontinuity (unless we want special topographic features like cliffs).

Some noise functions meet those conditions. The Perlin Noise[2] is especially famous and widely used, being quite easy to implement while giving smooth results.

Alas, « easy to implement » doesn't always mean « short ». Even though it can be condensed into a few lines, it was still too long for a 1K demo. I thus opted for another well-known method: the Midpoint-Displacement algorithm. Its main idea is, given a 2D grid containing various values, to insert intermediate ones between each pair of existing points to introduce details.

The new values are computed by taking the mean of the surrounding points and adding a small error to generate new features. The amplitude of the error must be inversely proportional to the iteration, i.e. to the distance between the points, to obtain the fractal-like result (the large errors added in the first iterations give the landscape its global shape, while the following iterations add smaller and smaller details to the surface).

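Before the 2D implementation below, here is a minimal one-dimensional sketch of the principle (an illustration only, not the demo's code): each pass inserts a midpoint between every pair of neighbours and perturbs it with a shrinking random error.

// Minimal 1D sketch of midpoint displacement (illustration, not the demo code).
// 'line' is an array of heights; each call doubles the level of detail.
function midpointDisplace1D(line, amplitude) {
	var result = [];
	for (var i = 0; i < line.length - 1; i++) {
		result.push(line[i]);
		// New midpoint: mean of the two neighbours, plus a random error.
		result.push((line[i] + line[i + 1]) / 2 + amplitude * (.5 - Math.random()));
	}
	result.push(line[line.length - 1]);
	return result;
}

// Usage: start from two random endpoints and refine, halving the error each pass.
var line = [Math.random(), Math.random()];
for (var amp = .5, pass = 0; pass < 8; pass++, amp /= 2) {
	line = midpointDisplace1D(line, amp);
}
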
Here is a simple implementation of this algorithm for a 2D map, using modulos to evaluate the position of each point in the current square:

///
// Generate new intermediate points (and thus details) to the given heightmap, by using stochastic
// interpolation (Midpoint-Displacement algorithm).
// Parameters:
//	- currentMap : Array of points (square matrix)
//	- amplitude : Alterations amplitude
// Returns: New map, more detailed
///
function ComputeMidPointDisplacement(currentMap, amplitude) {
	var dim = Math.sqrt(currentMap.length)+.5|0;
	// We will add 1 new element between each pair of them, so the size will increase by dim-1:
	var newDim = 2*dim - 1;
	var newMap = [];
	
	for (var i = newDim; i--;) for (var j = newDim; j--;) {
		var iMap = i*newDim + j,
			// Index of the top-left corner of the square in the smaller matrix:
			iSMap = dim*(i>>1) + j/2|0; 
		// JS trick: Math.floor(X) = 0|X if X is positive, and X>>Y = 0|(X/2^Y)
		newMap[iMap] =
			i%2?
				j%2?// Element on an odd row and odd col: it corresponds to a square center,
					// so we populate it as shown in the following ascii schema:
					// 1  0  1		with X representing the current element, and the numbers the
					// 0  X  0		weight used to compute the mean value. So here, we use the value
					// 1  0  1		of the corners of the parent square to get the value of the new
					//				point, and we add a small error to it.
					((currentMap[iSMap] + currentMap[iSMap+1] + currentMap[iSMap+dim]
						+ currentMap[iSMap+dim+1]) / 4 + amplitude * (.5 - Math.random()))
					
				:	// Element on an odd row and even col: we give it the avg of the elements on
					// the prev. and next rows:
					// 1  0  0		Example
					// X  0  0
					// 1  0  0
					(currentMap[iSMap] + currentMap[iSMap+dim]) / 2
			:
				j%2?// Element on an even row and odd col: we give it the avg of the elements on
					// the prev. and next cols:
					// 1  X  1		Example
					// 0  0  0
					// 0  0  0
					(currentMap[iSMap] + currentMap[iSMap+1]) / 2
				:	// Element on an even row and even col: it's one of the square corners,
					// so we just give it the orig. val.:
					currentMap[iSMap];
		
		// Just to keep the values into control (boundaries):
		if (newMap[iMap] > 1) { newMap[iMap] = 1; }
		if (newMap[iMap] < 0) { newMap[iMap] = 0; }
	}
	
	return newMap;
}

// -------------------------------
// HOW TO USE IT - (Dirty) Example

function GenerateNewStartingMap() { // Generate a new random 2x2 map.
	for (var newMap = [], it = 4; it-- ; newMap[it] = Math.random()); return newMap;
}

var nbMaxIterations = 8,	// Max number of procedural iterations
	h = 16;					// Amplitude of the error
for (var i = nbMaxIterations, heightMap = GenerateNewStartingMap();
	i--;
	heightMap = ComputeMidPointDisplacement(heightMap, h /= 2));

One problem with this algorithm is the square-shaped artifacts it generates. A variant, the Diamond-Square algorithm[3][4], was developed to prevent them, but it requires another step per square, lengthening the function. To reduce those artifacts without exceeding the size limit, I made a mix of both methods, combining points from the previous map with the already-generated ones from the new map to evaluate the value of the midpoints:

		...
		newMap[iMap] =
			(i%2?
				j%2?// Element on an odd row and odd col: it corresponds to a square center,
					// so we populate it as shown in the following ascii schema:
					// 3  0  0		with X representing the current element, and the numbers the
					// 0  X  1		weight used to compute the mean value.
					// 0  1  1		
					(newMap[iMap+newDim] + newMap[iMap+1] + newMap[iMap+1+newDim]
						+ 3*currentMap[iSMap]) / 6
					
				:	// Element on an odd row and even col: we give it the avg of the elements on
					// the prev. and next rows:
					// 1  0  0		Example
					// X  0  0
					// 1  0  0
					(currentMap[iSMap] + currentMap[iSMap+dim]) / 2
			:
				j%2?// Element on an even row and odd col: we give it the avg of the elements on
					// the prev. and next cols:
					// 1  X  1		Example
					// 0  0  0
					// 0  0  0
					(currentMap[iSMap] + currentMap[iSMap+1]) / 2
				:	// Element on an even row and even col: it's one of the square corners,
					// so we just give it the orig. val.:
					currentMap[iSMap];
			) + amplitude * (.5 - Math.random());
			// By adding errors to every point (even the previously fixed ones), we lose some
			// predictability but slightly reduce the visibility of the remaining artifacts.
		...

Midpoint Displacement applied to the Y-coordinate (with tweening)

... Now using the custom displacement

Midpoint Displacement applied to the height coordinate

... One more time, using the custom displacement

Coloring

Given the heightmap, it's quite easy to assign a color to every point depending on its elevation: if the value equals the minimum boundary, paint it light blue to represent water; if it's above a given limit, make it white as snow; if it's somewhere in between, use a gradient from vegetal green to rocky brown as a function of the height; ...

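As a rough illustration of that kind of mapping (not the demo's code; the thresholds and hues below are made-up values):

// Illustrative elevation-to-color mapping; the thresholds and hues are arbitrary examples.
var WATER_LIMIT = .2, SNOW_LIMIT = .8;

function colorFromHeight(height) {
	if (height <= WATER_LIMIT) return 'hsl(200, 70%, 60%)';	// light blue water
	if (height >= SNOW_LIMIT) return 'hsl(0, 0%, 95%)';		// white snow
	// In between: the hue slides from green (~120) down to brown (~30) with elevation.
	var t = (height - WATER_LIMIT) / (SNOW_LIMIT - WATER_LIMIT);
	return 'hsl(' + Math.round(120 - 90 * t) + ', 50%, 40%)';
}
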
Colored Heightmap



But getting a realistic-enough result requires a lot of tweaking, and also more inputs. One of the keys to adding depth and realism to a landscape is to take lighting into account. Computing shadows is out of the question given our 1K limit, but simulating penumbra is easy using the incline.

The traditional method to evaluate how much sunlight a surface gets is to compute the dot-product of the surface's normal (or local gradient) and the unit vector pointing from the center of the surface towards the Sun. We thus get the cosine of the angle between these two vectors, a value which can be considered proportional to the luminance of the surface. By multiplying it by the reflectivity of the surface (high for snow, low for vegetation, ...), you get your color's lightness (remember: « Learn to love HSL color system »).

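For reference, a classical implementation could look like the following sketch (not the demo's code; the helpers and the sun position are illustrative):

// Sketch only: classical Lambert (diffuse) lighting.
function normalize(v) {
	var len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
	return [v[0] / len, v[1] / len, v[2] / len];
}

// Cosine of the angle between the surface normal and the direction to the sun,
// clamped to 0 for surfaces facing away from the light.
function diffuse(normal, center, sun) {
	var toSun = normalize([sun[0] - center[0], sun[1] - center[1], sun[2] - center[2]]);
	var n = normalize(normal);
	return Math.max(0, n[0] * toSun[0] + n[1] * toSun[1] + n[2] * toSun[2]);
}

// lightness ≈ diffuse(normal, center, sun) * reflectivity (high for snow, low for vegetation)
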
Alas, here again, evaluating the local gradient and using a real dot-product was too long, so I approximated both operations by the following one:

$$pseudoGrad\arraytwo{i}{j}=(2 \cdot heightMap\arraytwo{i+1}{j+1}-heightMap\arraytwo{i+1}{j}-heightMap\arraytwo{i}{j+1}) \cdot 26$$

The shifting and incompleteness in the evaluation of this gradient and its projection simulate the dot-product with a sun positioned somewhere north-east of the map. Hacky, static and far from perfect, but it does the trick...

This value can be used for much more than simply evaluating the lightness. Lighting also directly impacts the elements of the landscape. For instance, vegetation isn't the same in sunny places as in dark coombes, and the quantity of snow isn't only a function of the elevation but also of the surface's exposure. So by using our pseudo-gradient value as an input for the computation of the other color components (hue and saturation), we can give our landscape many more nuances, imitating some natural phenomena.

For each point $(i,j, height)$ to display, I thus evaluate its color with the following highly-personal-and-not-so-reusable code (just so you can see the structure):

var pseudoGrad = 2+(2*heightMap[i*dim+j+dim+1]-heightMap[i*dim+j+dim]-heightMap[i*dim+j+1])*26,
	pseudoHeight = height-pseudoGrad;
ctx.fillStyle = ctx.strokeStyle = 'hsl('+[ // Using coercion for the ","
	25*(	// Hue
		(WATER_LVL<height)? 25/height					// Green to brown
		: 7 											// Blue
	), // can be approximated by: hue = 13.7*height*height-244*height+1143
	25*(	// Saturation	
		(WATER_LVL<height)? pseudoGrad*14/height/height	// Ground saturation: medium, depending on
														// the incline/height, giving us different
														// kinds of veget.
		: pseudoHeight									// Water saturation: high
	)+'%',
	(25*(	// Lightness
		(SNOW_LVL_DEC<pseudoHeight)? pseudoGrad			// Bright snow with random noise
		: (BEACH_LVL<height)? pseudoGrad/height*2		// Vegetation
		: (WATER_LVL<height)? pseudoGrad				// Bright sand
		: 1.8+Math.random()*.2							// Water with random waves
	// Some not-very-random-but-good-enough shadows/lights (casted by clouds for instance)
	))-6*Math.cos(i/47*Math.cos(height/3+j/57)+pseudoHeight)
]+'%)';

Colored & lighted Heightmap



« It still looks ugly » ... Right, but we are adding relief effects to a flat surface; how could it look anything but disturbing? Let's put it into perspective.

Give it relief

The whole projection and rendering process has been covered in the previous articles. We just have to take each point $(i, j, height)$ of the heightmap and evaluate the position of the corresponding vertex $\hvecthree{x}{y}{z}$ with:

$$ x = i \cdot \frac{landscape.width}{heightMap.width} \\ y = j \cdot \frac{landscape.height}{heightMap.height} \\ z = height \cdot k \text{ ( k constant)} $$

... then project it into screen coordinates.

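In code, that mapping could look like the sketch below (LAND_W, LAND_H and K are arbitrary scale factors chosen for the example; heightMap and dim come from the generation step above):

// Sketch: turn heightmap cell (i, j) into a 3D vertex.
var LAND_W = 400, LAND_H = 400, K = 80;	// arbitrary landscape dimensions and height scale

function vertexAt(i, j) {
	return [
		i * LAND_W / dim,			// x
		j * LAND_H / dim,			// y
		heightMap[i * dim + j] * K	// z: elevation scaled by a constant
	];
}
// The resulting vertex is then projected into screen coordinates, as in the previous articles.
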
To check whether each point should be displayed or not, we could again use the Painter's algorithm and its properties. Check the demo « Loom » to see how it looks.

The problem with this method is that it requires sorting an array containing all the elements (simple vertices or surfaces) waiting to be drawn. That is quite a heavy operation, given the level of detail of our landscape. This is why I have to reduce the detail when moving around in « Loom », or else the browser wouldn't be able to maintain a correct framerate (and would probably crash).

Another way to see the world

While polishing some aspects of the demo, I came across an article[5] by @romancortes about his great entry for JS1K Love '12. There, he describes in detail the rendering method he employed, combining the use of a z-buffer and Monte-Carlo surface sampling. I thus decided to try applying it to my landscapes and to create a new demo.

The idea behind this method is the following (a small code sketch is given after the list):

  • We define our whole scene by a function $f:\hvectwo{a}{b} \mapsto \hvecfour{x}{y}{z}{color}$, i.e. a surjective function $f$ which maps every $\hvectwo{a}{b}$ $\in$ $\arraytwo{0.0}{1.0} \times \arraytwo{0.0}{1.0}$ to a vertex $\hvecfour{x}{y}{z}{color}$ of the scene (our landform's surface in our case). This implies finding a way to express each element as an explicit surface with its own unique domain.
  • We keep two buffers of the same length $canvas.width \times canvas.height$: one to display the $color$ values of the projected elements (the canvas), and one kept hidden, for their $depth$ values (the z-buffer).
  • Each frame, we randomly sample a high number of points $\hvectwo{a}{b}$, evaluate the corresponding vertices using $f$, and project them into $\hvecfour{p_x}{p_y}{depth}{color}$. If the pixel $\hvectwo{p_x}{p_y}$ has already been drawn, we check the z-buffer:
    • If $zBuffer\arraytwo{p_x}{p_y} < depth$, the element is actually behind the one already drawn, so we do nothing.
    • Else, the element should be drawn on top, so we update both $canvas\arraytwo{p_x}{p_y}$ with $color$ and $zBuffer\arraytwo{p_x}{p_y}$ with $depth$.
  • As long as the camera and scene are static, we can keep drawing more random points each frame, until we reach a satisfactory density. If movement is required, then both the canvas and the z-buffer must be cleared (unless you keep the 3D information of each point and reproject it according to the new transform).

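Put together, one frame of this sampling loop could look like the following sketch (assuming an f as above, a project function returning screen coordinates plus depth, and a 2D canvas context ctx; an illustration, not the demo's exact code):

// Sketch of one frame of Monte-Carlo surface sampling with a z-buffer.
var zBuffer = new Float32Array(canvas.width * canvas.height).fill(Infinity);

function sampleFrame(samples) {
	for (var n = 0; n < samples; n++) {
		var v = f(Math.random(), Math.random());	// [x, y, z, color]
		var p = project(v);							// [px, py, depth] (assumed helper)
		var px = p[0] | 0, py = p[1] | 0;
		if (px < 0 || py < 0 || px >= canvas.width || py >= canvas.height) continue;
		var idx = py * canvas.width + px;
		if (p[2] < zBuffer[idx]) {					// new point is closer than the stored one
			zBuffer[idx] = p[2];
			ctx.fillStyle = v[3];
			ctx.fillRect(px, py, 1, 1);
		}
	}
}
// Call sampleFrame() every frame while the camera is static; clear both the canvas and
// the z-buffer (back to Infinity) whenever the camera or the scene moves.
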
However, we need some adaptations to use this method on our landscape: we don't have a continuous function from which to pick vertices, only a 2D discrete heightmap. We need a way to evaluate the height of points all over the surface, and not only at the nodes.

A bit hollow, if we draw only the nodes...



Triangulation + barycentric interpolation do the trick nicely. Here is how it works (a code sketch follows the list):

  1. Given a point $P = \hvectwo{x}{y} \in \mathbb{R}^2$, find in which square $ABCD$ (unit square in the heightmap) it is, with $A = \hvectwo{X}{Y}$, $B = \hvectwo{X+1}{Y}$, $C = \hvectwo{X}{Y+1}$ and $D = \hvectwo{X+1}{Y+1}$, $X$ and $Y$ being the truncated values of $x$ and $y$, i.e. $\hvectwo{X}{Y} \in \mathbb{N}^2$.
  2. Estimate in which triangle - $ABD$ or $ACD$ - $P$ lies, using the condition: $d_x > d_y \to P \in ABD$, with $d_x$, $d_y$ the decimal parts of $x$ and $y$.
  3. Evaluate the height of the point using linear interpolation with the implicit barycentric coordinates as weights[6]:
    $$ P \in ABD \Rightarrow height = h(B) + [h(A) - h(B)] \cdot (1-d_x) + [h(D) - h(B)] \cdot d_y \\ P \in ACD \Rightarrow height = h(C) + [h(A) - h(C)] \cdot (1-d_y) + [h(D) - h(C)] \cdot d_x \\ (h(K=\hvectwo{X}{Y}) = heightMap\arraytwo{X}{Y}) $$

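A direct translation of those three steps could look like this sketch (assuming a row-major heightMap of side dim, as in the generation code above, and points taken strictly inside the grid):

// Sketch: height anywhere on the surface, by triangulating each unit square
// and interpolating linearly inside the containing triangle.
function heightAt(x, y) {
	var X = x | 0, Y = y | 0,		// corner A of the unit square
		dx = x - X, dy = y - Y;		// decimal parts of x and y
	var hA = heightMap[X * dim + Y],
		hB = heightMap[(X + 1) * dim + Y],
		hC = heightMap[X * dim + Y + 1],
		hD = heightMap[(X + 1) * dim + Y + 1];
	return dx > dy
		? hB + (hA - hB) * (1 - dx) + (hD - hB) * dy	// P in triangle ABD
		: hC + (hA - hC) * (1 - dy) + (hD - hC) * dx;	// P in triangle ACD
}
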
Barycentric Interpolation of colors



We are thus able to describe our surface explicitly. We can now wrap this whole process into a function...

$$g: \hvectwo{x}{y} \in \mathbb{R}^2 \mapsto \hvecfour{x}{y}{height}{color}$$

...($color$ being computed by a function similar to the one described previously) and plug it into the Monte-Carlo sampling by defining $f$ as...

$$f\hvectwo{a}{b} = g\hvectwo{a \cdot heightMap.width}{b \cdot heightMap.height}$$

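In code, this plumbing is a one-liner (a sketch; g would be built on a helper like heightAt above, and the "- 1" is a small guard, absent from the formula, that keeps samples inside the grid):

// Sketch: map the unit square onto the heightmap domain.
function f(a, b) {
	return g(a * (dim - 1), b * (dim - 1));
}
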
Nothing left to do but put everything together, and that's it:

About 850B once crushed



If you want to know more about how not to use the z-buffer, or why the hilly parts of the landscape take longer to be correctly rendered, you should read the Stack Overflow thread I opened when I was trying to fix some glitches.

Conclusion

With this article, I've finished covering the mathematical angle in my JS1K '13 demos. I was thinking about briefly describing the implementation process and the useful Javascript tricks, but @BlurSpline already did a great presentation[7] you should watch if you're interested in JS1K.

So to conclude, I highly recommend participating in such contests to any enthusiastic programmer - at least once. They are an immersive playground to improve your skills. They restrict your resources (time, size, ...), which boosts your motivation and efficiency. And finally, they offer a great opportunity to meet highly talented people sharing the same masochistic relish as you, and to learn from each other. A great experience.

IMHO best compression scheme: imagination+time+math skills.

― Philippe Deschaseaux (tweet), winner of JS1K Spring '13

References

  1. ^ Wikipedia - Fractal Landscape - en.wikipedia.org/wiki/Fractal_landscape
  2. ^ New York University - Profile - Ken Perlin - mrl.nyu.edu/~perlin
  3. ^ Wikipedia - Diamond-Square algorithm - en.wikipedia.org/wiki/Diamond-square_algorithm
  4. ^ Paul Martz - Article - « Generating Random Fractal Terrain » - gameprogrammer.com/fractal.html
  5. ^ Román Cortés - Article - « 1k Rose » - romancortes.com/blog/1k-rose
  6. ^ Dianne Hansford - PDF - « Barycentric Coordinates » - farinhansford.com/dianne/teaching/...
  7. ^ Joshua Koo - Online Presentation - « The js1k Experience » - slid.es/zz85/the-js1k-experience