MAG64 write-up

Posted by Nopykon on Nov. 13, 2015, 5:31 p.m.

Remember that flight game I tried to make for MAG? If not, scroll down to my previous blog entry.

I wanted to turn it into an actual game before writing more about it, but since I haven't spent a single day of October or November working on it, I'll kill the project now with this techno-babble.

The last thing I did, at the time of the MAG deadline:

I turned it into a Wipeout-like jet fighter racer. It only looks like the game is working in the gif; in reality, nothing is stopping you from flying through the walls. I have the functions to perform ray-triangle collision, but I'm only putting them to use for bullets, and only for one segment of the track. But enough about the game.

No depth buffer

My JS-renderer does not use a depth buffer. Surfaces are instead rendered back to front (painter's algorithm), and many of the pixels are overwritten over and over. To achieve this, the triangles are sorted by their distance to the camera. I will not go into detail about this. I use a bucket list; a max-heap might work well, or just quicksort or whatever. I guess it depends on what suits your data.
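A bucket-list sort for back-to-front order could look something like this. This is a minimal sketch, not the game's actual code; the `dist` field, the bucket count, and the distance range are all assumptions:

```javascript
// Sketch of a painter's-algorithm bucket sort (names are hypothetical).
// Each triangle is dropped into a bucket by its camera distance; walking
// the buckets far-to-near yields back-to-front order without a full sort.
var NUM_BUCKETS = 256;   // assumed resolution of the sort
var MAX_DIST = 100.0;    // assumed far limit

function bucket_sort_triangles(triangles) {
  var buckets = [];
  for (var i = 0; i < NUM_BUCKETS; i++) buckets.push([]);
  for (var i = 0; i < triangles.length; i++) {
    var t = triangles[i];
    // clamp distance into [0, MAX_DIST), then map it to a bucket index
    var d = Math.min(Math.max(t.dist, 0), MAX_DIST - 1e-6);
    var b = (d / MAX_DIST * NUM_BUCKETS) | 0;
    buckets[b].push(t);
  }
  // concatenate far to near
  var sorted = [];
  for (var b = NUM_BUCKETS - 1; b >= 0; b--) {
    for (var j = 0; j < buckets[b].length; j++) sorted.push(buckets[b][j]);
  }
  return sorted;
}
```

Triangles in the same bucket keep their relative order, which is usually fine at this bucket resolution.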


So you have a bunch of 3D triangles you want to render from a certain view, at a certain scale and rotation. These things are achieved with matrices, as you know. It's just like you've seen in a hello-world GLSL vertex program like this one…

out = projection*view*model*in_vertex;

…except I use JS, and I bake all matrices into one before processing the vertices.

//combine matrices
var matrix = matrix4_mul(projection, matrix4_mul(view, model));

//transform vertices
for (var i = 0; i < vertices.length; i++)
	tvertices[i] = matrix4_mul_vec4(matrix, vertices[i]);

//JS pseudocode; in real practice there are more things to keep in mind.
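For illustration, here is a guess at what a `matrix4_mul_vec4` could look like. The post's real function isn't shown, so the flat 16-element row-major layout is an assumption:

```javascript
// Hypothetical matrix-vector multiply: m is a flat, row-major 4x4 matrix
// (16 numbers), v is a [x, y, z, w] vector. Returns the transformed vector.
function matrix4_mul_vec4(m, v) {
  return [
    m[0]*v[0]  + m[1]*v[1]  + m[2]*v[2]  + m[3]*v[3],
    m[4]*v[0]  + m[5]*v[1]  + m[6]*v[2]  + m[7]*v[3],
    m[8]*v[0]  + m[9]*v[1]  + m[10]*v[2] + m[11]*v[3],
    m[12]*v[0] + m[13]*v[1] + m[14]*v[2] + m[15]*v[3]
  ];
}
```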


Clipping

I don't. I should. Clipping is the step that removes everything outside the frustum. In my engine, I only skip triangles that are completely behind the near plane, behind the camera, or too far away. If a triangle has one vertex in, my engine pushes the other two in. This causes close triangles to look funny sometimes, as the triangle gets compressed. If I were to skip the entire triangle just because one of its vertices happened to be behind your feet, there would be holes in the ground.

Proper clipping would be the best way, of course. Spoiled OpenGL users think they are oppressed, but they haven't felt the real struggle yet.
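The "push the other two in" trick can be sketched like this: each vertex behind the near plane slides along its edge toward an in-front vertex until it sits exactly on the plane. The names, the camera-looks-down-+z convention, and the near distance are assumptions:

```javascript
// Hypothetical near-plane push, as described in the post: find the point
// on the edge (v_behind -> v_in_front) where z equals the near plane,
// and move the behind-vertex there. Vertices are [x, y, z] arrays.
var Z_NEAR = 0.1; // assumed near-plane distance

function push_to_near_plane(v_behind, v_in_front) {
  // parametric position along the edge where z === Z_NEAR
  var t = (Z_NEAR - v_behind[2]) / (v_in_front[2] - v_behind[2]);
  return [
    v_behind[0] + t * (v_in_front[0] - v_behind[0]),
    v_behind[1] + t * (v_in_front[1] - v_behind[1]),
    Z_NEAR
  ];
}
```

This keeps the triangle on screen (no holes in the ground) at the cost of distorting its shape, which is exactly the "compressed triangle" artifact mentioned above.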


Projection

This step converts the still-3D vertices into 2D screen space. Another thing GL does for you.

In my code, I do something like:

screen_vertex.x=( tvertex.x/tvertex.z*.5+.5 ) * render_width;
screen_vertex.y=( tvertex.y/tvertex.z*.5+.5 ) * render_height;

//screen space is 2D, but I use z for perspective correction later.

First, x and y are divided by z. This causes things that are far away (large z) to appear smaller.

The coordinates are also offset and scaled to fit the screen. In the 3D-world, [0,0] is at the center, and the screen covers -1 to 1. In 2D, we want it from 0 to width and 0 to height.

Now we have a bunch of screen-space triangles that need to be filled in.


Rasterization

Obviously done by OpenGL, and at great speed too. Some say GL is only a rasterization library, but I disagree; OpenGL does more than that.

In no way can I explain this better than Fabian Giesen of Farbrausch fame.


(also check out his previous blog for the theory)


for each triangle
	for each pixel in the min-max rectangle surrounding the triangle
		check if inside the triangle
		put pixel

I don't do it exactly like the code on his blog, but pretty much. In my engine, the entire "fragment shader" resides inside the loop, so it's a monster of a function.
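A minimal version of that loop, using Ryg-style edge functions, might look like this. This is a sketch under assumed conventions (counter-clockwise winding, pixel-center sampling), not the post's actual rasterizer, and it only writes coverage, no shading:

```javascript
// Signed area of triangle (a, b, p); its sign tells which side of a->b p is on.
function edge(ax, ay, bx, by, px, py) {
  return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Fill a triangle into fb (a flat w*h array), writing 1 for covered pixels.
// tri is [[x0,y0], [x1,y1], [x2,y2]] in screen space.
function rasterize(fb, w, h, tri) {
  var x0 = tri[0][0], y0 = tri[0][1];
  var x1 = tri[1][0], y1 = tri[1][1];
  var x2 = tri[2][0], y2 = tri[2][1];
  // min-max rectangle surrounding the triangle, clamped to the framebuffer
  var minx = Math.max(0, Math.floor(Math.min(x0, x1, x2)));
  var maxx = Math.min(w - 1, Math.ceil(Math.max(x0, x1, x2)));
  var miny = Math.max(0, Math.floor(Math.min(y0, y1, y2)));
  var maxy = Math.min(h - 1, Math.ceil(Math.max(y0, y1, y2)));
  for (var y = miny; y <= maxy; y++) {
    for (var x = minx; x <= maxx; x++) {
      var px = x + 0.5, py = y + 0.5; // sample at the pixel center
      var w0 = edge(x1, y1, x2, y2, px, py);
      var w1 = edge(x2, y2, x0, y0, px, py);
      var w2 = edge(x0, y0, x1, y1, px, py);
      // inside test: all three edge functions agree
      if (w0 >= 0 && w1 >= 0 && w2 >= 0) fb[y * w + x] = 1; // put pixel
    }
  }
}
```

In a real renderer the three weights also double as (unnormalized) barycentric coordinates, which is what the perspective-correction step below operates on.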

In addition to what's in the Ryg blog: remember that z I saved in the previous step? My rasterizer function divides each barycentric vertex weight by the z of that vertex. This gives you perspective-correct results, something the PS1 could NOT do. It only had x and y to work with when rasterizing. In order for things not to look like shit, polygons (almost always quads) had to be subdivided to ease the ugliness. This is where those PS1 zig-zag textures come from.
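That correction step can be sketched as follows (the function and its arguments are mine, not the post's): divide the affine barycentric weights by each vertex's z, renormalize, then interpolate the attribute.

```javascript
// Perspective-correct interpolation of one per-vertex attribute (e.g. a UV).
// w0..w2: affine barycentric weights, z0..z2: view-space depths of the
// three vertices, a0..a2: the attribute values at those vertices.
function perspective_correct(w0, w1, w2, z0, z1, z2, a0, a1, a2) {
  var p0 = w0 / z0, p1 = w1 / z1, p2 = w2 / z2; // weight nearer vertices more
  var sum = p0 + p1 + p2;
  p0 /= sum; p1 /= sum; p2 /= sum;              // renormalize to sum to 1
  return p0 * a0 + p1 * a1 + p2 * a2;
}
```

When all three z values are equal, this degrades to plain affine interpolation, which is exactly what the PS1 was stuck with.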

The sky

Oops, almost forgot about this. Basic version.

Each screen pixel is converted to a ray. The ray is tested against spheres (ray-sphere intersection). On hit, we draw a pixel of a planet, the sun and so on. Otherwise, the ray vector determines the color of the sky.

I also project the ray onto a plane for the cloud belt; the cloud intensity is just perlin_noise(hit_point.x, hit_point.y).
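The ray-sphere test the sky uses might look roughly like this, in the standard quadratic-discriminant form (all names here are assumptions, not the game's code):

```javascript
// Ray-sphere intersection. ro: ray origin, rd: normalized ray direction,
// c: sphere center, r: radius (all vectors are [x, y, z] arrays).
// Returns the distance along the ray to the nearest hit, or -1 on a miss.
function ray_sphere(ro, rd, c, r) {
  var ox = ro[0] - c[0], oy = ro[1] - c[1], oz = ro[2] - c[2];
  var b = ox*rd[0] + oy*rd[1] + oz*rd[2];     // dot(oc, rd)
  var cc = ox*ox + oy*oy + oz*oz - r*r;       // dot(oc, oc) - r^2
  var disc = b*b - cc;
  if (disc < 0) return -1;                    // ray misses the sphere
  var t = -b - Math.sqrt(disc);               // nearer of the two roots
  return t >= 0 ? t : -1;                     // hit behind the origin = miss
}
```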

I didn't mention the texture sampling. It's a single function (plus the texture data, of course), similar to the wiki one.
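Such a sampling function could be as simple as a nearest-neighbour lookup with wrap-around. This is a sketch under an assumed texture layout (flat array, one value per texel), not the game's actual code:

```javascript
// Nearest-neighbour texture sample with wrapping UVs.
// tex: { width, height, data } where data is a flat width*height array.
// u, v: texture coordinates; any value wraps into [0, 1).
function sample_texture(tex, u, v) {
  // wrap UVs into [0,1), then map to integer texel coordinates
  var x = Math.floor((u - Math.floor(u)) * tex.width);
  var y = Math.floor((v - Math.floor(v)) * tex.height);
  return tex.data[y * tex.width + x];
}
```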

The game

My real code is pretty unreadable. It was written to be small (js13k), not pretty. If you really, really want to nerd out, here's the game in the state I left it in.

All the code is there, and you can open game/index.html to run it.

Controls for the game:

TAB -> "autopilot" off

R -> switch between 320*240 and 160*240

U -> Ultra Rapid

BACKSPACE-> Change Camera

SHIFT-> throttle

Arrows -> Pitch and Roll

Alt + Left/Right -> Yaw

H -> Glitch Out



Jani_Nykanen 8 years, 7 months ago

How is it possible that no one has commented on this blog? Shame!

I tried this on my not-so-powerful Linux Mint laptop. On Chrome, the FPS was 7, but on Firefox it was 10-30. It did look wonderful, though. This really makes me want to write a software renderer, too. Well… some day.

Nopykon 8 years, 7 months ago

It's funny and tragic how differently the various browsers behave. For a while, back when the engine had no textures, IE was twice as fast as FF. Although that may have been some bad code on my part that FF for some reason didn't like.

I'm still pretty new to JS; there are probably some well-known JS traps/don'ts that I'm stepping right into. Style-wise, the game is coded more like a C program than JS. I'm more comfy not having classes the way you do them in JS; I don't like it, nor do I feel the need for it.

Rasterization is ~95% of the time spent per frame, likely more, and FF does the best job with that part at least. I think it's possible to draw image polygons using the canvas draw functions, hardware-accelerated. That would be so much faster, but also boring. I would have to drop direct access to the framebuffer, and at that point it would feel pointless not to just use GL.

Ludum Dare is in three weeks. I'm a little tempted to touch up the engine and use it there. For me it would be between that and allegro.js, but I'll be in a team, so you never know; we might end up with Unity.