Tile-Based Discrete Wavefront Propagation
Posted on December 02, 2013 at 22:03
Originally posted on my blog. It involves math and code, so a lot of formatting has been lost on the bastardized 64digits version.
I'm currently building a very simplistic first-person shooter in WebGL. An example of this algorithm, with the debug code left in, is available here. The map is represented by a grid - a cell is solid if it's 0, and empty if it's 1. This is trivial to render using a standard recursive 4-direction flood fill algorithm.
Unfortunately, using WebGL without compiling to asm.js from another language reduces my computer's performance to that of a pile of junk from 2003, except that now all the API calls are 10 times as expensive. While my GPU is perfectly capable of bulk-rendering an entire map without breaking a sweat, WebGL starts choking the instant I try to make more than 50 or so draw calls. It's kind of pathetic, really.
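For reference, the recursive 4-direction flood fill mentioned above might look something like this (a minimal sketch - `floodRender` and `renderTile` are my names, not the engine's):

```javascript
// Recursive 4-direction flood fill over a grid render.
// `map` is a 2D array where 0 is solid and 1 is empty; `renderTile`
// is a stand-in for whatever actually draws the cell.
function floodRender(map, x, y, visited, renderTile) {
  if (y < 0 || y >= map.length || x < 0 || x >= map[0].length) return;
  if (map[y][x] === 0 || visited[y][x]) return; // solid or already drawn
  visited[y][x] = true;
  renderTile(x, y);
  floodRender(map, x + 1, y, visited, renderTile); // right
  floodRender(map, x - 1, y, visited, renderTile); // left
  floodRender(map, x, y + 1, visited, renderTile); // down
  floodRender(map, x, y - 1, visited, renderTile); // up
}
```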
Frustum culling is the obvious answer, but I need it to be highly efficient. This means adjusting the frustum to account for corners, so I only render the visible portions of the level. While a lot of the speed concerns I currently have can be alleviated using batch rendering, this will stop working the instant I put in details that can't be batch rendered. Consequently, I want the algorithm to be as close to perfect as possible, only rendering rooms that are visible and no others.
This is where Tile-Based Discrete Wavefront Propagation comes in. The central idea is to represent the viewing frustum as a wave that flows through the level, getting split up when it hits corners. If properly implemented, this will be exact, only rendering rooms that are visible and no others. It turns out that you can implement this in O(n) time, but we can't use the naïve recursive flood fill anymore, because it's depth-first. If we want to simulate a wave, we need to render the grid using a breadth-first search, to simulate it slowly spreading outward. Breadth-first search is implemented using a queue, by simply pushing all neighboring nodes onto the queue and rendering them until the queue is empty.
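The breadth-first variant can be sketched like this (again, names are mine): the explicit queue replaces the call stack, so tiles get visited in expanding rings - the wave.

```javascript
// Breadth-first flood fill: a queue replaces the recursion, so tiles
// are rendered in expanding "rings" radiating out from the start.
function bfsRender(map, startX, startY, renderTile) {
  const visited = map.map(row => row.map(() => false));
  const queue = [[startX, startY]];
  visited[startY][startX] = true;
  while (queue.length > 0) {
    const [x, y] = queue.shift(); // dequeue the oldest tile
    renderTile(x, y);
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = x + dx, ny = y + dy;
      if (ny < 0 || ny >= map.length || nx < 0 || nx >= map[0].length) continue;
      if (map[ny][nx] === 0 || visited[ny][nx]) continue;
      visited[ny][nx] = true;
      queue.push([nx, ny]);
    }
  }
}
```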
As the wave propagates through the level, we adjust the left and right angles of each individual wavefront according to the walls that we hit. This requires that we deal with two possible cases: the case where a wall is in front of the wave as it propagates through the level, and the case where it isn't. The only time the wave can actually be split up is when a wall is in front of it - the rest of the time, it is simply clipped on the sides. "Front" and "side" are defined based on what direction the wave came from.
Starting with the case where there isn't a wall in front, we check to see if there is a wall on the right. If there is, we check the angles of its corners. If either angle is inside the culling frustum, we change the right-angle to the one that is "farthest inside" (done in code by simply checking each angle one after the other). Then we do the same for the left wall, this time changing the left-angle, if appropriate. Then we send the updated angles to the neighboring tiles - the left one gets (left-angle, new-right-angle), the front gets (new-left-angle, new-right-angle), and the right gets (new-left-angle, right-angle).
If there is a wall in front, everything stays the same (note that the front wall has its own set of corners you must check), but the angles that get sent to the neighboring tiles change. The left wall gets (left-angle, new-left-angle), the front doesn't exist so we ignore it, and the right wall gets (new-right-angle, right-angle). This effectively splits the wave down the middle, but it makes things complicated. Before, if a new left-angle was outside the culling frustum, we would just set it to the old left-angle. This doesn't work anymore, because we're splitting the wave, which means we require a new left-angle that isn't equal to the old left-angle. To solve this, if the new left-angle fails to exist, we set it to the old right-angle instead of the old left-angle, and vice versa for the new right-angle.
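Written out directly, the two cases above hand the following angle pairs to each neighbor (a transcription of the rules just described, with hypothetical names; the angles are plain numbers here, ignoring wrap-around):

```javascript
// The two propagation cases. [left, right] is the wave's current
// angle pair; newLeft/newRight are the clipped angles computed from
// the wall corners. Returns the pair handed to each neighboring tile.
function propagate(left, right, newLeft, newRight, wallInFront) {
  if (!wallInFront) {
    // The sides only get clipped; the wave stays in one piece.
    return {
      left:  [left, newRight],
      front: [newLeft, newRight],
      right: [newLeft, right],
    };
  }
  // A wall in front splits the wave in two around it.
  return {
    left:  [left, newLeft],
    front: null, // blocked - there is no front tile to propagate into
    right: [newRight, right],
  };
}
```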
Conceptually, this isn't too complicated, but the actual implementation gets messy very fast, because angles are a giant pain in the ass to work with. We have to seed the initial queue with the neighboring tiles, and deal with getting proper frustum culling working, which is far more difficult than it seems. This requires a sizable number of utility functions, so we'll look at those first:
First, the modulo operators in most programming languages are actually remainder operators. They do not perform mathematically correct modulo, and their behavior when you feed them negative numbers is not what you'd expect. The first thing we do, then, is define a modulo function that actually behaves like the mathematical modulo operator. We then use this to build a circular queue that resizes itself when necessary. When resizing, the queue must move all items to the right of the index over, which is a costly operation, so we let it keep its size across frames. This makes the number of resize operations essentially zero.
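A one-line version of such a modulo function (a sketch of the idea, not the original code):

```javascript
// Mathematically correct modulo: the result is always in [0, n),
// unlike the % remainder operator, which keeps the dividend's sign.
function modulo(a, n) {
  return ((a % n) + n) % n;
}
```

For example, `modulo(-3, 5)` is 2, whereas `-3 % 5` is -3 in JavaScript.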
The next few functions deal with angles. Angles are inherently circular, and this causes all sorts of problems. If our left-angle is 355° and our right-angle is 5°, the distance between these two angles is 10°, not 350°. There is a standard method of getting the absolute distance between two angles using the floating-point modulo operator, which is implemented in getAngleDist(). This is all well and good, but we have defined left and right angles, which means we need to know whether something is on the left-hand side or the right-hand side. This requires a signed angular distance function, which makes things much more complicated, because, once again, the floating-point modulo operator is not actually modulo - it's a remainder.
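One common form of that absolute-distance method looks like this (the name matches the getAngleDist() mentioned above, but the body is my reconstruction, not the original code):

```javascript
// Absolute distance between two angles in radians, always in [0, pi].
// Uses the floating-point remainder to handle the circular wrap.
function getAngleDist(a, b) {
  const d = Math.abs(a - b) % (2 * Math.PI);
  return d > Math.PI ? 2 * Math.PI - d : d;
}
```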
So, we need to implement a proper floating-point modulo operator. The standard fmod() function is implemented using the following formula:
fmod(n,d) = n - trunc(n/d)*d
It's trivial to change this formula into one that gives us the correct behavior (remember, truncation is not flooring!):
mod(n,d) = n - floor(n/d)*d
We can use this to define a function such that, as long as b-a, when forced into the range [0,2π), is less than π (or 180°), we get a positive distance. If it goes past π, the result becomes negative and heads back towards 0, just like in the absolute distance function, but now with the proper sign. We will use this later to determine if a tile crosses over our angular culling frustum. getdx() and exists_dir() are used to either bump x/y coordinates according to a direction, or to determine whether a specific tile exists. Now we get to the real function:
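Putting the floored modulo and the signed distance together, plus helpers in the spirit of getdx()/exists_dir() (all names and the direction ordering here are my assumptions):

```javascript
// Floored floating-point modulo: mod(n,d) = n - floor(n/d)*d.
function fmodFloor(n, d) {
  return n - Math.floor(n / d) * d;
}
// Signed angular distance: positive while b-a, forced into [0, 2pi),
// is under pi; past pi it goes negative and heads back toward 0.
function getAngleDistSigned(a, b) {
  const d = fmodFloor(b - a, 2 * Math.PI);
  return d > Math.PI ? d - 2 * Math.PI : d;
}
// Direction helpers: dir 0..3 = +x, +y, -x, -y (assumed ordering).
const DIRS = [[1, 0], [0, 1], [-1, 0], [0, -1]];
function getdx(dir) { return DIRS[dir][0]; }
function getdy(dir) { return DIRS[dir][1]; }
function existsDir(map, x, y, dir) {
  const nx = x + getdx(dir), ny = y + getdy(dir);
  return ny >= 0 && ny < map.length && nx >= 0 && nx < map[0].length;
}
```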
First, we push our 4 neighboring tiles, provided they exist, onto the queue, seeding them with appropriate directions that radiate outward from our starting point. Then we start running through the queue, doing an initial check to see if we have already dealt with each tile. If not, we check to make sure that the left-angle is really less than the right-angle, and then go into standard frustum culling. In order to figure out if a tile is within our view, we need to ensure that the entire tile is either to the right or to the left of our angular viewing frustum. This means that all 4 corners must have angles outside of our frustum, and they must all be on the same side. If they are not on the same side, the cone could have passed through the center.
This is what the first two if statements do. The problem is that a tile directly behind us will also be considered to be straddling the angular range, because our distance function wraps at 180°. To solve this, we have a third if statement that throws away everything that is more than 90° (or π/2) away from the center of our frustum. This means the maximum horizontal field of view is 180°. Note that this culling is independent of the algorithm - you'd use this same logic if you were only doing simple frustum culling.
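A self-contained sketch of that cull test (the helper and function names are mine, and this is one way to realize the three checks, not the engine's actual code):

```javascript
// Signed angular distance, reproduced here so the sketch stands alone.
function signedDist(a, b) {
  let d = (b - a) % (2 * Math.PI);
  if (d < 0) d += 2 * Math.PI;          // force into [0, 2pi)
  return d > Math.PI ? d - 2 * Math.PI : d;
}
// A tile is culled only when all four corner angles are outside the
// [left, right] wedge AND on the same side of it, or when every
// corner is more than pi/2 from the frustum's center - the third
// check, which discards tiles behind the viewer.
function tileCulled(corners, left, right) {
  const center = left + signedDist(left, right) / 2;
  const allLeft   = corners.every(c => signedDist(left, c) < 0);
  const allRight  = corners.every(c => signedDist(c, right) < 0);
  const allBehind = corners.every(c => Math.abs(signedDist(center, c)) > Math.PI / 2);
  return allLeft || allRight || allBehind;
}
```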
If the tile survives the culling function, we mark it as visited. It's important that we only mark a node as visited after we have decided to render it, because of a very nasty edge case that can happen during the breadth-first traversal. It's possible for a tile that is rendered on one side of a split frustum to reach a visible tile on the other side of the split and erroneously mark it as invisible, using its own frustum. The tile actually is visible, but only to the frustum on the other side of the split.
Then we check the appropriate corners and assign langle appropriately. The neighbors are iterated and the new values passed on as needed. Once this phase of the algorithm is complete, we call reversefill(), starting on the same tile we called drawmap() with. This uses the standard fill algorithm to reset the "visited" marks on the map. By doing this, we avoid having to set all 2500 nodes of a 50x50 map to 0. The algorithm is now complete.
Because this algorithm runs in O(n) time and is exact, it is asymptotically optimal. The problem is that it is only optimal in a single-threaded sense. The breadth-first iteration technique allows us to process each individual level of the search concurrently, but this only reduces the worst-case complexity from O(mn) to O(max(m,n)). 50 is a lot better than 2500, but that holds only for a very, very large room. If we're dealing with a hallway, the concurrent performance is identical to the single-threaded performance!
Consequently, while this is useful for tight corridors, the algorithm itself isn't really that useful outside of the game's narrow domain. Once the rooms start getting large enough, you're probably better off brute-forcing most of it and using approximations. However, the algorithm is exact, which is very important for calculating enemy line-of-sight. So if you're dealing with a grid-based game and need to figure out whether an enemy has a direct line of sight, this technique allows you to make a precise calculation, since it only hits tiles that are visible, and no others. This gives it an advantage over naïve raytracing, which requires many rays and is prone to giving false negatives when dealing with narrow hallways.
Unfortunately, it still breaks when you try to make it work with FoVs greater than 180°, so unless you split them up into 90° chunks, it's pretty useless for things like lighting. I'm probably not the first to come up with the algorithm, either; someone probably invented it in their garage in 1982. Oh well.
Posted on July 11, 2013 at 22:34
Aurora Theory has been released! Buy it on bandcamp for $9, or for $10 on iTunes and Google Play. The album is also available on Spotify, last.fm, and other online radios, and can be previewed on YouTube.
Aurora Theory has been 4 years in the making, a compilation of all the songs I managed to make in the middle of attending university. The earlier songs have been extensively improved, and all songs have been remastered for the album's release. This album represents nothing less than the highest quality work I can possibly manage at this point in my career. I'm not sure if anyone here even remembers me, but what the hell, why not! I'm only posting this on everything forever anyway.
1. Soar (Original Mix)
2. Aurora Theory (Redux)
6. Critical Damage (Extended Mix)
8. Dream Among The Stars
9. At The Nationals
10. The Cloud River
11. The Piano And The Cello
13. Spiral Nebula
14. Next Year
Giant List of Free Samples
Posted on December 14, 2012 at 23:19
Solar Noise - EP
Posted on December 07, 2012 at 20:12
My Solar Noise EP is now available for $3.99 on bandcamp, Google Play, Amazon MP3, and a bunch of other online stores I've never even heard of! It has 5 tracks - Solar Noise, Tidal Forces, and 3 alternate mixes, including a remix by my friend Apelsin, who is way better than me (and also mastered the album). Here is the track list:
1. Solar Noise (Original Mix)
2. Solar Noise (Club Mix)
3. Solar Noise (Apelsin Remix)
4. Tidal Forces (Original Mix)
5. Tidal Forces (Club Mix)
A full-length preview can be found on YouTube, and you may preview each individual track on bandcamp before making a purchase. Note that bandcamp is the only website that I know for sure offers lossless versions of the tracks. If you are a DJ in charge of a radio station, send me a message either on twitter or by e-mail and I will give you a promo code to download the club mixes for free. If you are interested in a physical CD, tell me! If enough people are interested in CDs, I can make a print run of them.
What Is A Right Answer?
Posted on August 22, 2012 at 22:14
I find that modern culture is often obsessed with a concept of wrongness. It is a tendency to paint things in a black-and-white fashion, as if there are simply wrong answers and right answers and nothing in-between. While I have seen this in every imaginable discipline (including art and music, which is particularly disturbing), it is most obvious to me in the realm of programming.
When people aren't making astonishingly over-generalized statements, like trying to say one programming language is better than another without context, we often try to find the "best" way to do something. The problem is that we rarely bother to think about exactly what makes the best answer the best answer. Does it have to be fast? If speed were the only thing that was important, we'd write everything in assembly. Does it have to be simple? I could list a thousand instances where simplicity fails to account for edge cases that render the code useless. Does it have to be easy to understand? If you want something to be easy to understand, then the entire C standard library is one giant wrong answer that's being relied upon by virtually every single program in the entire world.
For a concept taken for granted by most programmers, defining what exactly makes an implementation "optimal" is incredibly difficult. A frightening number of programmers are also incapable of realizing this, and continue to hang on to basic assumptions that one would think should hold everywhere, when very few of them actually do. Things like "the program should not crash" seem reasonable, but what if you want to ensure that a safety feature crashed the program instead of corrupting the system?
The knee-jerk reaction to this is "Oh yeah, except for that." This phrase seems to underlie many of the schisms in the programming community. Virtually every single assumption that could be held by a programmer will be wrong somewhere. I regularly encounter programmers who think you should do something a specific way no matter what, until you ask them about making a kernel. "Oh yeah, except for that." Or a sound processing library. "Oh yeah, except for that." Or a rover on Mars. Or a video decoder. Or a raytracer. Or a driver. Or a compiler. Or a robot. Or scientific computing.
All these except-for-thats betray the fundamental failure of modern programming culture: there is no right answer. The entire concept of Right and Wrong does not belong in programming, because you are trying to find your way to a solution, there are billions of ways to get there, and the one that works best for your particular situation depends on hundreds of thousands of independent variables. Yet there is a "right answer" on Stack Overflow. There are books on writing "proper code". There are "best practices". People talk about programming solutions that are "more right" than others. There are black-and-white, right-and-wrong, yes-or-no questions pervading the consciousness of the majority of programmers, who foolishly think that you can actually reduce an engineering problem to a mathematical one, despite overwhelming evidence that you simply cannot escape the clutches of reality and how long it takes an electron to reach the other side of a silicon wafer.
If you ask someone how to do something the right way, you are asking the wrong question. You should be asking them how to solve your problem. You didn't do something the wrong way; you simply solved the wrong problem.
An Artist Trapped Inside A Software Engineer
Posted on August 19, 2012 at 07:47
Originally posted on my blog.
Almost a decade ago, I thought I wanted to make games. I began building a graphics engine for that purpose, since back then, there were almost no open-source 2D graphics engines using 3D acceleration. It wasn't until later that I discovered I liked building the graphics engine more than I liked building games.
Times have changed, but I continued to tinker away on my graphics engine while going to college and learning just how stupid the rest of the world is. In the most recent bout of astonishing stupidity, my country has decided it doesn't recognize political asylum for people it doesn't like. It wasn't until reality had begun a full-scale assault on my creativity and imagination that I truly understood why artists feel compelled to lose themselves in their imaginations.
My imagination. It is something I could not possibly describe in any meaningful way. Art exists because some things can't be described - they must be shown. And yet, few things in my imagination are my own. I hunt down talented artists and visionaries, lose myself in the worlds they have constructed, then take everything out of context and reconstruct my own worlds, perhaps based on another artist's vision, using the same concepts. I construct multiple visualizations, art styles, and game elements. My mental stage is fueled by awesome music, music that launches my imagination into incredible creative sprees. Sometimes I craft incredible melodies of my own, but rarely are they ever truly expressed in any satisfactory way in my music.
My life is one of creative frustration. I became obsessed with computer graphics as a way to realize my vision, but I wasn't interested in simply learning how to 3D model (which I happen to be terrible at, like everything else). I don't see the world as CGI, I see the world through the lens of a GPU. I look at things and ask, how might I render that? My imagination is not a static picture or movie, it's a world that's meant to be explored. Sometimes I play games for the storyline, or the gameplay, but the one thing that has always grabbed me is the ability to explore. I played Freelancer for 5 years, installed hundreds of mods, and was constantly enthralled simply by the exploration, the enormous universe, finding new systems, and discovering new places.
I can't draw a leaf. But I can create a mathematical model of it. I can calculate the textures and patterns, the branching veins, and how each has its own specular, diffuse, and transfer lighting functions. I can build abstractions and simulations, genetic recombinations and simplex noise algorithms. After I build tools to procedurally generate all the elements of a world, maybe then I can bring my imagination to life. But then, it's not really my imagination, it's what other artists inspire in me. I want to get as close to an artistic vision as possible, and beyond. I want to expand their artistic ideas and make them into something that is truly beautiful and inspiring, a clear extension of their vision, where its soul shines like a beacon instead of being buried under bureaucratic bullshit.
I am an artist who cannot draw. I'm a musician incapable of painting the sonic landscape of my imagination. I am a dreamer who has no dreams of his own. If I am a programmer, it is because programming is the only way for me to express my creativity. But programming itself is not simply a means to an end. Programming is my paintbrush, my canvas, and my palette. I know how to read x86 assembly. I have abused C++11 lambdas to create temporary closures to hold a mutable state. I've crafted architectures and APIs and object-inheritance schemes and functional undo/redo stacks and lockless queues and kd-trees. Programming is my art and my music, and every new language or algorithm is another instrument for me to use when building my symphony.
Yet, many programmers hold little respect for alternative opinions. People who don't conform to strict guidelines are viewed as either terrible programmers or "cowboy" programmers destined to bring ruin to every project they touch. Everything must follow protocol, everyone must do things this way or that way. Instead of celebrating our diversity in programming languages, we viciously attack each other as using a "terrible language". Perhaps I have simply been inside a strange anomaly where everyone is obsessed with corporate practices and coding standards instead of building things.
Or perhaps I'm an artist trapped inside a software engineer.
Coordinate Systems And Cascading Stupidity
Posted on July 25, 2012 at 04:39
Originally posted on my blog
Today I learned that there are way too many coordinate systems, and that I'm an idiot (but that was already well-established). I have also learned to not trust graphics tutorials, but the reasons for that won't become apparent until the end of this article.
There are two types of coordinate systems: left-handed and right-handed. By convention, almost everyone in math and science uses right-handed coordinate systems, with positive x going to the right, positive y going up, and positive z coming out of the screen. A left-handed coordinate system is the same, but positive z instead points into the screen. Of course, there are many other possible coordinate system configurations, each either right- or left-handed; some modern CAD packages have y pointing into the screen and z pointing up, and screen-space in graphics traditionally has y pointing down and z pointing into the screen.
If you start digging through DirectX and OpenGL, the handedness of the coordinate system being used is ill-defined, due to the reliance on various perspective transforms. Consequently, while DirectX traditionally uses a left-handed coordinate system and OpenGL uses a right-handed one, you can simply use D3DXMatrixPerspectiveRH to give DirectX a right-handed coordinate system, and OpenGL actually uses a left-handed coordinate system by default in its shader pipeline - but all of this is entirely dependent on the handedness of the projection matrices involved. So, technically, the coordinate system is whichever one you choose, but unlike the rest of the world, computer graphics has no real standard on which coordinate system to use, and so it's just a giant mess of various coordinate systems all over the place, which means you don't know what handedness a given function is for until things start getting funky.
I discovered all this because today I found out that, for the past 6 or so years (the entire time my graphics engine has existed in any shape or form), it has been rotating everything backwards. I didn't notice.
This happened due to a number of unfortunate coincidences. For many years, I simply didn't notice, because I didn't know which direction the sign of a given rotation was supposed to rotate in, and even if I had, I would have assumed this to be the default for graphics for some strange reason (there are a lot of weird things in graphics). The first hint came when I was integrating with Box2D and had to reverse the rotation of its bodies to match up with my images. This did trigger an investigation, but I mistakenly concluded that it was Box2D that had it wrong, not me, because I was using atan2 to check coordinates, and I was passing them in as atan2(v.x, v.y). The problem is that atan2 is defined as float atan2(float y, float x), which means my coordinates were reversed and I was getting nonsense angles.
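The trap is easy to reproduce: atan2 takes (y, x), not (x, y), and swapping the arguments still produces plausible angle-shaped numbers, which is exactly why the mistake survives a casual inspection.

```javascript
// atan2 takes y first, then x. Swapping the arguments still returns
// an angle - just the wrong one.
const v = { x: 1, y: 0 };                // points straight along +x
const correct = Math.atan2(v.y, v.x);    // 0 radians
const swapped = Math.atan2(v.x, v.y);    // pi/2 - plausible, but wrong
```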
Now, here you have to understand that I was using a standard left-handed coordinate system, with y pointing up, x pointing right, and z into the screen. The thing is, I wanted a coordinate system where y pointed down, and so I did as a tutorial instructed and reversed all of my y coordinates in the low-level drawing functions.
So, when atan2(x,y) gave me bad results, I mistakenly thought, "Oh, I forgot to reverse the y coordinate!" Suddenly atan2(x,-y) was giving me angles that matched what my images were doing. The thing is, if you switch x and y and negate y, atan2(x,-y)==-atan2(y,x). One mistake had been incorrectly validated by another mistake, caused by yet another mistake!
You see, by inverting those y coordinates, I was accidentally reversing the result of my rotation matrices, which caused them to rotate everything backwards. This was further complicated by how the camera rotates things - if your camera is fixed, how do you make it appear to rotate? You rotate everything else in the opposite direction!
Hence, even though my camera was rotating backwards while looking like it was rotating forwards, it was actually being rotated the right way, for the wrong reason.
While I initially thought the fix for this would require some crazy coordinate system juggling, the actual solution was fairly simple. The fact is, a coordinate system with z pointing into the screen and y pointing down is still right-handed, which means it should play nicely with rotations from a traditional right-handed system. Since the handedness of a coordinate system is largely determined by the perspective matrix, reversing y coordinates in the drawing functions was actually reversing them too late in the pipeline. Hence, because I used D3DXMatrixPerspectiveLH, I had a left-handed coordinate system, and my rotations ended up being reversed. D3DXMatrixPerspectiveRH negates the z coordinate to switch the handedness of the coordinate system, but I like positive z pointing into the screen, so I instead hacked the left-handed perspective matrix itself, negating the y-scaling parameter in cell [2,2], then undid all the y-coordinate-inversion insanity that had been inside my drawing functions (you could also negate the y coordinate in any world transform matrix sufficiently early in the pipeline by specifying a negative y scaling in [2,2]). Suddenly everything was consistent, and rotations were happening in the right direction again. Now the camera rotation actually required the negative rotation, as one would expect, and I still got to use a coordinate system with y pointing down. Unfortunately, it also reversed several rotation operations throughout the engine, some of which were functions that had been returning the wrong value this whole time so as to match up with the incorrect rotation of the engine - something that will give me nightmares for weeks, probably involving a crazed rabbit hitting me over the head with a carrot screaming "STUPID STUPID STUPID STUPID!"
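The hack can be sketched like this - a reconstruction, not the engine's code. The layout follows the documented D3DXMatrixPerspectiveFovLH row-vector convention; the 1-indexed cell [2,2] is m[1][1] below, and the function name is mine:

```javascript
// Left-handed perspective matrix (D3DXMatrixPerspectiveFovLH layout)
// with the y-scale in cell [2,2] negated, so y points down while the
// overall coordinate system remains right-handed.
function perspectiveLH_yDown(fovY, aspect, zn, zf) {
  const yScale = 1 / Math.tan(fovY / 2);
  const xScale = yScale / aspect;
  return [
    [xScale, 0,       0,                    0],
    [0,     -yScale,  0,                    0], // negated: y goes down
    [0,      0,       zf / (zf - zn),       1],
    [0,      0,      -zn * zf / (zf - zn),  0],
  ];
}
```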
What's truly terrifying is that all of this was indirectly caused by reversing the y coordinates in the first place. Had I instead flipped them in the perspective matrix itself (or otherwise properly transformed the coordinate system), I never would have had to deal with negating y coordinates, I never would have mistaken atan2(x,-y) for being valid, and I never would have had rotational issues in the first place.
All because of that one stupid tutorial.
P.S. the moral of the story isn't that tutorials are bad, it's that you shouldn't be a stupid dumbass and not write unit tests or look at function definitions.
Posted on July 21, 2012 at 11:15
Originally posted on my blog
For me, watching Microsoft for the past 5 years has been a lot like getting on a train and immediately getting off at the next stop, only to watch it explode 5 minutes later. You see, I was an intern there back in the summer of 2008, right before my senior year of high school. What I witnessed there I will never forget for the rest of my life, and I continue to consider it the gold standard of how not to run a software development company. I live 10 minutes from Microsoft HQ in Redmond, Washington, and ever since I learned how to program I thought I wanted to work there. Until I actually did.
I was assigned to a testing team for which I was ridiculously overqualified. Their job was to write automated tests and do other test-related work on a certain product. This product is, to this date, the most horrifying monstrosity I have ever seen in programming. If you tried to cold-run the debug build without any optimization, it took 20 MINUTES, on average, to start up. I actually tried it once just to see if it would really happen (it only took 18 minutes on my particular computer). I don't know what the fuck it was doing, but it was horrifying.
They managed to get this down to ~3 minutes with some weird DLL-binding trick that shouldn't exist. Compiling to release mode under full optimization got it slightly below a minute. This is perhaps the part that disturbed me most of all - I can see taking 20 minutes to load some stupid amount of data for an equally stupid reason, but 20 minutes to load DLLs? CODE?
But when they invented 2 useless programming languages for this specific project for no damn reason, I suppose that was somewhat inevitable.
Did I mention that the project was written in something like 5 different languages, two of which were created specifically for it? They were all managed, of course, but that didn't stop the program from leaking memory out of its ears. When I was brought on, there were some very severe memory leaks from moving a window around, somehow, and they had only just managed to reduce the leak from something truly horrifying to a more manageable 4 MB/hour or something to that effect. Forget trying to fix memory leaks; the best they could do, in a bunch of managed languages, was reduce the amount of memory being thrown into the shredder. It was unspeakably bad.
Microsoft is infamous for its ridiculous meetings, and for good reason. We had meetings roughly every other day. My team leader mentioned he wished he could code more, but almost all his time was spent in meetings. I had to learn to use Outlook just to keep track of which meeting was where, and when. We did occasionally have meetings on when to have meetings. They were always astonishingly boring. One particular event, however, stood out above all the rest.
One fateful day, we had a meeting where one of the major discussion points on the to-do list was that a function name was more than 50 characters long. No, I'm not kidding. First they discussed, "Gee, is this name too long? Is 50 characters too long, or can we get away with it? Should we just try to change it first?" Then they spent about 10 minutes trying to figure out what to rename this stupid function to, as I sat there seriously considering starting my own company, because my god, I couldn't fuck it up this bad if I tried.
Of course, every now and then I got to tag along to a higher-profile meeting instead of the simple end-of-week team meetings, where my superior's superiors would be talking to their superiors. Did you know they had an entire program just for figuring out who reported to whom? This was back when Bill Gates was still sitting at the top of the pyramid, to whom only Steve Ballmer reported. At the time, it seemed pretty silly, but now that I've seen Steve Ballmer drive Microsoft off a cliff after Bill Gates left, perhaps that solitary, lonely line indicating Ballmer reporting to Gates actually did mean something.
Naturally, as a result of this, there were protocols in place for whom to escalate issues to when someone was out sick. If my team leader was gone, everyone below him reported directly to our team leader's boss, and if that guy was gone, everyone below him reported to the next guy up, and so on and so forth. For one week, both my team leader and his superior were gone, so I was actually reporting to my superior's superior's superior. It was during this week that I got to visit a meeting between department heads and other very important people, because my superior's superior's superior thought it would be interesting (it was, for all the wrong reasons). But during that week, there was a 3-hour period when my superior's superior's superior was also gone. They weren't really sure what to do about that, since at that point I would technically have to report any problems to the department head, or my superior's superior's superior's superior.
I was really careful not to break anything for 3 hours.
The Managed Cult
I was subjected to multiple code reviews. At almost all of them, they thought my code was very good, except that in a lot of places it was too clever, or in at least one case, too elegant (no, seriously). So I had to keep writing clearly subpar code in case I "got hit by a truck" and Mr. Dumbfuck was assigned to maintain my program. Every single time I was told this, I couldn't help but wonder why they kept hiring dumbasses in the first place. I couldn't use any interesting features of C# to craft elegant, rock-solid solutions; I had to do everything the hard, stupid way.
The irony of this is that the project I was working on was, by definition, using C# in a way it was not designed for. You see, it was supposed to be tying into some low-level Windows metric-analysis API. I thankfully inherited a large part of the actual API interface from someone else's attempt, and naturally it was basically an enormous hack. Fully half the program was simply PInvoke and structs using insane alignment hacks to mimic unions. In one case, the documentation for the marshaling was wrong. In another case, the function description for the native API was wrong. In the case I had to deal with, the function itself didn't actually do what the documentation said it did.
So my project was one giant hack, and in my code reviews I was forced to use simplistic coding practices. Of course, at the time, I had been learning C++ for several months, and whenever any C++ idioms snuck into my code, I would be sternly chastised and reminded that "C++ is a bad language," which really drove home for me the fact that these programmers simply couldn't fathom a world outside of Microsoft, Windows, and the neat little safe playground that had been constructed for them. Obviously, C++ was a bad language, but that won't stop us from desperately using C# in ways it should never, ever be used.
Near the end of my internship, as I was digging into deeper and deeper metrics, I eventually found out that the thing they had asked me to do was actually completely impossible. Certain critical API calls for an entire class of metrics were simply not available to non-native code. At all. No PInvoke, no API calls, no workarounds, nothing. So in perhaps one of the most hilarious revelations I have ever been part of, the team realized the project had to be rewritten in C++.
After graduating high school and just barely being accepted into the UW by some cosmic stroke of luck, I once again applied to the internship program. I halfheartedly requested to be put in something having to do with DirectX or graphics or high-performance computing, but when I stepped into the interview, I was told I'd be working in the Office division. Despite managing to figure out their retarded brain-teaser question, which had absolutely nothing to do with how well I coded, I couldn't bring myself to care. Was this incredibly boring, well-paying job what I really wanted?
I lost the position to a close friend, and that was the end of my Microsoft career. I was secretly relieved, and used the opportunity to throw myself at my stupid little 2D graphics engine. By the time applications for college internships were due, I had realized that any sort of normal programming job just wasn't for me. Years later, it became apparent that I had narrowly avoided a catastrophic trainwreck.
Then I was rejected from the UW computer science major. But that's another story.
Puzzle Game Private Beta
Posted on July 18, 2012 at 08:39
I'm doing a private beta of a puzzle game prototype. It has almost no graphics and no levels, but almost all of the functionality is there. Message me here, on Twitter, or via e-mail if you are interested. If you aren't really interested in the private beta, there will be a public beta in a few weeks, after the game has art, levels, and a more finalized design.
What Do You Hate About Game Design?
Posted on July 11, 2012 at 01:24
More specifically, what is the most infuriating or otherwise painful part of game development? Listing multiple things is encouraged, with the most annoying listed first.