# Using LuaRocks

I just got around to publishing my first LuaRocks-installable module. It's not too difficult, but I'd like to document the main links and steps for future reference.

Here's the general process:

• Write a rockspec file, generally named <module_name>-<version>.rockspec.
• Run luarocks lint <rockspec_filename> and sudo luarocks make to debug it.
• Tag your repo and push that tag to GitHub; there is a GitHub-free option as well.
• Make sure you've registered at luarocks.org, and grab your API key from your luarocks API key page.
• Run luarocks upload --api-key=<your_key> <rockspec_filename>.
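For reference, here's a minimal rockspec sketch for the first step. The module name, version, and URLs are hypothetical; a real module may need more fields:

```lua
-- mymodule-1.0-1.rockspec (name, version, and URLs are made up).
package = "mymodule"
version = "1.0-1"
source = {
  url = "git://github.com/you/mymodule",
  tag = "v1.0"  -- The git tag you pushed.
}
description = {
  summary = "A one-line summary of the module.",
  homepage = "https://github.com/you/mymodule",
  license = "MIT"
}
dependencies = {
  "lua >= 5.1"
}
build = {
  type = "builtin",
  modules = {
    mymodule = "mymodule.lua"
  }
}
```

The `builtin` build type handles pure-Lua (and simple C) modules without a makefile, which is usually all you need for a small library.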

If I were you, I'd also test things from the end-user installation perspective. On a Mac, you can do this to uninstall the already-existing module:

• sudo rm -rf /usr/local/lib/luarocks/rocks/<module_name>
• sudo rm /usr/local/lib/lua/5.1/<your_module_files>

It's vaguely possible that sudo luarocks remove <module_name> does this for you, but why not just do it directly? Then run:

sudo luarocks install <module_name>


Instructions for creating a rockspec file are here. Instructions for uploading it to be publicly-usable are here.

As a reminder, this is how you set and push a tag to github:

git tag v1.0
git push --tags


After all that, your module gets its very own cozy page on luarocks.org at

luarocks.org/modules/<your_luarocks_username>/<module_name>


Wicked awesome.

# Apanga devlog #2

Path-finding bunnies

I've been integrating path-finding into Apanga. The smartest enemy so far is a bunny that can chase you around turns or up and down complicated paths.

The first step was to dynamically create a navigation graph as land is loaded into memory. Here's a debug visualization of a local subgraph of this data around the player:

Check out how the graph climbs those stairs. Nice.

Building this graph was just the first step. So that an enemy can follow you, it needs to know when you're visible and it needs to find a short path through the graph to get to you. The visibility algorithm is essentially ray-marching through block space, and the search algorithm is based on ideas from the Apanga devlog #1.

I also more formally set up enemy behavior as a state machine with triggers to switch between states. The bunny class now switches between the wandering and chasing states based on player visibility. I plan to add an attacking state where the bunny can actually inflict harm on the player.

Here's a demo of the path-finding in action:

The current paths are all strictly grid-based, so the bunny turns at weird 90 degree angles along the way. I plan to smooth that path out in the future.

I also plan to support another graph variant aimed at the navigation of larger creatures - specifically player-sized creatures. The graphs are different since small creatures can move through smaller spaces.

## Big-picture thoughts

This is the first significant step toward intelligent non-player creatures. One question that has come up with Apanga is: How is this different from Minecraft? Two thoughts in reply:

1. Minecraft defined a new genre. Copying is boring, but being in-genre is fine.

2. Apanga is a world with its own story, character, and quests.

I honestly think Minecraft is so good - along with some predecessors, like Dwarf Fortress, that helped inspire it - that together they have opened a new arena of game design. To be clear, what defines "the Minecraft genre" to me is being an editable large voxel world. Crafting, modding, and procedural generation are all major components as well, but I see them as less critical.

There will be many games that look like Minecraft. Many will be boring because they're just copies or don't add enough to be fun. Just like any genre of any media, the genre is a setting, and what counts is what you put in it.

What Apanga puts into the game world is a set of characters that you can connect with. I want the characters to have simulated emotions, relationships, goals, desires, and histories. When you help them on a quest, I want the characters to change and feel different. Their emotional levels will adjust, how they feel about you will adjust, what they want will change. Games have been working in this AI direction for a while now, and I hope to contribute some original ideas that exceed players' expectations.

# Some Kickstarter numbers

I plan to eventually have a Kickstarter campaign to help fund the creation of Apanga. I've previously run an unsuccessful campaign, which was illuminating. I was sad that the previous campaign didn't work out, but I connected with some passionate backers, and I walked away feeling that I was capable of succeeding - I simply hadn't pulled it off that time.

I keep asking myself a mountain of questions about how to prepare for this campaign. This post is about finding data around two key questions:

• What is a reasonable fundraising goal for the campaign?

and

• What is a good price point for the game itself?

I found this page full of KS data from 2014, including this chart:

Games are a huge KS category, coming in 3rd place in terms of total successfully funded dollars at $89 million. With 1,980 projects funded, that's an average of about $45k per project.

This page has even more data. About 33% of game projects succeed. Not amazing odds, but I'm hoping that if I pregame enough my final odds will be much higher than that.

Some great analysis (1 | 2) has been done by Michael C. Neel, who produced this graph, which is specific to video games on Kickstarter:

Weeee math :)

# Finding memory leaks on mac os x

Mac OS X has a few nice ways to avoid or isolate memory problems. To get to the good stuff, here's an awesome way to find leaks from bash:

$ MallocStackLogging=1 ./my/app > /dev/null &
<bash echoes the pid>
$ leaks <pid>

## Using ffi

I ran into trouble here because of symbol visibility at runtime. Basically, the LuaJIT engine needs to be able to get the memory address of your C functions at runtime based on their names, and sometimes you need to modify your code to make sure the symbols maintain visibility when your app is built.

Specifically:

In Xcode, everything worked beautifully for me in the default Debug configuration. Then I switched over to Release, and suddenly ffi was broken. Xcode has a flag - which took forever to locate - that hides your symbols in release mode but not debug mode. You can circumvent it by prefixing your ffi-callable function declarations and definitions with __attribute__((visibility("default"))). Alternatively, you can turn off symbol hiding entirely by searching for Symbols Hidden by Default under Build Settings and setting that boolean to No; then all your functions should be ffi-discoverable without needing the attribute.

In Visual Studio, add __declspec(dllexport) before the declarations and definitions of functions you want to be ffi-callable.

To make this easier in my cross-platform code, I set up the following:

#ifdef _WIN32
#define lua__callable __declspec(dllexport)
#else
#define lua__callable __attribute__((visibility("default")))
#endif


so that a function I want to be Lua-callable is declared and defined like this in my C code:

lua__callable int my_func(int i);   // In the header file.

lua__callable int my_func(int i) {  // In the source file.
  return 3;
}


Please email me if you use this note but have any trouble or if you have any suggestions to make it easier to use!

Have fun!

# Collision detection

Here's a cool page on 2d collision detection. I like the page because it has clear animations, it's written in coffeescript, and even better, it's actually written in literate coffeescript. I don't really know what literate coffeescript is, but here it is.

This is a random image from the page to give you a small preview:

# Walking Paths

I'm learning how to walk.

At least, I'm learning how to make the player walk smoothly in Apanga. It's much more difficult than I expected it to be. I blew through the first three iterations immediately:

• Jump directly to eye height over the foot point.
• Put the eye height at a linear-averaged point based on the nearest 2x2 grid of ground blocks.
• Tweak the last one to simulate a spherical base instead of a linear (prism-like) base.

Because the blocks in Apanga are conceptually smaller than those in Minecraft, you often don't need to jump, and you'll be walking over blocks constantly. The motion needs to feel nice. The above tricks don't cut it.

In order to improve the current setup, I've put together a small tool in Apanga to record the player's footpath and illustrate where they've been:

See how the yellow line wiggles up and down as the player walks up the block slope? Yeah, that's no good. It feels like your head is spazzing out as you walk.

I gotta work on that.

# Jose Echevarria does cool stuff

Every once in a while I meander across a cool SIGGRAPH paper. Today I found a whole blog-full, so I thought I'd mention it.

Here are a few screenshots to get a preview:

These are from Jose Echevarria's blog. Check it out.

# GL_TIMESTAMP doesn't work

I recently added GPU cycle metrics to Apanga. Basically, I measure, in milliseconds, how long it takes on the GPU to render a single frame, and then report both a 60-frame average time, and a 60-frame max time; I want both of these numbers to stay low.

There's a theoretically supported OpenGL call that would be perfect for this. It looks like this:

glQueryCounter(my_query, GL_TIMESTAMP);


This puts an OpenGL command in the queue to record a timestamp on the GPU when that command is hit. It's basically like executing this command:

right_now = time(NULL);


except that glQueryCounter is asynchronously executed on the GPU - like most OpenGL commands - so that the time interval you see between two GPU timestamps is completely independent of the time interval between when the CPU executes the two glQueryCounter statements. (That last sentence is confusing if you don't know that most OpenGL commands are executed twice - once on the CPU and then later on the GPU.)

Now we get to the interesting part: the above OpenGL command doesn't work.

It always returns 0.

I thought I was doing something wrong, and it was just my fault for not understanding things correctly. But after much investigation, I'm beginning to suspect it's actually just not implemented by my driver/card combo - and probably is left unsupported on other driver/card combos as well.

Hint 1: The officially documented example code also returns 0 for all timestamp queries.

Hint 2: The closely related GL_TIME_ELAPSED queries work fine for me, which tells me I have some clue how to work this stuff.

Hint 3: A number of other people have had the same problem and not been able to solve it.

So, just in case you're stuck trying to use GL_TIMESTAMP via glQueryCounter, I'd recommend switching over to GL_TIME_ELAPSED queries. They're a bit weird and more involved to use, but at least they work. (Huh, I think I just figured out OpenGL's modus operandi in that last sentence.)
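For reference, here's roughly what a GL_TIME_ELAPSED query looks like. This is a sketch, not runnable on its own - it assumes an active OpenGL 3.3+ context (or ARB_timer_query support):

```c
/* Sketch only: assumes a current GL 3.3+ context. */
GLuint query;
GLuint64 ns;
glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED, query);
/* ... issue the draw calls for the frame ... */
glEndQuery(GL_TIME_ELAPSED);

/* Read back the elapsed GPU time in nanoseconds. Ideally do this a
   frame or two later so the read doesn't stall the pipeline. */
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);
double ms = ns / 1e6;
```

The main extra bother compared to GL_TIMESTAMP is that begin/end pairs can't nest, so measuring overlapping intervals takes some bookkeeping.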

# oswrap

I wrote a little library to make it easier to simultaneously write the mac and windows versions of Apanga. It's called oswrap and it's open source.

This library looks smallish, which is always one of my design goals for a library. I think a good library is one that:

• you can learn very quickly;
• you can easily use correctly, can hardly use incorrectly; and
• you can reasonably dive into if needed.

The third point is the one most often ignored by library developers. It's useful because all abstractions are leaky. If you have even a glimmer of understanding of what's behind a library, then it all makes a lot more sense - the design, the quirks, everything.

Good code is about finding simplicity, not hiding complexity.

Check out the library here.

# Interesting Bug #1

A few days ago, I encountered some strange behavior. Whenever I typed anything on my laptop, the typed string would come out in reverse. If I typed "hello" the string would appear as "olleh".

My laptop had been working fine just a few moments earlier. I clicked into settings to see if I had somehow accidentally switched to some right-to-left language setting, but I hadn't. I tested a couple different apps and the effect was visible in all of them - it wasn't app specific.

I scratched my head and stepped back from my laptop to think about it. I walked over to pet my cat, Zorro, who had not long ago jumped up to relax on a nearby surface.

He was lounging comfortably on my bluetooth keyboard.

On the left arrow key.

# The hardest and easiest way to be a better coder

Notes on a communication culture that denies our humanity

Imagine visiting an alien planet that is much like modern Earth. Its denizens are humanoid. They love music and stories. Their technology supports things like newspapers, trains, and building complex architectural feats. Their culture includes a reverence for well-organized education and an impressive system of medical care and research. Yet, despite their other advances, they have yet to discover electricity.

No computers, no internet.

No video games.

I think our modern programming culture has an egregious missing piece like this society's missing electricity. What we're missing is an appreciation of great everyday communication skills.

This might just be the easiest way to improve yourself as a programmer because the first steps you can take are simple and greatly rewarding. In the long run, though, it may also prove to be one of the hardest because it's a surprisingly deep and nuanced subject.

## The currency of code

Coder-to-coder communication, not coder-to-cpu, is the currency of programming.

It's easy to think about programming as telling the processor exactly what to do. When I'm actually coding, and not designing or talking or emailing or commenting, that's what I focus on - the precise steps that will be followed.

Stepping away from the trees, the forest is the experience of the end user. Certainly, the goal of programming is to solve a specific problem for the user, right? So where do coder-to-coder communications fit in?

The reality is that programming is an ongoing process. An economy is an ongoing exchange of goods and services - it doesn't end. There is no ultimate state where each person has received everything they want, and the economy is dismantled. Similarly, a useful codebase evolves, and that evolution itself is what programming really is - not a means to some theoretical snapshot of perfect source.

The only thing flowing between the agents of this process are noisy, undervalued communications.

## Why does programming culture undervalue communications?

It's often hard for programmers to appreciate the value of improving their communication skills. A few things can get in the way:

• People downplay evidence that suggests they're bad at something.
• It's easier to improve concrete skills than abstract ones.
• Most developers I know would file communication improvements under "not a barrel of monkeys" - that is, not fun.

Finally, and perhaps most significantly, developers tend not to talk about communication skills outside of HR-related events like interviews and performance reviews. What percentage of the programming articles you've read have focused on this topic?

In a sense, it's tradition to de-emphasize good communication skills.

## Even coders get the feels

Another unspoken tradition among programmers is to pretend we don't have ugly emotions like anger, jealousy, or protectiveness of our own code.

Imagine saying "I don't want to change this code because I feel attached to it." It feels awkward. It's weird to speak openly of the ugly emotions.

We have an ideal software engineer in mind, one who only experiences good emotions like excitement, curiosity, or a desire to be helpful. For some, a scornful, critical streak is even admired or emulated - but I'll talk about criticism later. We hide the ways we're different from this ideal engineer, and feelings get delicate when we suggest a peer has an ugly emotion.

Yet everyone regularly experiences a wide range of emotion. It's natural. Maturity comes not from emotional changes, but from changes in the way we anticipate or respond to our feelings. When emotions are playing a large role in what we want, the most mature reaction is not to ignore them but to be aware of them, and respond with complete self-honesty.

It's extraordinarily useful to be aware of your own feelings. This is surprisingly difficult. Our most natural behavior while communicating is to respond directly to our feelings, rather than analyzing them before expressing anything. We don't usually think, "I'm going to say this next bit because I'm angry." It's something we can get better at with effort.

Even more difficult is to be respectful of others' emotions. When trying to guess how they feel, we have strictly less information than they do. Misunderstanding another person's feelings can be as bad as ignoring them - and even if we get it right, many people perceive the topic of their emotions as inappropriate.

It would be nice if coder culture accepted openly discussing emotions. The rationale of this taboo seems to be that the logical optimization of code has nothing to do with how you feel - but this denies the nature of programming as an ongoing collaborative process. Happy, positively motivated developers are more productive than disgruntled ones. Like bugs in code, feelings perceived as negative are inevitable. Instead of a protocol of denial, it's ultimately more productive for a culture to explicitly address such difficulties.

However, cultures do not change overnight. For now, coders who want to address a peer's feelings tread on dangerous ground. It is far easier if the peer initiates the topic. If you know them well, you could try asking them how they feel, framing the question so it's clear you'd like to understand their perspective without judgment. People feel more aligned with you when they see that you understand how they feel and why they feel that way. This is a great step toward productively addressing negative feelings.

## Code as communication

Other forms of communication are strongly detached from human emotion - but still critical. Specifically, let's consider communication in the form of code readability and maintainability.

When I aim for readable code, I think in two steps:

1. Make the code as readable as possible, pretending there are no comments.
2. Write clear, minimal comments that describe high-level concepts or possible confusion points.

Good function and variable names are paramount. To get a sense of the standards you might apply, I recommend Apple's naming convention guide. For example, function names should be verbs by default. Class names should be nouns. Avoid words with low specificity, such as "object."

Some languages, such as Java, and occasionally Objective-C, tend to enable ridiculously long names. Java has the infamous

InternalFrameInternalFrameTitlePaneInternalFrameTitlePaneMaximizeButtonWindowNotFocusedState

while Objective-C has

willAnimateSecondHalfOfRotationFromInterfaceOrientation

among others. I think this is a mistake. Name clarity and length must be balanced. I would prioritize shorter names if I started typing something over 25 characters.

Function and file length are also strongly correlated with code readability. Files over 1000 lines, as a rough rule of thumb, could probably be split up. Functions over, say, 40 lines or so are candidates for refactoring.

It's also nice to have consistency in style elements such as indentation, line lengths, or capitalization. Google's C++ style guide is a great example of what can be standardized in a codebase.

After trying to make comments unnecessary, the next step is to fill in the inevitable missing details with human-friendly remarks.

Comments add length to your files and, written poorly, add maintenance cost to changing code. The goal is to comment in a way that makes code changes easier. A bad comment duplicates the working details of the code - this is a problem since minor code changes require the comment to be updated. It becomes easy for the comment and code to reflect different things.

Briefly describing the high-level functionality of a block of code is great when the name does not tell the whole story. Some example high-level descriptions are "encapsulates our custom network protocol" or "module that compresses natural language strings." These descriptions are likely to remain relevant after many changes to the underlying code.

Some code is hard to make self-explanatory. A calculated size may require a non-obvious +1 at the end to avoid an off-by-one error; or some conceptually-easy operation could be performed in a complex manner as a bug workaround, or for performance reasons. It's easy to imagine a future coder looking at these cases and thinking, "why is this like this?" Such cases are excellent for short explanatory comments.

Comments exist only to improve the evolution of the code toward the ultimate end-user experience. They are only as good as they are readable and useful with minimal maintenance cost. Keeping this in mind, abbreviations, bad grammar, and sentence fragments can easily be detrimental to a good comment. I'm happiest working in a code base commented with clear, concise, and complete sentences.

Since beautiful code is a topic dear to my heart, I want to mention a virtually abandoned idea called literate programming. The idea is to write a single file that compiles out to separate pieces: one for human-only consumption as documentation, and the other for machine-only use as the running code. This hasn't caught on - probably because it adds a large chunk of engineering time to meticulously document the entire code base, and this doesn't make sense for most code bases. If this idea intrigues you, I recommend Donald Knuth's book Literate Programming. Despite its title, the book doesn't focus completely on literate programming, but rather talks about high-level questions of what makes code readable and maintainable. It instills a heightened respect for how much skill one can gain in this area.

There's a tricky paradox in the traditional protocols of programmer explanations. If you under-explain an idea, the listener will need to put effort into filling in the missing details. If you over-explain, you're at grave risk of insulting the listener.

The paradox is that there's no realistic way to provide an explanation that at once provides all the information the listener needs, yet also avoids redundancy with what the listener already knows.

It would be nice if we could be more tolerant of slight over-explanations. Right now, we expect explanations to focus on information we don't know, so that any extra information hints to the listener that their knowledge is undervalued. Deeper than this idea is the sense that it's embarrassing not to know things. It's as if the ideal engineer is somehow done with learning forever.

Despite this paradox, there are steps you can take to minimize the damage of under- and over-explanations.

The first step is simply being considerate of the listener's perspective. If you're worried you might be over-explaining, ask a judgment-free question. Instead of asking, "You know jQuery, right?" you can ask, "Do you mind if I show you some things in jQuery?" The second form of the question accomplishes these goals when compared to the first:

• If they don't know jQuery, the first question implies an unmet expectation; the second implies it's acceptable to not know it.
• If they do know jQuery, the first question feels like a dodged bullet, while the second feels like an opportunity to politely impress with their knowledge.

As a listener, you can avoid being offended at over-explanations, and be brave in asking ignorance-revealing questions. In effect, this raises your "feels condescending" threshold, so that it takes more over-explaining to bug you. This shift fosters productivity by reducing the amount of information left out of a discussion.

## The temptation to be mean

Ignorance-revealing questions are hard to ask because programmers, particularly in online discussions, can be harsh.

It's tempting for me to think, "People who write mean comments are jerks, and that's not me." But it's not constructive for me to dismiss a problem just because I don't cause it - and, to be deeply honest, such a thought is likely to be rationalized denial. I probably fall prey to the same pitfalls of human nature that motivate bad online behavior, even if it is to a lesser degree than others.

In other words, you can probably benefit from understanding mean behavior even if you're already nice.

### The perks of being mean

Saying something critical makes the speaker sound smart. They sound as if they know more about the subject than whatever is being criticized.

But they often don't.

Creating is far more difficult than criticizing. It's another world. It's easy to forget this when we see a criticism that sounds insightful. The critic reaps an easy reward in the form of a reputation boost. Kids pick up on this phenomenon early, and some of them turn into bullies. There's not as much difference between kids and adults as we like to think.

What are the unhealthy motivations of a harsh critic?

I think many critics subconsciously express themselves for the expected reputation boost. In other words, they criticize for the sake of how their audience perceives them. The nature of this "audience" is abstract. The audience could be "the internet" if they're commenting publicly, or the audience could be theoretical, if the critic is criticizing in notes to themselves. The audience could even be the subject of the criticism - human nature does funny things, and receiving criticism doesn't exclude us from bestowing respect on the source of our negative feedback.

I suspect there are other, less direct, motivations for overly-harsh criticism. Since programming culture is steeped in this behavior, community members could simply be emulating others. In society at large, there are some archetypical personalities that associate brilliance with emotional vitriol. For example, consider Hugh Laurie's character House or Benedict Cumberbatch's Sherlock. I can imagine programmers looking up to their brilliance and, perhaps subconsciously, viewing such a personality as one worth pursuing.

### Being nicer with negative feedback

Some negative feedback is ultimately a good thing. Sometimes things can be improved, or better decisions can be made by being aware of useful criticisms. Negative feedback is good when it's actually helpful and delivered well.

Start by understanding your own motivations and how your feedback is likely to be received. Imagine receiving the feedback yourself, either as the subject or as a third party. What parts of your message are most likely to help the listeners, and which can be left out?

A negative message is easier to receive when it's less personal and less general. For example, let's say that Bort introduced a security flaw into an SSL library. It would be bad to call it "Bort's bug," and talk about how "Bort always skips code reviews," as these are personal or generalize a single incident into an ongoing trait. Instead, focus on the fix, the consequences, and what can change to avoid similar mistakes in the future. It's good if Bort is privately made aware of the consequences of his actions and how to avoid future mistakes - but there's not a clear benefit to associating negative emotions with him beyond this.

A negative book review was recently posted to Hacker News. Some commenters felt the review's tone was overly harsh, and a small debate on the tone of feedback ensued. I'd like to address the perspectives of three particular comments that defend the practice of harsh criticism (at least for this review):

Sorry, but I have a right to an emotional reaction to your content and a right to describe it, especially if the reaction is grounded in objective technical reality.

This perspective shifts the emphasis from what is the most helpful behavior to a question of what behavior is allowed. It's true that harsh criticism is allowed. Any author does have a right to express it, and may consider it an honest expression of their reaction. The perspective of this post, however, is to improve ourselves, reaching beyond what we're allowed to do into the realm of how we can be most productive.

Another comment from the discussion thread:

In the case of this review, I would say [the reviewer]'s tone is appropriate, because security is Serious Business.

The commenter is proposing that a serious subject justifies a harsh tone. The implication is that negative consequences may occur if the review were less harsh. Specifically, readers of the review may not realize how serious the topic is, and make security mistakes because the review was not harsh enough in tone. That reasoning sounds silly when spelled out. Expressing the gravity of a topic is independent of a condescending tone.

Let's finish by looking at one more comment that downplays the importance of being nice:

Could [the review] have been worded more kindly? Of course. Do I care? Not at all... It was informative and useful. The tone was just fine.

This last perspective is one I have seen many times - the idea that being right justifies being callous; or that expressing information is the only goal of communication, without regard to emotional impact.

But the goal of communication is to move forward - to make improvements, decisions, and get things done. These are positive actions taken by humans, and humans are fundamentally driven by emotion. To deny this fact is to deny our humanity.

The Hacker News quotes are from this discussion thread.

# cstructs and msgbox

Lately I've been working on a couple pure C libraries.

They're both up on github.

First is cstructs, which provides an array, a list and a map. The array is a contiguous array in memory, while the list is a singly-linked list. These data structures are insanely useful for building other things; they somewhat replace things like Python's list and dict structures.

The other project is msgbox, which is a wrapper around C's standard networking commands. I took a look at 0MQ and know about libuv as alternatives, but I chose to build my own. 0MQ doesn't do UDP and is more about reliability than fault-tolerance. UDP support is critical for the game server I'm building. libuv looks cool, but there's a lot to it, and I can be maximally efficient with only a little.

Well, there's more to that story. When you run a server with thousands of simultaneous TCP connections, the bottleneck in msgbox will be the use of poll in the run loop. The call is nonblocking when you set timeout=0, but poll is known to be slow with huge numbers of open connections.

However, I like working with very small interfaces. msgbox.h declares 13 functions for the entire interface; it's 100 lines long (for now). libuv's main header, uv.h, defines 205 functions over 2256 lines.

There's great value in using small libraries - libraries that you understand completely. In fact, I think this is one of the most underrated considerations in choosing libraries. Clarity through design, focus, and - perhaps indirectly - size.

Getting back to poll having potential speed issues, I plan to eventually wrap epoll within the same interface. If that works out, then msgbox users can have their small-library cake and eat their fast-library icing, too.

I was going to mention why I'm putting effort into using pure C when these are somewhat solved problems in other languages. That's a story for another post.

# The meaning of games

Do you want to play games like comics, or like novels?

A few weeks ago I started building a massive-scale game that I've been planning for a long time. It's an ambitious project that I can't wait to release - but I'm unsure what non-game developers would think of my goals. There seems to be a stigma around people with a life-long love of video games. Perhaps people imagine a maturity-stunted, ambition-free slacker playing mind-numbingly predictable games in most of their free time. I'm afraid the video game industry may have earned itself a poor reputation along these lines, and I want to offer another perspective on what good games can mean.

## A way of expressing ideas

Let's look at games as a way of expressing an idea; I want games to be a category alongside movies, books, or songs.

Different categories have earned their own set of expectations. If I know someone loves to read books, I think of them as intelligent and willing to spend time alone. If I know someone loves art films, I think they have an artistic sense that they cultivate; if they love movies featuring explosions, I think of them as enjoying easy entertainment.

Comic books are an interesting case. Their stories are often in the superhero genre. Something horrible happens to some poor sap. They realize they have a way to change the world. And then, against great odds, they do change the world on a massive scale.

The superhero genre is so tightly tied to comic books that they're almost synonymous. This is interesting because there's nothing about comic books that requires their stories to be about superheroes.

Let's move over to games. This is the newest category of expression we're looking at. It, too, tends to be tied to a small number of genres such as first-person shooters or role-playing games. However, I think games have more in common with movies than comic books. Many well-known games are built or published on large budgets, with predictable style and narrative elements. Yet there's an energetic, indie-driven subcategory where originality and creativity rule. I think of myself as part of this burgeoning movement.

But there's more to the story.

## Broken dreams

The music industry is full of talented, hopeful souls - and broken dreams. Making money with games has a few things in common with making music. Great music is made for its own sake, out of love for what is being made. Music and games are both popular and easy to daydream about. Games form an unusual niche in software development - a niche with such great supply of passionate developers that it can afford to treat them poorly.

Game studios generally don't care about sustainability because so many game creations are one-hit wonders. Turnover is high and fresh blood is cheap. As with recording studios, there's no balance of power between the money and the end-workers.

When I think of the negative perceptions of game-lovers, I think about the tired genres of comics, the originality-sapped movie industry, or the overworked, poorly-managed world of big-publisher game studios.

What's more important than these stereotypes, though, is the fact that games can be meaningful. Graphic novels can feature rich, dynamic characters; movies can burst with imagination; and music, even sung by starving string-pluckers, can reach the hidden corners of your heart.

All of this is a prelude to books - a form of expression that shines through its imperfections.

## Books

Writing a book is neither easy nor likely to be profitable. If it's any good, it takes a singularity of vision together with creativity, passion, and sustained discipline.

Books are not always good. Quite the opposite, in fact. If you chose a book at random at your local library, you'd probably be bored or disappointed. Like any expression of ideas, few are truly great.

Despite these shortcomings, I love the analogy that can be drawn between games and books. In fact, to me, this connection is what captures the meaning of games.

Books, more than movies, have a culture of defying genres. Unlike movies, I don't think there is a cohort of lexical executive producers pulling strings to pay for committee-designed, franchise-fueled book factories. Rather, the world of novel-writing has plenty of room and appreciation for creators who stray far from the beaten path.

It is the consequence of this culture of freedom that matters. Books speak to people. Reading a good book isn't time wasted. A good book brings to life a shared human experience. It brings to our life a story that finds us, and in some small way, that moves forward our perception of the world. What is life beyond a stream of stories given meaning by how they are shared with those around us?

In this way, I see a largely untapped potential for games as expressions of ideas. Like books, I believe the game industry itself can foster a culture of innovation. The growth of indie games could be the beginning of such a change.

## A new hope

The biggest difference between today's gaming culture and the future I hope to see is in the perception of what games mean.

In today's culture, when a designer sits down with the twinkle of a new game in her eye, she asks herself questions like:

• What's the market for my game?
• How can I make the short-term gameplay interesting?
• How can I make the long-term goal worthwhile?

Those are important questions. But they don't have to come first. These questions don't lend themselves to a perspective of exuberant freedom. These aren't the questions writers of great books start with.

I propose that makers of games can uplift their expectations and the very art of game-making itself. Perhaps we can think of ourselves as artists, conveyors of experiences profound. Perhaps we can build games by asking ourselves first, and last - as I imagine authors of great books did, consciously or not -

What can I give to the world?

My game library has been growing lately:

I plan to read some of each of these - anything that will help me build Apanga. I'm starting to realize why it's a bad idea to write your own game engine. There are an insane number of nooks and crannies to get your coder-brain lost in.

Honestly, I'm still on the redbook.

# Learning vs copying: when is it ok to reuse an idea?

The question raised by Flappy Bird

Indie developer Dong Nguyen recently had something of a live-tweeted crash-and-burn following harsh criticism over his game Flappy Bird. Folks claimed he copied gameplay and art from other games. The impression these articles gave was that Dong had done something wrong.

I personally find Flappy Bird addictively challenging. It gets some things right: It feels like it should be easy. It feels like, once you get it, you'd be great at it. The controls and visuals are dead simple yet consistent and catchy. And it's ridiculously easy to start a new game when you die.

I used to believe that popular games always deserved to be so. I changed my mind when I saw the popularity of compulsion-based games that don't add much fun to players' lives. But I still believe that popularity is more than luck - and that there's more to Flappy Bird than randomly copied ideas.

Which brings us to the question:

When is it ok to build a game that clearly takes elements from others?

My gut feeling is that Dong did nothing wrong in creating Flappy Bird. The art is Mario-inspired, but he isn't selling it as Mario. The game mechanic is simple and old. Looking at all games, every major genre comes with its own built-in mechanic. Did Halo only copy the game mechanics of Quake? Is every platformer derivative of Mario? I don't think anyone would make that claim.

Let's jump into the perspective of a student. From here, learning seems not only acceptable, but virtuous. It's silly to throw away the lessons of past successes. How is learning different from copying?

In the case of academic research, one major distinction is that researchers want you to copy their efforts. They don't think of it as copying. To them, their purpose is to add to the cumulative sum of human knowledge. Each time you copy something useful that a researcher has published, you effectively confirm the work's value. It's win-win-win - I win, the researcher wins, and the game-player wins (since obviously what I'm doing is making a game; what else is research for?).

Perhaps the difference is in the intentions of the original creator. It seems unlikely that Nintendo wanted people to simulate their art style in future games that compete for a player's attention. Embedded in the spirit of intellectual property law is the principle that creators have the right to exclusively profit from their ideas - at least for a while.

Now let's jump to the other end of the spectrum. It would clearly be wrong if I copied the source code of Grand Theft Auto 8 - this post takes place in the future by the way - and sold it as GaarlicBread's Awesome Stolen Carzz 12. That's a brilliant game title by the way; I call dibs on it.

This goes two steps beyond illegal. Step one - it's clearly morally wrong; step two - it's so clearly morally wrong, I think players would disapprove en masse. Harsh words would commence and devious deeds would be done. Sandwich makers would "accidentally" put mustard in my sandwich when I clearly said no mustard.

Looking at these thought experiments - from learning from voluntary teachers at one end to blatant digital product theft at the other - it emerges that reusing an existing idea is acceptable when either:

1. The original creator of the idea wants it to be reused; or
2. The idea does not merit ownership, so that no one loses anything when you reuse it.

Case 1 is easy to understand. Case 2 is the interesting one. This is the case that confronts Flappy Bird. We can further boil down the essence of case 2 to this critical question:

Which ideas can you own?

Copyright law cares about this question - but my question is not one of legality. Rather, I want to investigate my own moral compass as to when an idea can be owned. The law attempts to be aligned with morality, but morality itself is the more fundamental concept.

I posit that an idea is owned to the extent that its value is based on the work of the creator.

An example: In 1872, Claude Monet painted Impression, Sunrise; the French title is Impression, soleil levant. This work was publicly lambasted, and the entire style of art similar to it - impressionism - was derisively named after this one painting. If a skilled artist meticulously copied Monet's original, the value of this new painting would be derived primarily from Monet's work.

Monet's Impression, soleil levant; like Flappy Bird, it met a critical reception.

Camille Pissarro painted in a style that shared many elements with Monet's work. However, Pissarro's images were original, and his style had independence. People admiring a Pissarro are unlikely to think to themselves, "self, this is only good because Monet is good."

In other words, we're asking: Where does the value of the new work come from? If it's from borrowed ideas, then it feels as if the ownership of those ideas extends to the new work.

Flappy Bird is fun because it's simple, looks easy, but is actually difficult. The art gives it a bit of personality. I don't think it's fun because some other tap-to-go-up game is fun. I don't think people play because it has Mario-esque elements.

My proposed answer does not provide a black-and-white dichotomy of works that are copies versus those that borrow, but are essentially new. We're talking about what value a work has, and where the value comes from - these are not clear-cut concepts.

If you do walk away from this post with one clear conclusion, I hope it's this: Always be creating. Take everything around you, everything good, and add yourself to that. What you end up with will inevitably build on the work of others, but if you succeed in adding something valuable, it will be yours.

# Don't mind my awesome over here

Shit just got real.

Today I started getting instanced rendering working. This means I can easily draw many thousands of replicants of a single model on screen with lethal efficiency.

It all started when I replicated one color cube thusly:

Those boxes were like tribbles. All up in your face and your Jefferies tubes like what? More tribbles? Yes, more tribbles.

Nextly, I said to myself, "self, let's get some Perlin noise up in this joint." I'm basically homies with Ken Perlin because we both went to NYU. And by "went to," I mean either "is a professor with an academy award at, or, basically equivalently, was a grad student there."

At this point, I know exactly what you are saying.

You are saying, "WHAT? DID YOU SAY ACADEMY AWARD."

Yes. That is correct. Ken Perlin won a maternally-fornicating academy award literally (and I mean for serious literally) for writing an algorithm that generates random noise.

Back to my awesome.

So I threw together my crappy Perlin implementation and put a gradient height map on it to make things look like, you know, mountains and water, and bam. We basically have a procedural game world already.
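For the record, the noise half of that can be sketched in a few lines. This is plain value noise (a hash per lattice point, smoothly interpolated) rather than Ken Perlin's gradient noise, and the hash constants are arbitrary - but it's enough to drive a heightmap:

```c
#include <assert.h>

// Hash two lattice coordinates to a pseudo-random float in [0, 1).
// The multiplier constants are arbitrary large odd numbers.
static float hash2(int x, int y) {
  unsigned h = (unsigned)x * 374761393u + (unsigned)y * 668265263u;
  h = (h ^ (h >> 13)) * 1274126177u;
  return (h & 0xffffff) / (float)0x1000000;
}

// Smoothstep easing so the interpolation has no visible grid creases.
static float ease(float t) { return t * t * (3.0f - 2.0f * t); }

// Value noise for x, y >= 0: bilinear blend of the four surrounding
// lattice hashes. Nearby inputs give nearby heights.
float value_noise(float x, float y) {
  int xi = (int)x, yi = (int)y;
  float tx = ease(x - xi), ty = ease(y - yi);
  float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
  float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
  float top = a + (b - a) * tx;
  float bot = c + (d - c) * tx;
  return top + (bot - top) * ty;  // stays in [0, 1)
}
```

Map low values to water colors and high values to mountain colors and you get the gradient-heightmap look described above.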

Don't stare too long as your eyeballs will grow mini-eyeballs of happiness and your eyeballs' new mini-eyeballs will start to cry with fractal-like joy.

That's one day's work, kids*.

*Where by one day's work, I am referring to the replicated cubes, Perlin noise, and heightmap-based rendering with color gradients.

# The so-called game I made last weekend

The Arbitrary Game Jam number 7 (#TAGJam7) was this past weekend. It's called arbitrary because two of the three themes are generated randomly. Not surprisingly, the themes seemed rather arbitrary and weird, like going to a hookah bar or that time all my socks disappeared and there was just a note that said "kibbles" in my empty sock drawer.

The weird themes of #TAGJam7 were:

• nitrogenize
• derogative
• squash

After last week's stunningly victorious color cube, I'm afraid I have bad news. My game for TAGJam sucks. Quite badly, in fact.

Why does it suck so badly? you ask. Mainly because I was learning a lot of new stuff, such as pixi.js, and working out for myself how to write collision detection (which was fun to figure out).
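For the curious, the heart of rolling your own 2D collision detection is usually just an axis-aligned bounding box overlap test. This is an illustrative sketch, not the actual jam code (which was JavaScript):

```c
#include <assert.h>

// An axis-aligned rectangle: top-left corner plus width and height.
typedef struct { float x, y, w, h; } Rect;

// Two rectangles overlap exactly when they overlap on both the x axis
// and the y axis; checking each axis separately is the whole trick.
int rects_collide(Rect a, Rect b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}
```

Everything fancier (circles, swept collisions, spatial hashing) tends to build on this same per-axis idea.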

So in the end there's basically one screen where the hero (Nitro) levels up by touching as many fish as he can. I had a good story all set up, but the new storyline is that Nitro has a fish-touching fetish and he can't get enough fish-touching. So, you know, go help him touch some fish. If you're into that kind of thing.

Here's the so-called game.

Oh also here's a screenshot.

Peace,

-- GB

# TAGjam7

This weekend I plan to build a (probably stupid and incomplete) game for The Arbitrary Game Jam, 7th edition.

In case you are not already intimately familiar with TAGjam, I will tell you why it is called what it is called. It's because the themes for the jam are chosen completely randomly. This knowledge will partially explain this jam's decidedly weird themes:

• nitrogenize
• derogative
• squash

Since it is logically impossible to make any sense of these themes, I have decided to continue the streak of illogical things and to make an html5 game.

"Why, GaarlicBread, WHY????" you ask. "Why did you decide to make an html5 game, I mean?"

Well, reader, it's because I felt like it.

Actually, just now while writing this post, I thought carefully about my line of reasoning to see if there was a good one. There really isn't.

Anyway, that's what I'm going to do.

Latez, - GB, III

# OpenGL and I have just become cellmates in a Turkish prison.

And you know what that means.

Either OpenGL will be my bitch, or I will be OpenGL's bitch.

So far it's undecided.

But I have some awesome progress to report. Check out this glorious cube of pure visual ecstasy:

Try not to drool on your keyboard.

That is all.

# Apanga!

One thing that is useful for games to have is a name.

I'm not great at choosing names, so I wrote a script to help. It uses an idea from probability theory called Markov chains. The name sounds complicated, but really it's not that hard.

The script uses groups of 3 letters in a row, called 3-grams, to iteratively build up a random word that sounds like it might be a real word. The input for the script is a text file with one word per line, from which it builds a directed graph. Each node is a 3-gram from the input file, and an edge connects two 3-grams only if they appear overlapping in some input word.

For example, the word hello will produce a path of 3-grams like this:

hel -> ell -> llo

The script pays attention to multiplicities so that each edge is weighted by how often it's been seen.

Finally, the script takes a bunch of random walks along these edges. The result of each random walk is a possibly-new word that looks kinda like it could be a real word.
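Here's a hedged sketch of that random walk in C. The actual script reads a word list from a file; this version hard-codes a tiny training list for illustration. Picking uniformly among stored occurrences automatically weights each edge by how often it was seen, matching the multiplicity idea above:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAX_WORD 64

// Tiny stand-in for the input file's word list.
static const char *training[] = { "hello", "helium", "mellow", "yellow" };
static const int n_training = 4;

// Random-walk one word into out; returns its length. rand() drives it.
int markov_word(char *out) {
  // Seed with the first 3-gram of a random training word.
  const char *seed = training[rand() % n_training];
  memcpy(out, seed, 3);
  int len = 3;
  while (len < MAX_WORD - 1) {
    // Collect every character observed after the current trailing 3-gram.
    // Duplicates are kept, so edges are weighted by occurrence count.
    char next[256];
    int n_next = 0, can_end = 0;
    for (int i = 0; i < n_training; i++) {
      const char *w = training[i];
      int wlen = (int)strlen(w);
      for (int j = 0; j + 3 <= wlen; j++) {
        if (memcmp(w + j, out + len - 3, 3) != 0) continue;
        if (j + 3 == wlen) can_end++;         // this 3-gram ends a word
        else next[n_next++] = w[j + 3];
      }
    }
    if (n_next == 0) break;  // dead end: finish the word here
    // Maybe stop, with probability proportional to observed word endings.
    if (can_end && rand() % (n_next + can_end) < can_end) break;
    out[len++] = next[rand() % n_next];
  }
  out[len] = '\0';
  return len;
}
```

With this training list you get outputs like "hellow" or "melium" - every 3-gram in an output word is guaranteed to exist somewhere in the input.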

I found a list of country names here. Here are some sample words produced from that list:

• luxembodia
• sahamas

As you can see, they sound pretty similar to the country names they came from. Not very original!

I had some fun using Welsh place names as inputs:

• llanfawr
• penymynannewydcoed

Haha, Welsh, you cray cray.

So I needed something between obvious-country-name knock-off and cray cray.

My favorite results came from combining about 25,000 place names with about 10 times that many English words. These lists were pretty easy to get online. From that I started getting names I liked, and Apanga is the one I'm going with.

So! We have a name for our game!

Apanga.

It's also the name of the world.

And then I got apanga.net. The .com was taken. I know, I know. Look, Minecraft has .net, ok? And they're doing ok.

# I did something weird today

I got a new laptop. It is a windows laptop.

I feel quite uncomfortable about it.

But I suppose it was necessary, as people who play games seem to always have windows machines. I also am uncomfortable with this apparent correlation between my target market and this operating system. But I choose to give gameplayers the benefit of the doubt this time. It's probably some kind of vicious cycle thing.

You will be pleased to know that my laptop is made up of exploding asteroids with crystals growing on them. I was surprised to learn this when my screensaver showed it to me. But they have it on video so it must be true; and in retrospect I can no longer imagine any other type of asteroids being the building blocks of my new laptop.

I will now briefly explain why I hate Microsoft. Mainly it is because they are jerks. More specifically, they are jerks who do untoward things to my industry. Yet even more specifically, I would have to list some boring details such as the way they sell substandard hardware (think xbox failures) and substandard software and churn out nothing but proprietary frameworks and vendor-locked standards designed to carefully test the tolerance limits of a public bound by a near-monopolistically dominated market.

When I think of Microsoft, I think of a cute innocent little kitten with a playful attitude and big eyes. Then Microsoft finds this kitten and places it into a small locked room with nothing but a pen and a 300-page end-user license agreement. The kitten can neither drink nor eat nor see the light of day until it signs the agreement. The kitten does not understand the agreement, but it can sense that something is amiss. The hours pass silently, hungrily, without mercy. Finally the kitten, near starvation, picks up the pen, holds it over the agreement - and hesitates. A small metal panel opens in the ceiling, and the Microsoft eye peers in, purring, "ssssh... don't think about it. Just let it happen."

# Learning OpenGL

I hated learning history, and I loved learning math. History felt like a collection of arbitrary, probably biased, boringly-presented and disconnected facts. Math felt like something far more pure and coherent. Math is a beautiful story that feels more discovered than invented.

Ironically, many people hate math the way I hate history.

The point I'm meandering toward is that OpenGL appears to me to be an unmitigated clusterfuck of a library. And I do not say that lightly. I once braved the untamed wilds of windows driver development. I've debugged multithreaded memory errors. I once even had to - gasp - read someone else's code.

So let's just move on and accept that its designers are masochists.

With that in mind, I have a few suggestions for the designers of OpenGL to help further maximize the pain induced on their users.

## 1. Longer documentation

The current manual is barely over 900 pages. Seriously? I can actually read this thing in a month if I have nothing else to do. You can do better.

## 2. Make it harder to compile the first example

I only had to cross-reference two other technical sources before I could get the first learning example to compile. I respect that you omitted the source of a crucial function - really, that was a brilliant move; but it only took several hours of work to recreate the missing function.

## 3. Add more custom languages

I must give you credit for creating a new programming language that lives entirely within a library - especially since we must understand its custom compiling, linking, and symbol-lookup mechanics in detail.

However, I don't know why you stopped at one. Why must all shaders use the same language, for example? C'mon, guys. Think.

## 4. More confusing names

When I think of a function that maps all vertices to new vertices, the first noun that comes to mind is shader. It's fairly obvious to use the word fragments for pixels, too. I could go on, but you get my point. All your names are obvious.

## 5. Harder-to-use version changes

I was disappointed when you stopped using different version numbers for different parts of OpenGL that had to work together. I guess you get some credit for writing manuals that only cover one version at a time without addressing forward or backward compatibility. That's a good move. And for supporting widespread use of custom extensions that may or may not become standardized over time.

At the end of the day, I guess they do a decent job of code torture. I hope they learn from my suggestions.