Tuesday, January 29, 2013

Time Is Not Free

This is a lesson I learned from EVE. There is also a relevant XKCD here. (http://xkcd.com/951/)  Basically, the idea is that my time - and your time! - should not be considered free. And as a result, anything you do with your time has a real, quantifiable cost associated with it, even if you’re not paying out cash during that time period.

The lesson in EVE comes from pretty much the most boring occupation in the game, mining. A fair number of players in EVE spend a significant amount of time mining. It’s boring, but that’s fine. Once you’ve mined your minerals, you can turn around and sell them on the market. This activity, of course, presents no problems. You’ve spent your time hoovering up space rocks, and now that time is being converted into the in-game currency of ISK. Great.


The problem occurs when a player begins manufacturing. They go out and mine their own minerals to build in-game equipment, such as modules and ships. Now we’ve got the potential for an economic problem and a logic failure. The less savvy player who does this may decide that it’s cheaper to build things this way, because after all, they got the minerals for free. They didn’t have to buy them off the market! They converted their own work into minerals, and now they’re free to build stuff for cheap!

Except it’s not for cheap. It’s at the cost of time. There is an easy fallacy to fall into. After all, how you spend your time in game is your business. And in this process I’ve outlined, no ISK is generated, since the player skipped the market step. They mine the minerals, they process the minerals, they use them to build the module. Where’s the problem?

The problem is that the cost is in time, and it has an ISK number associated with it. Other, savvy players of course realise this, and generally, they’re the ones setting market prices. So even if you think your module or whatever was built for ‘free’, what’s happening is you’re simply eating a sort of opportunity cost.

What is opportunity cost? Well, in this case, it’s the expense of mining versus the cost of just buying the module. I’m going to make up some numbers here to help illustrate the point. Let’s say it takes you an hour to hoover up 100 units of space rock. And let’s say space rock is going for 1 ISK per unit. So, at the end of an hour of mining, you’ve essentially got 100 ISK in your cargo hold. But you don’t want to sell it, because you need to build your module, which just so happens to take 100 units of space rock.


Here’s the tricky part. Let’s say this module is available for sale on the market, and it’s going for only 90 ISK.

So which is better: to buy the module on the market for 90 ISK, or make it ‘for free’? Well, if you sell your 100 units of space rock, you get 100 ISK. Then spend 90 ISK on your module, and ta-da, you’ve managed to make 10 ISK.
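The arithmetic above can be boiled down to a one-liner. Here’s a tiny C++ sketch (the function name and the numbers are just my made-up example values from above, not anything from the game):

```cpp
// Profit from selling your ore on the market and then buying the
// module outright, instead of building it yourself 'for free'.
int sellAndBuyProfit( int oreUnits, int orePricePerUnit, int modulePrice ) {
    return oreUnits * orePricePerUnit - modulePrice;
}
```

With the example numbers, `sellAndBuyProfit( 100, 1, 90 )` comes out to 10 ISK - the profit the ‘free’ builder is silently giving up.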

Your time was not free, and the module you were going to build was not going to be free. Thanks to market fluctuations and manufacturing times and many, many market factors, it’s important to stop and think about the real costs of the activities you undertake. ISK makes a useful middle step in EVE, even if you’re not earning it or spending it directly. How much ISK per hour could this activity net me? How much would I make or lose doing something else? This can get as complicated as you like, because EVE is a complicated game, but I hope my simple illustration will suffice.

This applies to the real world as well. My time, and your time, it’s not free. Right now, I’m eating an opportunity cost. I could easily be making somewhere upwards of 60,000 a year doing industrial work. Instead, I’m choosing to go to college, and fitting the broke college student stereotype quite well. The opportunity cost for me is about two years’ worth of industrial level salary. In exchange, I’m getting an education. Hopefully when I’m done, I’ll be able to get a higher paying job, but honestly, that’s not the point. I’ll certainly get a job I’ll find more interesting. But I -am- paying attention to the costs of my decision, both eyes open, and fully informed.

You can take all of this however you wish. For me, this doesn’t mean I spend every waking moment of my life trying to wrench the maximum value out of it. How dreary would that be! I would become an exhausted, wretched man. But it is worth paying attention to how I am spending my time. So that way, as the XKCD comic points out, I’m not working for drastically less than minimum wage in search of better gas prices.

Tuesday, January 22, 2013

Mental State Saving

I was thinking about why I could go back and easily get back into a saved Bejeweled game, but not other, more complicated games, such as Arkham City or Deus Ex. It’s certainly not a question of favoritism. If you asked me which game I liked more, the answer would be Deus Ex, by a considerable margin. But what is it, then? I find it more difficult to load up a save of Deus Ex than it is to pick up my phone and fire up Bejeweled. There are many possible answers, but I think one of the more interesting ones to explore is rooted in complexity and mental state.

If we were to ask which game is more complicated, certainly, games like Deus Ex win by a landslide. They have conversation arcs, location maps, branching decision trees and story paths, inventory systems, and so on. Even a very simple single player FPS such as Serious Sam still tracks which weapons you have, which map you’re on, what you have and have not picked up, and which secrets you have and have not found yet. Bejeweled, on the other hand, has a few simple rules, and while they may change from game type to game type, overall the playing field is very homogeneous.

The rules differ, too. For Bejeweled, the entire ruleset can be - and in fact must be - kept in the player’s head if they are to be successful. There are only a few ways to move gems on the field, and only a few ways to make successful combinations. Contrast this with an FPS, where the minimum information you need is where you are, where the enemy is, what weapons you have, and how much ammo is left.

I think the difference in picking up the game again, particularly long after the last time I’ve played it, comes down to the idea of mental state. Can the player hold the entire state of the game in their head, and do they need to? For simpler games, the answer to the second question is no. I can pick up my saved Bejeweled game at any point and immediately get into it. The state is revealed in full, immediately, when I load the playing field, and the rules are simple enough that even if I’ve temporarily forgotten them some experimentation will quickly reveal them to me again. Contrast this with many other game types, where even the loading screen tips and reminders may not be enough. What objectives have I accomplished? Which side objectives do I need to pick up or already have? What’s in my inventory? Who did I talk to last, and what is my character’s relationship state?

Steps can be taken to mitigate the problem of restoring the player’s mental state, like the aforementioned loading tips and reminders, and of course many games implement something like a log book to help you keep track of what you’ve done in this save file. However, these are necessarily incomplete; they do not contain all of the details, and they certainly cannot be expected to keep track of personal goals a player may have set for themselves ( like trying to get a certain weapon early because you knew where it might be stored ).

In the end, the simple PopCap-style games can be said to be devoid of player state. The player needs to bring nothing to the game besides a desire to play, and no matter where they were last or what they were doing, they can quickly and easily get back into gameplay. For me, this low barrier often makes them strangely more alluring than trying to run Arkham City again and trying to remember everything I feel I need to know, as well as what button throws the freeze grenade, if I even have it. Of course, I still like the bigger, deeper games better. The depth is welcome, the storylines intriguing, and a good game can get me to explore myself a bit.

But it’s still interesting to think about, both as a player ( why do I have more hours logged on my iPad than my PS3? ) and as a budding game designer. I’ll need to keep these things in mind as I make large and complicated games. Saving and loading the state of the game, that’s easy; getting a player back into a game after months of them not touching it, and restoring their mental state sufficiently that they’re not repeating tasks or getting frustrated, that’s hard - but definitely worthwhile to try to overcome.

Tuesday, January 15, 2013

Recursion

Recursion is an idea that seems very hard to ‘get’. Thinking about it doesn’t come naturally to most, myself included. While I can’t help you get to an ‘ah-ha!’ moment, I do have a mental framework for how to very quickly build a certain class of recursive function.

First, what’s recursion? It’s anything that repeats itself in a self-similar way. In programming, this generally means a function that calls itself repeatedly to do its task. A simple C++ example for calculating a factorial is this:


int factorial( int x ) {
    if ( x <= 1 ) return 1;  // base case; <= also guards against x == 0
    return x * factorial( x - 1 );
}

You’ll notice in the return statement, the factorial function calls itself again; that is a really simple example of recursion. This calculation could also be done iteratively, that is to say, using a loop or goto instead. I could also recode it to use a specialized variant of recursion known as tail recursion, which can be more efficient, but for now, I’m going to stick to keeping it simple.
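For the curious, a tail-recursive version might look like the sketch below. The trick is doing the multiplication -before- the recursive call, carrying the running product along in an extra parameter, so the recursive call is the very last thing the function does. A compiler that performs tail-call optimization can then reuse the same stack frame ( though note C++ compilers aren’t required to do this ):

```cpp
// Tail-recursive factorial: 'acc' accumulates the product as we go,
// so the recursive call is in tail position.
int factorial_tail( int x, int acc = 1 ) {
    if ( x <= 1 ) return acc;
    return factorial_tail( x - 1, x * acc );
}
```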

So, the first question to ask when making a recursive function is whether recursion is really necessary. While recursion can make for some neatly compact code and looks very clever, there are trade-offs associated. One of them is, in fact, cleverness. Coding to be clever can often backfire when you or somebody else has to go back and try to understand what the hell it was you were trying to do. Another tradeoff can be in performance. When a function calls itself, each call pushes a new stack frame, so every recursive call is putting more and more data on the stack. A simple iteration loop avoids this problem.
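For comparison, here’s the same factorial written as a plain loop. It uses one stack frame no matter how large x gets:

```cpp
// Iterative factorial: same result as the recursive version,
// but with no stack growth.
int factorial_loop( int x ) {
    int result = 1;
    for ( int i = 2; i <= x; ++i )
        result *= i;
    return result;
}
```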

So, we’ve decided to plow on ahead anyway, and make a recursive function. My first step is to start with the final step, or the simplest step of the recursion. What is the very smallest version of the problem I am dealing with?

For the factorial problem, it’s the factorial of 1. You don’t even need to do any multiplication for that; just return 1. For making a list or a tree, it’s adding a node to the empty list or tree. For the Fibonacci sequence, I actually have two ‘simplest’ conditions, the first two numbers in the sequence, 1 and 1. So I start with that.

Now, I go to the next most complicated step, and think about what’s needed. For the factorial, I now need to do some multiplication. What’s the factorial of 2? That’s 2 * 1 = 2, or the more complicated case ( 2 ) multiplied by my ‘end’ simplest case of 1. You can see how this is accomplished in the code above. For making a list or tree, well, I need to traverse the list. So I check the next node in line, using whatever sorting method ( or none at all ) I fancy. If the next node is null, or the next node is the one -just before- where I need to do an insertion, do the insertion; otherwise, call the function again, but this time with the address of the next node in line. For the Fibonacci sequence, it’s a little more complicated, but not much. Since each Fibonacci number is the sum of the two previous numbers in the sequence, I just need to make my return statement something like return ( fib( n - 2 ) + fib( n - 1 ) );
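Put together, the Fibonacci function looks like this, using the convention from above that the first two numbers in the sequence are both 1:

```cpp
// Recursive Fibonacci: two base cases, then the sum of the
// two previous numbers in the sequence.
int fib( int n ) {
    if ( n == 1 || n == 2 ) return 1;
    return fib( n - 2 ) + fib( n - 1 );
}
```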

And that’s it. Make sure my code does what I want it to, and call it done. It’s really that simple. The hardest code is for the list, and that’s because you’ll need to make sure you’re linking the nodes correctly. Otherwise, the algorithm for a linked list is, largely, ‘if null, make a node; if not, call this function again with the address of the next node’. Sooner or later, one of the function calls will hit the null node, and then it’ll just make a node there.
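As a sketch of the linked list case - the node type and the ‘no sorting at all, just append at the end’ policy here are my own assumptions, not a full implementation:

```cpp
struct Node {
    int value;
    Node* next;
};

// Recursively walk the list. 'link' is the address of the pointer
// we're currently looking at; when we hit the null pointer at the
// end of the list, make the new node there.
void insert( Node** link, int value ) {
    if ( *link == nullptr ) {
        *link = new Node{ value, nullptr };  // hit null: make a node
        return;
    }
    insert( &(*link)->next, value );  // otherwise, recurse on the next link
}
```

Sooner or later, one of the recursive calls hits the null pointer, and the new node gets linked in right there - exactly the ‘if null, make a node; if not, call this function again’ algorithm described above.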

So there you have it. I hope this was helpful.

Tuesday, January 8, 2013

C++ Compiling

I’ve found that the college doesn’t really do a good job of describing what, exactly, is happening when you hit ‘enter’ after the “g++ filename.cpp” command has been typed in. This is a general post I hope to be able to point people at in the future, and will be a -very- broad overview of what is happening when a file gets compiled. I’m not going to talk about tokens or machine code or any of that; I’m going to cover, quickly, the most common preprocessor directives, and broadly what it is the linker does.

First, preprocessor directives. If it begins with a #, it’s a preprocessor directive. These directives are not C++ code; they’re instructions to the preprocessor, and as such, they are executed before actual proper compilation begins. The #include statement is essentially a copy and paste operation. The file named in the #include statement will have its entire contents copied, and then pasted into the file at the location of the #include. If you use #include "file", with the double quotes, the preprocessor will start its search for the file in the current directory. If instead you use #include <file>, with angle brackets, the preprocessor will start looking in some location defined by your compiler, typically where your standard header files are.

Another useful set of preprocessor directives is #ifndef, #define, and #endif. The first one can be read as ‘if not defined’. This should be at the top of every one of your header files, and should be immediately followed by a #define statement. What this is telling the preprocessor is ‘if this hasn’t been defined yet, define it now’. Normally this definition will have a name similar to the name of the header file. So, for my header, nonsense.h, the full preprocessor directive should look like this:

#ifndef NONSENSE_H
#define NONSENSE_H

At the end of all the code in my header, I will put in a #endif. What this does is make sure the contents of the header aren’t included more than once in the same compilation, even if it’s #included by multiple files. So if I have main.cpp, nonsense.cpp, and whatever.cpp all with a #include "nonsense.h", nonsense.h will still only be processed once per file compiled. This is good, because nonsense.h should, as a header file, also have all my declarations in it, and C++ will get cranky ( read: not compile ) if it finds multiple definitions of the same thing.
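So the full skeleton of nonsense.h might look like this ( the declaration inside is just a placeholder ):

```cpp
// nonsense.h
#ifndef NONSENSE_H
#define NONSENSE_H

int doSomeNonsense( int x );  // declarations go here

#endif  // NONSENSE_H
```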

There is another common use for #define. It is often used to create constants, for example #define PI 3.14. It’s worth noting that it’s not making an actual constant, like const double pi = 3.14. All it’s doing is forcing a substitution. When the preprocessor hits this particular #define, it will go through your code, and everywhere it sees PI, it replaces it with 3.14 instead. Remember, the preprocessor does not know C++. Be careful when doing this to not treat PI ( or whatever your #define is ) like a variable.
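A quick illustration of the substitution, and of why PI isn’t a variable ( the function here is just a made-up example ):

```cpp
#define PI 3.14  // textual substitution, not a variable

double circleArea( double r ) {
    return PI * r * r;  // the preprocessor rewrites this as: 3.14 * r * r
}

// PI = 3.14159;  // error if uncommented: this would expand to
//                // 3.14 = 3.14159; which is not valid C++
```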

There are other uses for the preprocessor directives, but those are the most common ones. Now onto the linker. The linker is a promise keeper, of sorts. It checks the promises that you as a programmer have made, and makes sure that you’ve kept those promises in code. The promises you’ve -made- are your declarations. Keeping those promises happens in your definitions. A function declaration, for example, is this:

int factorial( int n );

That’s the promise. By making this declaration, you are promising the linker that later on, you will have a definition. Your definition might be something like this:

int factorial( int n ) {
    if ( n == 1 ) return 1;
    return n * factorial( n - 1 );
}

And that’s a promise kept. The linker also takes all the .o files generated during compilation and links them together into one executable file. Usually this will be static linking, where all the code actually exists in one executable file. However, you can also run across dynamic linking. Dynamic linking is complicated, and all I’ll say about it here is that when you’re using dynamic linking, the files will -not- all be compiled into a single executable file. Instead, there will be the executable file, and it will need some external code in order to run properly, usually in the form of DLLs ( dynamically linked libraries ) or SO ( shared object ) files.

Tuesday, January 1, 2013

Blog Revamp

I've noticed that a fair number of the blogs that I read on a regular basis tend to be focused in their subject matter. They're a combination of personal blog and work blog. Personal enough to make connections with readers and really reveal a little bit about the person writing. About work enough to have a tightly scoped subject matter, and to really reveal some interesting things about subjects I'm interested in. Also, since the people who bother blogging are usually experts in their field, I can learn new things from them. When those people are in -my- field, I often learn interesting new things and new ways to think about problems I'm facing.

As a result, I'm revamping my blog to 'fit in'. What I've mentioned above is only one way to make a 'good blog', but it's the way I'm going to follow. So instead of the previous model I was using, which was to treat the blog really very much like a personal journal, I'm going to narrow this blog's focus. My posts from here on out will primarily be concerned with my work in computer science. I'll periodically make an entry regarding my nuclear field work, since I feel everyone should know more about atomic processes, and maybe a bit about neurology or life lessons I've learned. And if I can't think of something to put up a particular week, well, I still have the recipe fallback.

Anyway. This is my little corner of the internet. I really don't expect anyone to read it, ever, but if someone does, I hope it either spurs conversation or they otherwise find it useful.