Thursday, December 29, 2016

Editions of Armor Class Part 1: No Armor

A few weeks ago one of Zenopus' excellent posts made me think about "armor class" in D&D again. On Google+ I even commented "I like that booklet too, but I really couldn't care less about AD&D armor classes. The original system just makes so much more sense. I feel a blog post coming on..." Sadly life happened and I couldn't find the time to write about the crazy in my head then. But now that I am safely tucked away in Germany for my year-end vacation, I figured I'd give it a quick shot.

Disclaimer: In true grognard-style I shall only consider "descending armor class" in the following. I don't care about "ascending armor class" systems, especially since the usual "selling point" of those doesn't apply once you use Delta's Target20 mechanic.
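For reference, here's how I understand Delta's Target20 with descending AC: roll a d20, add your attack bonus and the target's descending AC, and you hit on a total of 20 or more. A minimal sketch in Go, purely for illustration (the function and its names are mine, not Delta's):

// Attack roll under Target20 as I read it (needs math/rand):
// hit if d20 + attack bonus + descending AC of the target >= 20.
func hitsTarget20(attackBonus, descendingAC int) bool {
	roll := rand.Intn(20) + 1 // d20
	return roll+attackBonus+descendingAC >= 20
}

// A "normal man" (+0) against AC 9 hits on 11+, i.e. 50% of the time;
// a 1st-level fighter (+1) hits on 10+, i.e. 55%.

Descending AC plugs straight into the roll with no conversion needed, which is why the usual argument for ascending AC leaves me cold.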

Let's begin with the classic AC 9 versus AC 10 debate: Should an average character who is not wearing any armor be AC 9 as OD&D, B/X, and BECMI profess, or AC 10 as AD&D wants us to believe? For me, any answer that doesn't immediately consult the combat tables is of a purely religious nature. It really doesn't matter whether we start at AC 9 or AC 10, what matters is whether there's a difference in running a combat. Here I prefer the B/X (and to some extent BECMI) story:

Two average, unarmored, untrained, "normal humans" should have a 50% chance per round to hit (and most likely kill) each other.

I don't know about you, but that seems perfectly reasonable to me. It certainly shouldn't be more than 50% because that would imply skill where (by definition) none should exist. And if it was less than 50% things would just drag out longer.

If you consult the B/X Basic Set, page B27, you'll see that a "normal man" indeed needs an 11+ to hit AC 9, a chance of 50% on a d20 roll. Perfect! As for the "most likely kill" part: On page B25 we find that a successful attack does 1-6 points of damage (average 3.5) while page B40 explains that "normal humans" have 1-4 hit points (average 2.5). Death incarnate!

In the BECMI Expert Set (but not the Basic Set, go figure!) we find the same "11+ to hit AC 9" for "normal man" on page 29; alas the BECMI Basic Set says that "normal humans" have 1-8 hit points (average 4.5) so things are a little less deadly (never mind the other problems this change causes, sigh).

What about OD&D? On page 19 of "Men and Magic" we find (perhaps surprisingly?) that "normal men equal 1st level fighters" which means they only need a 10+ to hit AC 9, a chance of 55% on a d20 roll. Granted, it's only a 5% difference, but that seems wrong to me. Why would an untrained combatant have the same skill as a trained one?

AD&D does away with "normal men" for the most part, replacing them with the notion of "0 level" characters (for which only humans and halflings qualify?). AD&D also recalibrates to AC 10 for "no armor" of course. Well, at least starting in the Player's Handbook it does; the Monster Manual seems to be written to the original AC 9 for "no armor" instead. But at least Gary manages to stay somewhat true to my B/X story: On page 74 of the Dungeon Master's Guide we learn that "0 level" characters need 11+ to hit AC 10, a chance of 50% on a d20 roll again. Of course average hit points are "off" as in BECMI, see page 88 of the Dungeon Master's Guide.

So then... What should "no armor" be, AC 9 or AC 10? As I said before, it doesn't matter! B/X (and to a lesser degree BECMI and even AD&D) get it "right" as far as I am concerned, only OD&D has it "wrong" since it doesn't distinguish trained from untrained combatants.

Of course there is one difference after all: I strongly prefer single-digit AC values, and so in the final analysis, AD&D is out. But I have to admit that this preference is mostly "religious" as well, not truly "technical" as it were. True, Target20 is easier with single-digit AC values, but since that's not a standard mechanic I can't really use it to "rationalize my irrationality" too much. Does "neater table layout with single digits" count?

Friday, October 14, 2016

The S.M.A.R.T. Overflow

Before today, I didn't realize that S.M.A.R.T. has overflow issues. I should say that I spent most of yesterday replacing three disks in my home machine's RAID-10, so I was constantly using smartctl to double-check stuff. When I got to work today, I poked around the disks in my server and noticed this:

...
SMART Self-test log structure revision number 1
Num  Test_Description ... LifeTime(hours) ...
# 1  Short offline    ... 2484            ...
# 2  Short offline    ... 2469            ...
# 3  Short offline    ... 2445            ...
...

This made absolutely no sense, because I knew that most of the disks in there were certainly older than a few months. After some digging, I came to realize what the problem is:

...
SMART Self-test log structure revision number 1
Num  Test_Description ... LifeTime(hours) ...
# 1  Short offline    ... 1276            ...
# 2  Short offline    ... 62136           ...
# 3  Short offline    ... 61776           ...

...

The actual value of Power_On_Hours for that disk is 66812. Subtract 1276 from that and what do we find? 65536, of course. So the lifetime recorded with each self-test is apparently stored as an unsigned 16-bit value, whereas the total hours recorded for the disk itself are stored as something wider. Here's hoping that people who write scripts telling them which disks to swap are aware of this...
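If you do write such a script, correcting for the wrap-around is simple once you know it's there: use the wider Power_On_Hours value to figure out how many full 65536-hour wraps the 16-bit self-test field has lost. A quick Go sketch; the helper is hypothetical and has nothing to do with smartmontools itself:

// Reconstruct the true self-test hour count from the 16-bit value in
// the self-test log and the wider Power_On_Hours attribute.
func actualSelfTestHours(logged, powerOn int) int {
	wraps := (powerOn - logged) / 65536
	return logged + wraps*65536
}

// actualSelfTestHours(1276, 66812) == 66812, the most recent test above;
// actualSelfTestHours(62136, 66812) == 62136, a test run before the wrap.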

Thursday, June 30, 2016

Choice of Class: Enforce or Encourage?

For as long as I can remember, I've been a fan of "minimum ability score"-type requirements for classes. See, for example, my "addendum" at the bottom of this post or, more recently, these posts.

It just made sense to me that fighters should have (at least) average strength and that wizards should have (at least) average intelligence. In the case of the wizard, I still think it makes perfect sense even from an "in game" perspective: Why would that archmage waste his or her time training a moron? In the case of the fighter, well, maybe strength is not so important: After all, one purpose of having many soldiers (whether weak or strong) on the field is just so that more can die before whatever lord commands them actually loses a battle. (The term "cannon fodder" exists for a reason.)

But more importantly, I just couldn't see why anyone would want to play a weak fighter or a dumb wizard. So the idea of using "minimum scores" to control what classes a player can pick seemed perfectly alright, even if it meant that there would be a small percentage of characters who can't qualify for any class at all: Just re-roll those, problem solved.

However, it recently dawned on me that I don't speak for everybody. (Shocker!) What if there is a player who does enjoy that weak fighter? Maybe that fighter, while physically not "up to snuff" as it were (low strength), is a master strategist and leader (high intelligence and charisma)? True, the player could choose a wizard instead, but that would make for a very different type of leader, maybe not what they were going for.

(Things get worse if you allow players to multi-class. Now the ability scores would have to be good enough for two classes, something that's not particularly likely with 3d6 even if the minimum requirements are low-ish.)

Instead of enforcing the set of classes that a given character can choose from, it may be preferable to encourage certain choices (and to discourage others of course). The "good news" is that we already have a mechanic that does just that, and it has existed in all relevant versions of D&D since 1974: Experience point adjustments for prime requisites. Here's what the B/X rules say:


The ability most important to a class is called the prime requisite for that class. The higher the prime requisite score, the more successful that character will be in that class.
Basic Rulebook, Page B6

The details are not horribly important, and you probably know them anyway, so let's just summarize that the score in a prime requisite results in a bonus (or penalty) of up to +10% (or -20%) on the experience points earned by the character. Now I should first point out that I used to hate these adjustments. The math is not too horribly bad, but we're still left with a bag of questions:

  • Exactly what ability score counts for the experience adjustment? The initial score when the character was created? The current score which might be adjusted by a magic item or a curse?
  • Does a change in a prime requisite ability score retroactively affect the current experience point total? If so, raw experience points and bonus experience points should be tracked separately.
  • What is the "in game" justification for the bonus (or penalty) on experience points? The D&D experience system is already on pretty weak grounds (exactly how does getting richer make someone better at picking locks again?) and it seems these adjustments make even less sense in that light.

Note, however, that minimum ability score requirements actually raise many of the same questions: If you're a fighter with strength 9 but get cursed to have strength 6, are you still a fighter? If you rolled a character with strength 4 but the DM grants you Gauntlets of Ogre Power at character creation, can you pick the fighter class? And what happens when you lose those gauntlets? Also, regardless of whether we consider XP = GP good or bad, that's still the default in old-school D&D, so why worry when the system gets a little more insane? It's already nuts but most people don't really mind.

What I came away with after thinking about this for the past week or so was mildly surprising to me:

After 20+ years of disliking the prime requisite experience adjustment rule, I now think it's a stroke of genius!

It allows the player to make more choices for his or her character, and that's never a bad thing (well, almost never, let's hope newbies will have a supportive DM to cut down the number of choices a little). It encourages certain choices, but it doesn't force anything. In fact, coming up with a background that explains why that dunce with intelligence 6 was able to get training as a wizard actually sounds like a lot of fun.

In my particular house rules, things get even better. Say someone wants to multi-class as a fighter/priest to approximate a paladin-like character. If they have average strength and wisdom, they simply progress on the slower XP table for characters with two classes. If they have excellent strength and wisdom for a +10% bonus on XP each, they actually progress with +20% on that harder/slower table. If they are weak (-10% penalty due to low strength) but pretty wise (+10% bonus due to high wisdom) things cancel out: It's still a viable paladin-like character.
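To make the bookkeeping concrete, here's a tiny sketch of how I'd compute the adjustment under the house rules described above; the helper and its name are hypothetical, and multi-class adjustments are simply summed:

// Apply prime requisite adjustments (percentages like +10 or -20) to a
// batch of earned experience points; multiple adjustments are summed.
func adjustXP(earned int, adjustments ...int) int {
	total := 0
	for _, a := range adjustments {
		total += a
	}
	return earned + earned*total/100
}

// adjustXP(1000, 10, 10) == 1200  (strong and wise fighter/priest)
// adjustXP(1000, -10, 10) == 1000 (weak but wise: adjustments cancel)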

So I am off to rewriting my house rules yet again: No more minimum ability score requirements for classes! (Hmm, should I add some for races now?)

Friday, April 22, 2016

Tracking Positions in Go: Why Composition Rocks

In this post I discuss the genesis of the streampos package for Go that I recently posted on github. I don't normally write about code I post, but in this case I learned something cute that I wanted to share. Maybe you'll find it enjoyable too?

The story starts with me writing some Go code to process XML files. Of course I used the encoding/xml package from the Go standard library to do this. A few hours into the job I needed to generate error messages when something is not quite right in the XML file. (If you've dealt with large-ish XML files that get edited manually, you probably know that it's quite easy to make the occasional mistake that syntax highlighting alone will not protect you from.) In order for those error messages to be useful, they should tell the user where in the XML file the problem was detected. And that's when I ran into trouble: There's no straightforward way to get, say, line numbers out of encoding/xml!

Sure, their code tracks line numbers in support of their error messages (check SyntaxError for example) but if you have your own errors that go beyond what the standard library checks, you're out of luck. Well, not completely out of luck. If you're using Decoder to do stream-based XML processing, you can get something moderately useful: the InputOffset method will give you the current offset in bytes since the beginning of the stream.

What do you have to do to turn that into error messages of the kind users expect, that is, error messages in terms of line (and maybe column) numbers? First you somehow have to get your hands on the raw input stream. Then you look for newlines and build a data structure that allows you to map a range of offsets in the stream into a line number. With a little more code on top, you can even get column numbers out if you want them. Sounds like fun, but just how should we do it?

To use Decoder you have to give it a Reader to grab input from. This is promising because Go very much prides itself in the power that simple interfaces like Reader and Writer provide. Indeed, the pattern of wrapping one Reader inside another (or one Writer inside another) is fundamental in much of the standard I/O library. But we can also find those interfaces in "unrelated" parts of the library: The Hash interface, for example, is a Writer that computes hashes over the data written to it.

So the Decoder wants a Reader, but thanks to interfaces any Reader will do. It's not a big leap to think "Hey, I can hack my own Reader that tracks line numbers, and I'll pass that one to Decoder instead of the original Reader!" That's indeed what I considered doing for a few minutes. Luckily I then realized that by hacking a Reader I am actually making myself more problems than I had to begin with.

I want to solve the problem of building a mapping from offsets to line numbers. However, as a Reader, I also have to solve the problem of providing data to whoever is calling me. So whatever the original problem was, as soon as I decide to solve it inside a Reader, I immediately get a second problem. And it's not an entirely trivial problem either! For example, I have to consider what the code should do when asked to provide 37 bytes of data but the underlying Reader I am wrapping only gave me 25. (The answer, at least in Go land, is to return the short read. But notice that it's something I had to think about for a little while, so it cost me time.) On a more philosophical level, inserting a Reader makes my code an integral part of the entire XML thing. I never set out to do that! I just wanted a way to collect position information "on the side" without getting in anybody's way.

In the specific case of encoding/xml things actually get even funnier. It turns out that Decoder checks whether the Reader we hand it is actually a ByteReader. If it's not, Decoder chooses to wrap the Reader we hand it again, this time in a bufio.Reader. So either I have to implement a ByteReader myself, or I have to live with the fact that plugging in my own Reader causes another level of indirection to be added, unnecessarily to some extent. (That's yet another problem I don't want to have to deal with!) There really should be a better way.

And there is: Instead of hacking a Reader, just hack a Writer! I can almost see you shaking your head at this point. "If the Decoder wants a Reader, what good is it going to do you to hack a Writer?" I'll get to that in a second, first let's focus on what being a Writer instead of a Reader buys us.

The most important thing is that as a Writer, we can decide to be "the sink" where data disappears. That is, after all, exactly what the Hash interface does: It's a Writer that turns a stream into a hash value and nothing else, the stream itself disappears in the process. What's good enough for Hash is good enough for us: We can be a Writer that turns a stream into a data structure that maps offsets to line numbers. Note that a Reader doesn't have this luxury. Not ever. True, there could be Readers that are "the source" where data appears out of thin air, but there are no (sensible) Readers that can be "the sink" as described above.

A secondary effect of "being the sink" is that we don't have to worry about dealing with an "underlying Writer" that we wrap. (As a Reader, we'd have to deal with "both ends" as it were, at least in our scenario.) Also, just like in the case of Hash, any write we deal with cannot actually fail. (Except of course for things like running out of memory, but to a large degree those memory issues are something Go doesn't let us worry about in detail anyway.) This "no failures" property will actually come in handy.
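To make the "Writer as sink" idea concrete, here's roughly what such a position-tracking Writer can look like. The names are mine and the real streampos code differs in detail; the point is simply that a Writer can quietly record newline offsets and later binary-search them (this sketch needs "sort" imported):

// lineIndex records the byte offset of every newline it sees.
type lineIndex struct {
	newlines []int // offsets of '\n' bytes, in increasing order
	total    int   // total bytes seen so far
}

// Write makes lineIndex an io.Writer that acts as "the sink": it never
// fails, it just remembers where the newlines were.
func (li *lineIndex) Write(p []byte) (int, error) {
	for i, b := range p {
		if b == '\n' {
			li.newlines = append(li.newlines, li.total+i)
		}
	}
	li.total += len(p)
	return len(p), nil
}

// Line maps a byte offset to a 1-based line number via binary search.
func (li *lineIndex) Line(offset int) int {
	return sort.Search(len(li.newlines), func(i int) bool {
		return li.newlines[i] >= offset
	}) + 1
}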

Okay, so those are all nice things that will make our code simpler as long as we hack a Writer and not a Reader. But how the heck are we going to make our Writer "play nice" with the Decoder that after all requires a Reader? Enter a glorious little thing called TeeReader. (No, not TeaReader!) A TeeReader takes two arguments, a Reader r and a Writer w, and returns another Reader t. When we read from t, that request is forwarded to r. But before the data from r gets returned through t, it's also written to w. Problem solved:

lines := &streampos.Writer{}
tee := io.TeeReader(os.Stdin, lines)
dec := xml.NewDecoder(tee)

There's just one small problem with TeeReader: If the write we're doing "on the side" fails, that write error turns into a read error for the client of TeeReader. Of course that client doesn't really know that there's a writer involved anywhere, so things could get confusing. Luckily, as I pointed out above, our Writer for position information never fails, so we cannot possibly generate additional errors for the client.
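Putting the pieces together with the hypothetical lineIndex from above (the real streampos.Writer has its own lookup method, which I won't guess at here), the error-message payoff looks something like this:

lines := &lineIndex{}
dec := xml.NewDecoder(io.TeeReader(os.Stdin, lines))
// ... decode tokens and run your own checks ...
if somethingIsWrong { // whatever your application considers an error
	log.Printf("line %d: unexpected element", lines.Line(int(dec.InputOffset())))
}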

I could end the post here. But I don't want to sweep under the rug that there's a little cheating going on. Where? Well, you see, TeeReader is not a ByteReader either. So regardless of how nice the code in our Writer is, and regardless of how cute the setup above is, we incur the cost of an extra indirection when NewDecoder decides to wrap the TeeReader. What we're doing is shoving the problem back to the standard library. It's possible that TeeReader will eventually grow a ReadByte method at which point the needless wrapping would cease. However, that's not very likely given what TeeReader is designed to do. But note that this concern arises specifically in connection with encoding/xml. There are probably many applications that do not require methods beyond the Reader interface.

Speaking of other applications. In the Go ecosystem, interfaces such as Reader and Writer are extremely prominent. Lots of people write their code to take advantage of them. The nice thing is that streampos.Writer coupled with TeeReader provides a generic way to handle position information for all applications that use a Reader to grab textual data. Of course not all applications do, and not all applications will be able to take full advantage of it. But if you're writing one that does, and if you want to have position information for error messages, well, it's three lines of code as long as you already track offsets. And you have to track something yourself because after all only your application knows what parts of a stream are interesting.

I very much like that Go encourages this kind of reusability by composing small-ish, independently developed pieces. (Actually, that even confirms a few of the claims I made in my 2003 dissertation. Yay me!) The only "trouble" is that there are already a few places in the standard library where similar position tracking code exists: At the very least in text/scanner and in the Go compiler itself. Whether that code could use my little library I don't know for sure, maybe not. But I guess it should be a goal of the standard library to refactor itself on occasion. We'll see if it does...

One last note: I've been teaching a course on compilers since 2001, and since about 2003 I've told students to use byte offsets as their model of positions. I've always sold this by explaining that offsets can be turned into lines and columns later but we don't have to worry about those details in the basic compiler. Strangely enough I never actually wrote the code to perform that transformation, until now that is. So once I teach the course mostly in Go, I can use my own little library. Neat. :-)

Wednesday, April 20, 2016

Terminal Multiplexers: Simplified and Unified

I have a love-hate relationship with terminal multiplexers. I've been using screen for years, but mostly on remote servers, and mostly just to keep something running on logout and later get back to it again. But on my home machine or my laptop, I've avoided terminal multiplexers like the plague, mostly because of their strange (or is "horrid" more appropriate?) user interfaces.

For a really long time now, I've simply used LXTerminal and its tabs as a crutch, but I've recently grown rather tired of that approach. When you're writing (more or less complicated) client/server software, it really pays off to have both ends running side-by-side: switching tabs, even with a quick key combination, gets old fast. Also LXTerminal lacks quite a few features, true-color among them. What I really wanted to use was st, but that lean beast doesn't even have a scrollback buffer (so forget about tabs or a contextual menu).

Wait, so why don't I just use a tiling window manager like dwm and open several terminals? Sadly I've been a spoiled Openbox person since 2008 and a spoiled "overlapping windows" person since 1987 or so (thank the Amiga for that). I like the idea of a tiling window manager exactly for a bunch of terminals and not much else. In the long run I may actually become a dwm nut, but not just yet.

So I had to face it, the time was right to actually learn a terminal multiplexer for real. But which one? For text editors there's an easy (if controversial) answer: just learn vi or emacs, both of those you're likely to find on any UNIX system you may ever have to work with. (Heck even busybox has a vi clone.) That seems to suggest that I should spend time on learning screen for real: It's the oldest terminal multiplexer out there, so it's most likely to be available just about everywhere.

The only problem is that it sucks. If you want to see why it sucks, just start htop in screen. I don't care if that's a terminfo problem or not, in screen the htop interface looks messed up. It's still usable, but you know, the eyes want to be pleased as well (yes, even in the terminal). But it sucks even more: Just split the terminal and then run cat </dev/urandom in one of the splits. Chances are you'll get random garbage spewed all over the other split as well. Doesn't inspire much confidence in how well-isolated those splits are, does it?

So of course I tried tmux next, a much more recent project and maybe "better" because it doesn't have as much buggy legacy code. Sadly it immediately fails the htop test as well, but at least it does a little better when hit with the random hammer: No more spewing into the other split, but still the status line gets trashed and once you stop hammering some of the UI elements are just a little out of it. A little groggy most likely?

One more attempt, let's try dvtm instead. If you don't know, that's a dwm clone inside the terminal. And wow, it passes both the htop test and the random hammer with flying colors, only a few of the border elements get trashed. On the downside, it's least likely to be installed on a random UNIX system you need to work on. That, and it has a bunch of opinionated layout assumptions that may not be to your liking. I, however, like them just fine, at least for the most part.

At this point I started asking myself what I actually need in my terminal multiplexer, and I arrived at a rather short list of features:

  • create a new terminal window with a shell running in it (splitting the terminal area horizontally or vertically or at least sanely)
  • switch from one terminal window to another quickly and reliably
  • destroy an existing terminal window and whatever is running in it

That's really it as far as interactive use on my home machine or laptop is concerned. Sure, being able to detach and reattach sessions later is great, especially when working remotely. But for that use-case I already have screen wired into my brain and fingers. (Also it turns out that dvtm doesn't persist sessions in any which way, so I'd need to use another tool like dtach or (more likely) abduco with dvtm.)

So what am I to do? Which one of these am I to learn and put deep into my muscle memory over the next few weeks? That's when it struck me: I can probably learn none of them and all of them at the same time! After all, I have very few requirements (see above) and as luck would have it, all of the tools are highly configurable. Especially in regards to their keybindings! Can it be done?

Can I configure all three tools in such a way that one set of keybindings will get me all the features I actually need?

Let's see what we find in each program for each use-case, then try to generalize from there. Let's start with screen:

  • CTRL-a-c create a new full-sized window (and start a shell)
  • CTRL-a-n switch to the next full-sized window
  • CTRL-a-p switch to the previous full-sized window
  • CTRL-a-S split current region horizontally (no shell started)
  • CTRL-a-| split current region vertically (no shell started)
  • CTRL-a-TAB switch to the next region
  • CTRL-a-X remove current region (shell turns into full-sized window)
  • CTRL-a-k kill current window/region (including the shell)
  • CTRL-a-d detach screen

So obviously the concepts of window and region are a bit strange, but for better or worse that's what we get. I am pretty sure that regions (splitting one terminal into several "subterminals") were added later? In terms of my use cases it's a bit sad that creating a new region does not also immediately launch a shell in it; instead I have to move to the new region with CTRL-a-TAB and then hit CTRL-a-c to do that manually. It's also strange that removing a region doesn't kill the shell running in it but turns that shell into a "window" instead. And of course navigating windows is different from navigating regions. Let's see how tmux does it:

  • CTRL-b-c create new full-sized window (and start a shell)
  • CTRL-b-n switch to next full-sized window
  • CTRL-b-p switch to previous full-sized window
  • CTRL-b-" split current pane horizontally (and start a shell)
  • CTRL-b-% split current pane vertically (and start a shell)
  • CTRL-b-o switch to next pane
  • CTRL-b-UP/DOWN/LEFT/RIGHT switch to pane in that direction
  • CTRL-b-x kill current pane (as well as shell running in it)
  • CTRL-b-& kill current window (as well as shell running in it)
  • CTRL-b-d detach tmux

So we can pretty much do the same things, but not quite. That's annoying. Note how creating a new "pane" launches a shell and how killing a "pane" kills the shell running in it? That's exactly what screen doesn't do. Also note that tmux has a notion of "direction" with regards to panes, something completely lacking in screen. And note that "windows" are still different from "panes" in terms of navigation. Finally, what's dvtm like?

  • CTRL-g-c create a new window (and start a shell, splitting automatically)
  • CTRL-g-x close current window (and kill the shell running in it)
  • CTRL-g-TAB switch to previously selected window
  • CTRL-g-j switch to next window
  • CTRL-g-k switch to previous window
  • CTRL-g-SPACE switch between defined layouts
  • CTRL-\ detach (using abduco, no native session support)

Note that dvtm's notion of "window" unifies what the other two tools call "window" (full-sized) and "region" or "pane" (split-view). But the biggest difference is that dvtm actually has an opinion about what the layout should be. You can choose between a few predefined layouts (with CTRL-g-SPACE) but that's it. One of these layouts shows one "window" at a time, but the navigation between "windows" stays consistent regardless of how many you can see in your terminal.

So what's the outcome here? Personally, I very much prefer what dvtm does over what tmux does over what screen does. Obviously your mileage may vary, but what I'll try to do here is make all the programs behave (more or less, and within reason) like dvtm.

Of course there are plenty of issues to resolve. For starters, two of the programs care about whether a split is vertical (top/bottom) or horizontal (left/right) while one doesn't. The simplest solution I can think of is to have separate key bindings for each but to map them to the same underlying command in dvtm. This way I'll learn the key bindings for the worst case while enjoying the features of the best case.

Next screen doesn't automatically start a shell in a new split while the other two do. Luckily that's easy to resolve by writing new and slightly more complex key bindings. There are a few additional issues we'll get into below when we talk about how to configure each program, but first we need to "switch gears" as it were: It's time to do some very basic user interface design.

Let's face the most important decision first, namely which "leader key" should I use? We have CTRL-a, CTRL-b, and CTRL-g as precedents, but we don't necessarily have to follow those.

I played with various keys for a while to see what's easiest for me to trigger. My favorite "leader keys" would have to be CTRL-c and CTRL-d. Sadly, those already have "deep meaning" for shells, so I don't want to use either of them. The next-best keys (for my hands anyway) would be CTRL-e and CTRL-f. Sadly, CTRL-e is used for "move to end of line" in bash and I tend to use that quite a bit; screen's CTRL-a sort of shares that problem. In terms of "finger distance" I also find CTRL-a "too close and squishy" and CTRL-b/CTRL-g "too far and stretchy" for myself.

Based on this very unscientific analysis, I ended up picking CTRL-f as the "unified leader key" for all three multiplexers. The drawback of this is that I'll train my muscle memory for "the wrong key" but between three programs, any key would have been "the right key" only for one of them anyway. Big deal. (Yes, in bash CTRL-f triggers "move one character forward" but that's not really a big loss either, is it?)

With the leader settled, how shall we map each use-case to actual key strokes? The guiding principle I'll follow is to rank things by how often I am likely to do them. (Of course that's just a guess for now, if it turns out to be wrong I'll adapt later.) I believe that the ranking is as follows:

  1. Navigate between splits.
  2. Create a new split.
  3. Remove the current split.

If that's true, then moving to the next split should be on the same key as the leader, so the sequence would be CTRL-f-f. That even works mnemonically if we interpret "f" as "forward" or something. Now let's find keys to create splits. At first I thought "something around f" would be good, the idea being that I'd hit the key with the same finger I used to hit "f" a moment before. But that's actually slower than hitting a key with another finger on the same hand. The way I type, I hit "f" with my index finger and my middle finger naturally hovers over "w" in the process. So CTRL-f-w it is, after all that's again a mnemonic, for "window" this time. But what kind of split should it be, horizontal or vertical? I settled on horizontal because in dvtm's default layout, the first split will also be horizontal (left/right). So I'll get some consistency after all. The other key that's quick for me to hit is "e" so CTRL-f-e shall be the vertical (top/bottom) split. That leaves removing the current split, and for that CTRL-f-r is good enough, again with some mnemonic goodness. Here's the summary:

  • CTRL-f-f switch to next split
  • CTRL-f-w split horizontally (and start a new shell)
  • CTRL-f-e split vertically (and start a new shell)
  • CTRL-f-r remove current split (and terminate the shell)

Sounds workable to me. All that remains is actually making the three programs behave "similarly enough" for those keystrokes. As per usual, we'll start with screen. The file to create/edit is ~/.screenrc and here's what we'll do:

hardstatus ignore
startup_message off
escape ^Ff
bind f eval "focus"
bind ^f eval "focus"
bind e eval "split" "focus" "screen"
bind ^e eval "split" "focus" "screen"
bind w eval "split -v" "focus" "screen"
bind ^w eval "split -v" "focus" "screen"
bind r eval "kill" "remove"
bind ^r eval "kill" "remove"

I'll admit right away that my understanding of "hardstatus" is lacking. What I am hoping this command does is turn off any status line that would cost us terminal real estate: I want to focus on the applications I am using, not on the terminal multiplexer and what it thinks of the world. The "startup_message" bit just makes sure that there's no such thing; many distros disable it by default anyway, but for some strange reason Ubuntu leaves it on. (I am all for giving people credit for their work, but that message requires a key press to go away and therefore it's annoying as heck.)

The remaining lines simply establish the key bindings described above, albeit in a somewhat repetitive manner: I define each key twice, once with CTRL and once without, that way it doesn't matter how quickly I release the CTRL key during the sequence of key presses. (If you have a more concise way of doing the same thing, please let me know!)

We should briefly look at what we lose compared to the default key bindings. Our use of "f" overrides "flow control" and luckily I cannot think of many reasons why there should be "flow control" these days. Our use of "w" overrides "list of windows" but since I intend to mostly use splits that's not a big problem. Our use of "e" comes for free because it's not used in the default configuration. Finally, our use of "r" overrides line-wrapping, something I don't imagine caring about a lot. So we really don't lose too much of the basic functionality here, do we?

Next we need to configure tmux. The file to create/edit is ~/.tmux.conf and here's how that one works:

set-option -g status off
set-option -g prefix C-f
unbind-key C-b
bind-key C-f send-prefix
bind-key f select-pane -t :.+
bind-key C-f select-pane -t :.+
bind-key w split-window -h
bind-key C-w split-window -h
bind-key e split-window
bind-key C-e split-window
bind-key r kill-pane
bind-key C-r kill-pane

Different syntax, same story. First we switch off the status bar to get one more line of terminal real estate back. Then we change the leader key to CTRL-f and define (repetitively, I know) the key bindings we settled on above. Easy. (Well, except for figuring out the select-pane arguments, I need to credit Josh Clayton for that. Yes, it's in the man page, but it's hard to grok at first.)

What do we lose? By using "f" we lose the "find text in windows" functionality, something I don't foresee having much use for. By using "w" we lose "choose window interactively" which seems equally useless. Luckily "e" is once again a freebie, a key not used by the default configuration. Finally, by using "r" we lose "force redraw" which is hopefully not something I'll need very often. Seems alright by me!

Last but not least, let's configure dvtm. In true suckless style there is of course no configuration file because that would "attract too many users with stupid questions" and the like. (Arrogance really is bliss, isn't it? Let me just say for the record that I really appreciate the suckless ideals and their software. But that doesn't change the fact that configuration files are convenient for all users (not just idiots!) who don't want to manually compile every last bit of code on their machines. But I digress...) So we have to edit config.h and recompile the application, which in Gentoo amounts to (a) setting the savedconfig USE flag, (b) editing the file

/etc/portage/savedconfig/app-misc/dvtm-0.14

and then (c) re-emerging the application. Here's the (grisly?) gist of it:

#define BAR_POS BAR_OFF
#define MOD CTRL('f')
...
{{MOD, 'f',}, {focusnext, {NULL}}},
{{MOD, CTRL('f'),}, {focusnext, {NULL}}},
{{MOD, 'w',}, {create, {NULL}}},
{{MOD, CTRL('w'),}, {create, {NULL}}},
{{MOD, 'e',}, {create, {NULL}}},
{{MOD, CTRL('e'),}, {create, {NULL}}},
{{MOD, 'r',}, {killclient, {NULL}}},
{{MOD, CTRL('r'),}, {killclient, {NULL}}},

There are a few more modifications I didn't show, but just to remove existing key bindings that conflict with ours. Which brings us to the question what we lose. Our use of "f" costs us the ability to choose one specific layout, something I can live without for sure. Our use of "w" is a freebie, it's unused by default. Our use of "e" costs us "copymode" which, so far, I didn't need; eventually I may have to revisit this decision and maybe remap the functionality elsewhere. Finally, our use of "r" costs us being able to "redraw" the screen, something I hope I won't need too much.

Wow, what a project. Had I known I'd spend about 10 hours on learning all the relevant things and writing this blog post, maybe I would not ever have started. But now that it's all done, I am actually enjoying the fruits of my labor: Splitting my terminal regardless of what ancient UNIX system I am on? Using fancier tools on newer machines, including my own? Debugging client/server stuff with a lot less need to grab the mouse and click something? It's paradise! Well, close to it anyway...

Update 2016/04/23: My "unified" configuration files are now available on github.com if you want to grab them directly.

Wednesday, March 2, 2016

Revision Control Matters

I am currently teaching our project course in video game design again. I decided early on that all student teams should use a revision control system, specifically git through Bitbucket, to coordinate their work. Sadly I did run into some resistance regarding this requirement with many students stating that Dropbox and Google Drive are much more convenient for them. So I thought I'd ask around among my game development friends for opinions, and I got a few. But first let me paraphrase my "official" question to them:

In your esteemed opinion, how important is it for people working on a video game together to be able to use a revision control system to coordinate their work? How many of you have, in your gaming work, been able to get away without using one? Do you think it's less important for artists? How about artists who not only produce art (graphics, sound, etc.) but also write code/scripts?

The answers below are in no particular order. For some I've cleaned up the grammar a little. I've also "anonymized" the people involved because I wasn't sure how they'd like being quoted in public. But rest assured that they are all experienced software developers who've done at least several gaming projects. Let's start with JB:

This is beyond question. Just no question. ... I would never work without a revision control system, even working by myself. Working with a team, running the committed code through automated tests every night, (yelling at the person who broke the code is beyond satisfying), staying in sync, knowing that everything everyone is doing will continuously still work together—mandatory. You will encounter crunch-time bugs that can only be found by rolling back through revisions until the bug doesn't exist. And you want to be creative and unlimited? Branching ... so you can try out something radical is priceless! ... You will learn why you need revision control systems eventually, no matter how thick you are. But at the end of the day, you want to work for a good company with excellent people, and they aren't going to want you unless you are on board with revision control systems. End of story. It's a career requirement. So suck it up buttercup.

Next we have AD with a slightly different opinion:

I would not force students who are resisting. Things like Dropbox provide data sharing which is most of what 1–2 programmer teams need. If they lose their code you can always make fun of them. Personally I always use revision control but my games are very code-centric and I came into game development with a software engineering background. One of the things I like about games is that nobody cares about code or tools—only results. Most software engineers are evaluated by other software engineers and a lot of quasi-religious groupthink comes out of that. Game development is a reality check: Can I really use these software skills to produce something normal people want? Software engineering classes are a better place to force students to learn about revision control, the game development kids should focus on making fun games with minimum friction.

I should say that the student teams in our course are 4–5 students, not 1–2 as AD had assumed. And please note that AD still holds students responsible in case their ad-hoc approach fails. Next we have DC:

For the past few weeks I've been working on a simulator entirely on my own and I'm still kicking myself for not using version control: I've noticed a change in the outputs in the last couple days that is inexplicable and I don't have a recent snapshot I can go back to to track down what caused it. Certainly I've never worked at a game company, or heard about any acquaintances doing so, that failed to use version control. It's a universal standard. Should it be required of the students? Arguments in both directions: (a) using it gets them experience with an industry standard; (b) not using it will force them to confront the problems that crop up, and deeply understand why they want version control in the future. ... I usually side with (a) although I was just reading a part of the JHU orientation which ... emphasizes students learning more on their own.

It's true that we try to encourage "learning on your own" and I guess my courses are particularly "infamous" in that regard. However, I would still hold that it's appropriate to force students to do it, in a company environment they most likely would be "forced" as well. But hey, now I am throwing too much of my own perspective in here. Let's get to AS:

It's a requirement!

That's certainly the most succinct reply I received. So on to SC:

I'd say it's a requirement for the code base at the very least. Git or Perforce can be punishing with Unity files, but it's ultimately best. Art I'm more flexible with. An old job tried to handle art with Git and it was bad. Currently our artists maintain stuff on their own Perforce server (since it's unlikely that more than one artist is working on any given asset) and then upon completion things move to the Unity repository.

The point here is that art assets are hard to merge automatically and so a system that allows "locking" is preferable over one that forces merging. Note, however, that there's still a revision control system in place, albeit a different one. I don't have a way to give students Perforce licenses, but maybe Fossil 2.0 will be an option in the future. Of course Subversion supports locking as well, it just feels slightly antiquated these days. Next we have BR:

I was able to get away without one... In Nineteen Ninety Freaking Six!

Speaks for itself, doesn't it? On to NMC's opinion:

Sometimes, for a small project with only one person working on it, you can get away just periodically taking your entire project folder and compressing it to a zip file. This is barely adequate for one person working alone on a recreational project, and even in that situation version control would make your life easier.

I think I am starting to see a pattern here, don't you? Finally we have JC (who refers back to BR above):

Is it essential for every project? No. Is it useful for every project? Yes. A career hallmark of game development (and maybe the tech industry itself) is enjoying learning new things constantly. There is always a better way. Either you love that or you think you've learned enough. The people who think they know enough don't tend to last or have passed into a phase of their career that likely won't last in a satisfying way. Our current view is Git (or Hg) for code and Perforce for content. They just make life easier. For the size of Unity projects you will encounter in college, Git (or Hg) alone is totally fine.

Git is something that everyone hates and doesn't understand if they use it rarely. If they use it regularly, they usually love it. It's crazy fast and lets you do all the things you want to do. Perforce is nice in that it is rock-solid and handles large binaries well. But you also lose information every time you merge (unlike a distributed system) and branching is painful, particularly after the fact. And, of course, it isn't free ...

I probably agree with BR: let them do whatever but no excuses accepted for lost work. That is the reality. But it's definitely a plus for us as a company when an entry-level candidate has real experience with the major source control systems. For one, it's just a skill like anything else. But, second, it gives some indication that they've done enough work to understand the value.

On a side note: BR "got away with it" until I was hand-merging changes from him and SM every day ... in addition to writing the core game systems. To his credit, BR fully supported the change. And, agh, we used SourceSafe. That I would not recommend.

Seems to me that this (biased?) sample of opinions tends to agree with my feeling that every student should learn how to work with a revision control system. So I for my part feel "vindicated" as it were.

If you're a student in one of my project courses and you hate the idea of learning a revision control system, re-read these comments from industry professionals a few times. Then ask yourself if you'd rather be able to say, truthfully, that you learned something like git or if you'd rather apologize that you didn't. Your call, but I am almost certain that your answer will have an effect on your chances of getting hired.

(I'd like to thank everybody who answered my question back when I posted it on Facebook. You're the best!)

Sunday, February 21, 2016

What to do with a new disk?

Today two replacement SATA disks arrived from my favorite supplier. Reason enough to briefly summarize what I do when I get fresh disks: Maybe someone else can learn from the DOA mistakes of my youth, when I trusted that a new disk would just work, only to find that when I needed it, all it would do was "click click click" and that was that.

If you go through more disks than the average person, for example because you run a bunch of RAID arrays, I would first recommend that you get yourself a suitable docking station. Here's what I use:

UNITEK Dual Bay USB Docking Station

I got mine from newegg.com of course. There are plenty of alternatives, little USB-to-SATA adapters or hotswap bays that mount in your machine's case, but none of those beat a decent docking station for convenience and versatility. (Note that I never use the "clone" feature of that thing, although I hear that it works fine.)

So unpack your new disks and do a quick physical inspection. If your supplier is decent at all, the packaging will be so good that it's extremely unlikely that you'll get something that's mechanically broken on the outside, so a glance is usually enough. Then slap them into your docking station and power it up. Open a terminal and do a quick check with dmesg:

[51407.603023] usb 1-1: new high-speed USB device number 4 using ehci-pci
[51407.718674] usb 1-1: New USB device found, idVendor=152d, idProduct=2551
[51407.718678] usb 1-1: New USB device strings: Mfr=1, Product=11, SerialNumber=3
[51407.718679] usb 1-1: Product: USB Mass Storage
[51407.718681] usb 1-1: Manufacturer: JMicron
[51407.718682] usb 1-1: SerialNumber: 00000000000000
[51407.719191] usb-storage 1-1:1.0: USB Mass Storage device detected
[51407.719353] scsi host6: usb-storage 1-1:1.0
[51408.142938] usbcore: registered new interface driver uas
[51409.223243] scsi 6:0:0:0: Direct-Access     HDD                       0000 PQ: 0 ANSI: 2 CCS
[51409.224849] scsi 6:0:0:1: Direct-Access     HDD                       0000 PQ: 0 ANSI: 2 CCS
[51409.225220] sd 6:0:0:0: Attached scsi generic sg6 type 0
[51409.225355] sd 6:0:0:1: Attached scsi generic sg7 type 0
[51409.229108] sd 6:0:0:0: [sdf] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[51409.229476] sd 6:0:0:1: [sdg] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[51409.230360] sd 6:0:0:0: [sdf] Write Protect is off
[51409.230365] sd 6:0:0:0: [sdf] Mode Sense: 28 00 00 00
[51409.231358] sd 6:0:0:1: [sdg] Write Protect is off
[51409.231363] sd 6:0:0:1: [sdg] Mode Sense: 28 00 00 00
[51409.232350] sd 6:0:0:0: [sdf] No Caching mode page found
[51409.232355] sd 6:0:0:0: [sdf] Assuming drive cache: write through
[51409.233613] sd 6:0:0:1: [sdg] No Caching mode page found
[51409.233616] sd 6:0:0:1: [sdg] Assuming drive cache: write through
[51409.286473] sd 6:0:0:0: [sdf] Attached SCSI disk
[51409.287473] sd 6:0:0:1: [sdg] Attached SCSI disk

Alright, looks like both disks are there, having been recognized when the docking station powered up. Good! Now go ahead and check the details with smartctl:

# smartctl -i /dev/sdf -d sat
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.1.12-gentoo] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.14 (AF)
Device Model:     ST1000DM003-1SB10C
Serial Number:    Z9A0GYZ0
LU WWN Device Id: 5 000c50 08774c950
Firmware Version: CC43
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sat Feb 20 17:34:31 2016 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
# smartctl -i /dev/sdg -d sat
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.1.12-gentoo] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Blue
Device Model:     WDC WD10EZEX-00WN4A0
Serial Number:    WD-WMC6Y0F4UPT5
LU WWN Device Id: 5 0014ee 0aec80cac
Firmware Version: 01.01A01
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sat Feb 20 17:34:57 2016 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

Good! Notice that I had to use the "-d sat" option to tell smartctl that there's really a SATA drive hiding behind all the USB stuff. (Took me a while to realize that I can do that, I used to think that SMART just doesn't work at all over USB.)

What you want to be looking for is the "SMART support is:" line. It's almost universally true today that SMART will be enabled by default, unlike back in 2002. But it's still good to check. In case it's not enabled, enable it. In case your disk doesn't support SMART at all, well, why did you order it? To enable SMART you'd say something like

# smartctl -s on /dev/sdf -d sat

but again, hopefully you won't have to. Alright, after all this prep work, we finally get to the point of all this: You want to run the basic SMART tests that all modern drives support. Note that especially the long test can take a really long time, so do this when you're sure you won't need the docking station for something else. First run the short tests:

# smartctl -t short /dev/sdf -d sat
# smartctl -t short /dev/sdg -d sat

Yes, you can easily run these in parallel because the disk is doing its own testing, your machine only told it to get going. For a 1 TB disk, the short test takes about a minute, but if you're impatient, you can check on the progress of the test as follows:

# smartctl -a /dev/sdf -d sat
...
=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.14 (AF)
...

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
...

Self-test execution status:      ( 246)    Self-test routine in progress... 60% of test remaining.
...

There's a lot more output than that, I just put "..." instead to keep things simple. (Actually you can get even more output with -x instead of -a if you really want.) After waiting for your minute, you can check on the outcome of the test with the same command. Toward the bottom of the output you'll hopefully find a line like the following:

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%         0         -

This indicates that the short test succeeded. Check the other disk as well, then get ready for the long test:

# smartctl -t long /dev/sdf -d sat
# smartctl -t long /dev/sdg -d sat

Same procedure as before, except that this time you can expect to wait for about two hours for a 1 TB disk. Hopefully the long test also works out just fine.

And there you have it, the minimum amount of testing I do on replacement disks these days before I put them on the shelf as I wait for a RAID array to fail. Of course if you have plans to encrypt the data on these disks you can do more "testing" by filling them up with random data now before you shelve them away.

Wednesday, February 3, 2016

Tired of Bernie Detractors

The recent uptick in so-called "liberal" commentators telling people who want to vote for Bernie Sanders why they shouldn't and why Hillary Clinton is the better candidate has me moderately pissed-off. It's true, you can never change anyone's mind on the Internet, but it's been so long since my last political post on this blog that I figured I should take a shot.

One of the first things you'll hear critics say is that Bernie stands for "wide-eyed idealists" which is a nice way of saying "idiot children who have no clue" in what passes for political discourse here in the US. Hillary of course stands for the "hard-eyed realists" which is to say "smart grown-ups who know the system" or whatever other "serious" thing. And heck, it may well be true that Bernie is an idealist and Hillary is not. What you should question, however, is if the valuation implied in most "think pieces" on this topic is correct: Is it actually true that a "realist" president is better for the country than an "idealist" president?

Think, for example, about how you haggle at a flea market or a garage sale. If the seller wants $10 for their lamp and you want to pay $5 for it, do you offer $9? Of course not! You offer $2 or, if you're feeling nice, maybe $3. You can't offer $1 because you know for sure that the seller will shrug their shoulders and wave you on. But offering $9 right away is actually an insane proposition, isn't it? Would you really "haggle" like that?

Yet this is what so-called "experts" bring up as a "plus" for Hillary: Since she's "realist enough" to offer $9, she might actually be able to get that for us. Big surprise there! If you propose to replace, say, really bad inequality with ever-so-slightly less bad inequality, you're going to find quite a few multi-billionaires who'll say "Alright, maybe I'll give up a few million here or there, at least I'll get to keep my head on my shoulders." and then you can give a State of the Union address that sells this pile of garbage as a huge success.

But remember what you wanted to pay: $5! Bernie might actually be "idealist enough" to offer $3 and then achieve $6 after some back and forth. Perfect? Probably not. But without someone actually haggling you'll just end up with $9. Or maybe $11: After all, in politics, unlike at a semi-sane garage sale, the "other side" has already raised their expectation to $13 between the time they told you $10 and the time they hear your $9 offer. That would make it even more important to have an "idealist" in office, wouldn't it?

Another thing you'll hear bandied about a lot is that Bernie will simply be blocked by Congress regarding every single proposal his administration might make. Once again this might be true, maybe Congress really would be more adversarial toward Bernie than toward Hillary. But again you should carefully think about the implications of this so-called "argument": If it's true that President Bernie can be blocked by Congress, isn't it also true that President Hillary can be blocked? Or President Donald?

Note that it's irrelevant whether President X will actually be blocked. That's simply one for the history books because you can only know it in retrospect. Indeed, anyone who is "predicting" it as a certainty for Bernie now is simply being dishonest. Politics is a dynamic process after all, and Bernie might be able to play his cards in such a way that Congress will eventually go along. (Maybe by starting to haggle at $3 instead of $9?)

Furthermore most of the "he'll be blocked" folks seem to completely forget that the President is not powerless in a fight with Congress: The veto allows a president to simply stop legislation coming out of Congress. That's not always easy, but it's clearly the case that a "mean Bernie" might be able to veto enough stuff to really make the legislature sweat. Don't forget that representatives and senators come from certain states, and if their states don't get something because of a presidential veto, the people responsible might find their incumbency in grave danger. True, there might also be some "collateral damage" because "normal people" in those states might not get something they really need. But if the goal is to fix the whole country and not just a state here or a state there, well, that might be a sacrifice some of us are willing to make.

And of course Congress is not the only thing that can "block" a president. The "virtual senate" of bankers and traders around the world can achieve much the same simply by shifting capital around in such a way as to hurt a country until some policy (whether proposed by President or Congress) is "off the table" again. That's in fact in large part how "neoliberal austerity" works in places that are not officially beholden to Washington's machinery of the World Bank and the International Monetary Fund. (Hillary actually experienced that first-hand during Bill Clinton's first term, when they tried to pass a semi-decent healthcare bill but were promptly shut down by Wall Street.)

As Iowa showed yesterday, Bernie has some real momentum. He also has better policies (for the 99% that is) regarding almost everything. Hillary will be "more of the same" just like her husband was (for the 1% that is). I cannot for the life of me imagine how the country could be "worse off" with Bernie than with any of the alternatives. But I can see plenty of things that could be better with Bernie. I am a card-carrying skeptic and cynic of course: It might turn out that Bernie is also "more of the same" in the end, who knows? What we know for sure is that with Donald or Hillary we're guaranteed that nothing gets better. With Bernie we at least have a shot. I am willing to take that chance, and I am tired of people who smear Bernie with "arguments" that are nothing of the sort.

Wednesday, January 13, 2016

Basement Delving: Teenage Mutant Ninja Turtles

I have no idea why I bought all of these except that they were on a superb sale back in the day. I did like the old Palladium RPG as well as Rifts to some extent, but I never really was into TMNT regardless of context. Anyone out there in the greater vicinity of Munich who's interested in getting a hold of these?

Simple Hit Locations

I have a house rule that says "once a character is below 0 hit points there's a chance for a permanent injury/consequence" but I always winged it (or just ignored my own rule) in the heat of things. Now I have decided that I want a "formally written-down rule" to resolve these injuries, which at the very least requires a way to determine where the injury is located. Sure, in D&D we usually avoid this level of detail, but since this mechanic is not used during combat (as a critical hit chart would be) I think it's okay.

In any case, what is a useful distribution of hit locations? As per usual I started by looking at what others had done before. I played RuneQuest as well as Warhammer back in the 1980s, so I looked at those as well as Legend, a recent reincarnation of RuneQuest. I also acquired Flashing Blades and Aftermath! from Fantasy Games Unlimited last year; while Flashing Blades is quite sane and included here, I skipped the crazy that is Aftermath. I am sure there are many more systems out there, but I hope these four are enough for a first approximation.

As opposed to Aftermath, all the systems I discuss here are fairly coarse-grained. Instead of giving you "shoulder" and "upper arm" and "elbow" and "lower arm" and "hand" they just give you "arm" and that's it. I happen to think that a lower resolution is preferable, both because it keeps the table concise and because it is much harder to judge how "realistic" the percentages are when there are many fine-grained hit locations.

Among the systems examined, Warhammer is the only one that combines "chest" and "abdomen" into "body" and is therefore even more coarse-grained. Besides those areas, all systems cover "head" and "arms" and "legs"; the only difference is the percentages they assign to each area. All systems except for Flashing Blades distribute hits evenly between the left and right sides of a defender; Flashing Blades assumes that a right-handed defender is more likely to get hit in the right arm than the left arm and vice versa. Only RuneQuest makes a distinction between hit locations for melee combat and missile fire. Finally, all systems except for Warhammer use a d20 to determine hit locations; Warhammer uses a d100 instead, but the ranges still come in 5% increments, so we can map it back onto a d20 roll as well. We end up with the following:

Location   | Flashing Blades (Right) | Legend (Mongoose) | RuneQuest (Melee) | RuneQuest (Missile) | Warhammer (1st edition)
Head       | 10% (1-2)               | 10% (19-20)       | 10% (19-20)       | 5% (20)             | 15% (1-3)
Right Arm  | 15% (3-5)               | 15% (13-15)       | 15% (13-15)       | 10% (16-17)         | 20% (4-7)
Left Arm   | 10% (11-12)             | 15% (16-18)       | 15% (16-18)       | 10% (18-19)         | 20% (8-11)
Abdomen    | 20% (13-16)             | 15% (7-9)         | 15% (9-11)        | 20% (7-10)          | 25% (12-16)
Chest      | 25% (6-10)              | 15% (10-12)       | 5% (12)           | 25% (11-15)         | 25% (12-16)
Right Leg  | 10% (17-18)             | 15% (1-3)         | 20% (1-4)         | 15% (1-3)           | 10% (17-18)
Left Leg   | 10% (19-20)             | 15% (4-6)         | 20% (5-8)         | 15% (4-6)           | 10% (19-20)

Sorry, not exactly easy to read. Let's look at what Warhammer calls "body" first, so "abdomen" and "chest" in terms of the table. The systems vary widely in what they consider appropriate: Flashing Blades as well as RuneQuest (for missiles) assign 45% of hits here, Legend 30%, Warhammer 25%, and RuneQuest (for melee) only 20%. Among the systems that distinguish "abdomen" from "chest" there is little agreement on what's more likely to get hit either, except (once again) in the case of Flashing Blades and RuneQuest (for missiles). I find this rather disturbing; I would have thought that we'd see at least some consistency here.

There is a little less variation when it comes to the extremities; alas, it's still quite significant. Flashing Blades and Warhammer consider leg hits rather unlikely with a 20% total, RuneQuest (for melee) thinks they are quite likely with a 40% total. On the other hand Warhammer tells us that arm hits are very common with 40% when most of the other systems assign 25%-30% instead. In general Warhammer seems "top heavy," Flashing Blades and RuneQuest (for missiles) seem "body focused," and RuneQuest (for melee) is a little "bottom heavy" as it were. Legend spreads things out most evenly.
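
Since I had the numbers typed in anyway, here is a quick Python sketch that totals up head, arms, body, and legs for each system; the percentages are straight from the table above, everything else is purely illustrative scaffolding, but it makes it easy to double-check the claims in the last two paragraphs.

    # Hit location percentages as listed in the comparison table above.
    # Warhammer only has a combined "body" location instead of abdomen/chest.
    SYSTEMS = {
        "Flashing Blades (Right)": {"head": 10, "right arm": 15, "left arm": 10,
                                    "abdomen": 20, "chest": 25, "right leg": 10, "left leg": 10},
        "Legend (Mongoose)":       {"head": 10, "right arm": 15, "left arm": 15,
                                    "abdomen": 15, "chest": 15, "right leg": 15, "left leg": 15},
        "RuneQuest (Melee)":       {"head": 10, "right arm": 15, "left arm": 15,
                                    "abdomen": 15, "chest": 5, "right leg": 20, "left leg": 20},
        "RuneQuest (Missile)":     {"head": 5, "right arm": 10, "left arm": 10,
                                    "abdomen": 20, "chest": 25, "right leg": 15, "left leg": 15},
        "Warhammer (1st edition)": {"head": 15, "right arm": 20, "left arm": 20,
                                    "body": 25, "right leg": 10, "left leg": 10},
    }

    for name, locs in SYSTEMS.items():
        head = locs.get("head", 0)
        arms = locs.get("right arm", 0) + locs.get("left arm", 0)
        body = locs.get("body", 0) + locs.get("abdomen", 0) + locs.get("chest", 0)
        legs = locs.get("right leg", 0) + locs.get("left leg", 0)
        assert head + arms + body + legs == 100, name  # every system should add up to 100%
        print(f"{name}: head {head}%, arms {arms}%, body {body}%, legs {legs}%")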

Let's not forget what we're looking for though: I want a roll that happens after combat, when a character is already "out" because they reached 0 hit points. I don't want to assume that the "last blow" is the one that actually leads to that "grievous injury" as it were; it could have been any of the preceding hits as well. So I don't really need those little differences that Flashing Blades and RuneQuest support regarding handedness and melee versus missile. Indeed, since my consideration is cumulative over the last couple of hits, I should probably "aim" for a more evenly distributed system. Thus Legend should provide a useful starting point.

Once we settle on an even distribution, we can reconsider the die roll. Going for Warhammer's approach ("body" instead of "abdomen" and "chest"), a d6 would suffice; otherwise we need a d8 to cover the seven possible locations. Of course that leaves a leftover result we need to find a use for. Here is the first suggestion:

Location   | d6
Head       | 16.7% (1)
Right Arm  | 16.7% (2)
Left Arm   | 16.7% (3)
Body       | 16.7% (4)
Right Leg  | 16.7% (5)
Left Leg   | 16.7% (6)

Alright, not too bad. Of course I can already hear many of you complain that "head" is too frequent compared to "body" here, but that's simply the price we pay if we go for an even distribution. Here's the straightforward alternative:

Location   | d8
Head       | 12.5% (1)
Right Arm  | 12.5% (2)
Left Arm   | 12.5% (3)
Abdomen    | 12.5% (4)
Chest      | 12.5% (5)
Right Leg  | 12.5% (6)
Left Leg   | 12.5% (7)
?          | 12.5% (8)

What to do with the "leftover" result? Thinking ahead a bit, the location of an injury is actually not enough; we also need a measure of "severity" for the entire thing to work. So one option would be to assume that a "regular hit" just leaves a scar at the indicated location whereas rolling an 8 would indicate a more serious problem (re-roll to find the location). Another option would be to add another "location" of sorts, maybe an "internal" or "systemic" injury?
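
To make that first option a bit more concrete, here is a minimal Python sketch of the d8 roll; treating an 8 as "serious, re-roll for the location" is just my reading of the idea above, not a finished rule.

    import random

    # Results 1-7 on the d8 map to these locations; an 8 is handled below.
    D8_LOCATIONS = ["head", "right arm", "left arm", "abdomen",
                    "chest", "right leg", "left leg"]

    def injury_location_d8():
        """Roll a d8: 1-7 is a regular injury (just a scar) at that location,
        an 8 means something more serious, with the location re-rolled."""
        roll = random.randint(1, 8)
        if roll <= 7:
            return D8_LOCATIONS[roll - 1], "scar"
        # On an 8 the injury is serious; re-roll for the location, ignoring further 8s.
        return D8_LOCATIONS[random.randint(1, 7) - 1], "serious"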

Now I have to admit that I already have a rough idea about the "severity" part: I'll probably make it some version of the 2d6 reaction roll eventually. With that in mind, the d8 version above seems less attractive: a different die to roll and that "leftover" category. So I have a feeling I'll end up with the d6 table, using the Warhammer-style hit locations with a Legend-style distribution. (That's why I ended up calling this post "Simple Hit Locations" after all.) Opinions?
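
And in case it helps to picture the version I'm leaning toward, here's a rough Python sketch of the whole post-combat check, d6 for the location and 2d6 for the severity; the severity bands are placeholders I made up for illustration, not part of any rule above.

    import random

    D6_LOCATIONS = ["head", "right arm", "left arm", "body", "right leg", "left leg"]

    def permanent_injury():
        """Post-combat check for a character who dropped below 0 hit points:
        d6 for the location (Warhammer-style areas, Legend-style even spread),
        2d6 for severity in the style of the reaction roll. The severity bands
        below are made-up placeholders, not part of any rule in the post."""
        location = D6_LOCATIONS[random.randint(1, 6) - 1]
        severity = random.randint(1, 6) + random.randint(1, 6)
        if severity <= 5:
            effect = "lasting injury"     # placeholder: low roll, bad outcome
        elif severity <= 8:
            effect = "noticeable scar"    # placeholder: middling outcome
        else:
            effect = "no permanent harm"  # placeholder: high roll, lucky escape
        return location, severity, effect

    # Example: permanent_injury() might return ("left leg", 7, "noticeable scar").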