Sunday, July 23, 2017

Protecting Weaker Party Members

I am currently running a B/X game that started with the sentence "We've all decided to play magic-users!" from one of the players. Luckily my adopted variant of "3d6 in order" prevented that outcome and they're a pretty typical "murderhobo" party now: two magic-users, two fighters, one thief, and one blood-thirsty halfling with suspenders but no shirt. (There was a cleric, but that's another story.)

In any case, one of the players was "completely new" and of course I recommended that she play a fighter. I sold this mostly with "it's the easiest class to run" but also with "all those robes-and-silly-hats people will need protection" which is what got me thinking:

There is no "rule" in B/X that would allow a fighter to protect a magic-user or anyone else for that matter.

Seemed like something important to offer at the time, but looking through the many B/X house rule documents out there I couldn't find the problem addressed by anyone. Even Delta, who has probably forgotten more D&D than I will ever know, left me out in the cold on this topic (aside from mentioning the need to have lots of fighters to protect magic-users in mass battles).

I wanted a simple mechanic that would make the protected character "safer" in some way (benefit) while also "exposing" the protecting fighter more (cost). I quickly settled on two "ground rules" for this:

  • The fighter must be "close" (within 5' say) to the character they protect.
  • The fighter must be "aware" of an attack in order to protect against it.

I guess you could say that I am treating the fighter as an "intelligent shield" of sorts? That approach seems fair to me because I don't want a fighter to simply throw themselves over the character they are trying to protect. I very much want "protector" and "protectee" to still be "in combat" instead of changing the focus to a Secret-Service-style "Get him out of here!" and nothing else.

The simplest thing I could come up with was to simply let the fighter take the damage. So as a rule, I'd phrase it something like this:

Fighters can choose to protect one character within 5' of them. If the protected character gets hit by an attack the fighter is aware of, the fighter can choose to take the damage instead.

Not too shabby in terms of simplicity, is it? But as always the "devil" is in the details. First of all, this confers almost complete immunity on the protected character. But the goal was to make them "safer" only, not "safe". In other words, I still want the magic-user to worry about maybe having their spell interrupted. Also, we have reduced the fighter to an actual meat shield ("bag of hit points") and nothing else: Regardless of how good their AC is, they now get skewered because of the (presumably) much worse magic-user AC. That's not fun at all for whoever is playing the fighter. Finally, if we have multiple fighters protecting one character, the "immunity" could actually be complete and it could go on for a long time indeed. This may work great for Delta's mass combat scenario, but on the "party in a dungeon" scale I wouldn't want to deal with it.

So instead of letting the fighter take all the damage, let's try to do something about getting hit in the first place. Here's what I came up with next:

Fighters can choose to protect one character within 5' of them. Attacks against the protected character that the fighter is aware of suffer a -8 penalty. Attacks against the fighter, however, gain a +2 bonus.

The numbers are subject to debate of course, but this seemed fair to me. Presumably the kobold archers trying to hit the magic-user would notice that the fighter brushed away all their arrows, so next round they'd redirect their fire. But as so often when you design a rule with fixed penalties/bonuses, there's an exploit: Two or more fighters protecting each other would impose a -6 penalty against most attacks, clearly not the application that was intended. Except for that terrible flaw, however, we would get the desired effects: The protected character still has to worry about getting hit, and the fighter now makes for an easier target because they are "out there" trying to distract attackers.

What finally made me think of a better solution is this: With the fixed penalty, a naked fighter could protect a character in plate and shield. That makes no sense! What should happen instead is something like this:

Fighters can choose to protect one character within 5' of them. Attacks against the protected character that the fighter is aware of must hit the fighter's armor class instead (provided that armor class is better). Any damage is still suffered by the protected character. Attacks against the fighter also gain a +2 bonus.

I am reasonably happy with this rule. It's a little more complex, but not overly so. It does allow protecting weaker party members if they have a worse armor class, but it doesn't make them (almost) completely safe. It also has no obvious exploit that I can think of, mostly by virtue of letting the better armor class prevail. I'd also say that a spell like Protection from Normal Missiles cannot be used with this rule as it doesn't affect armor class, but your mileage may vary.
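
Just to make the final rule concrete, here's a little sketch of the resolution logic in Go. I'm expressing the to-hit roll with Delta's Target20 mechanic (d20 + attack bonus + descending AC, hit on 20 or more) purely because it's compact; the function names are mine, not anything official:

// effectiveAC returns the descending AC an attack against the
// protected character must hit: the fighter's AC if it is better
// (i.e. lower), otherwise the protected character's own AC.
func effectiveAC(fighterAC, protectedAC int) int {
	if fighterAC < protectedAC {
		return fighterAC
	}
	return protectedAC
}

// hits resolves one attack via Target20: d20 roll + attack bonus +
// descending defender AC, with 20 or more indicating a hit.
func hits(d20, attackBonus, defenderAC int) bool {
	return d20+attackBonus+defenderAC >= 20
}

An attack on the protected character would then be hits(roll, bonus, effectiveAC(fighterAC, targetAC)), with any damage still applied to the protected character; an attack on the fighter would be hits(roll, bonus+2, fighterAC).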

I've added the last version as a house rule to said B/X game, but after two sessions so far it has not seen any use. Admittedly the characters were not really in situations where it made much sense to try, but here's to hoping someone will attempt it sometime soon. In the meantime, I'd love to hear what everybody out there thinks of what I did here.

BTW, I am aware of the 5th edition Protection Fighting Style thing but it's just too horribly complicated and too far from B/X to try to convert. It's interesting however that the 5e folks also thought that the "big fighter protecting small magic-user" trope needed "a bit more rule" behind it.

Thursday, December 29, 2016

Editions of Armor Class Part 1: No Armor

A few weeks ago one of Zenopus' excellent posts made me think about "armor class" in D&D again. On Google+ I even commented "I like that booklet too, but I really couldn't care less about AD&D armor classes. The original system just makes so much more sense. I feel a blog post coming on..." Sadly life happened and I couldn't find the time to write about the crazy in my head then. But now that I am safely tucked away in Germany for my year-end vacation, I figured I'd give it a quick shot.

Disclaimer: In true grognard-style I shall only consider "descending armor class" in the following. I don't care about "ascending armor class" systems, especially since the usual "selling point" of those doesn't apply once you use Delta's Target20 mechanic.

Let's begin with the classic AC 9 versus AC 10 debate: Should an average character who is not wearing any armor be AC 9 as OD&D, B/X, and BECMI profess, or AC 10 as AD&D wants us to believe? For me, any answer that doesn't immediately consult the combat tables is of a purely religious nature. It really doesn't matter whether we start at AC 9 or AC 10, what matters is whether there's a difference in running a combat. Here I prefer the B/X (and to some extent BECMI) story:

Two average, unarmored, untrained, "normal humans" should have a 50% chance per round to hit (and most likely kill) each other.

I don't know about you, but that seems perfectly reasonable to me. It certainly shouldn't be more than 50% because that would imply skill where (by definition) none should exist. And if it were less than 50%, things would just drag out longer.

If you consult the B/X Basic Set, page B27, you'll see that a "normal man" indeed needs an 11+ to hit AC 9, a chance of 50% on a d20 roll. Perfect! As for the "most likely kill" part: On page B25 we find that a successful attack does 1-6 points of damage (average 3.5) while page B40 explains that "normal humans" have 1-4 hit points (average 2.5). Death incarnate!
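
If you like to double-check such percentages, the underlying d20 math is a one-liner; here it is as a Go function (my choice of language, nothing deeper than that):

// chance returns the probability of rolling the required number or
// higher on a d20: chance(11) yields 0.50, chance(10) yields 0.55.
func chance(required int) float64 {
	return float64(21-required) / 20.0
}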

In the BECMI Expert Set (but not the Basic Set, go figure!) we find the same "11+ to hit AC 9" for "normal man" on page 29; alas the BECMI Basic Set says that "normal humans" have 1-8 hit points (average 4.5) so things are a little less deadly (never mind the other problems this change causes, sigh).

What about OD&D? On page 19 of "Men and Magic" we find (perhaps surprisingly?) that "normal men equal 1st level fighters" which means they only need a 10+ to hit AC 9, a chance of 55% on a d20 roll. Granted, it's only a 5% difference, but that seems wrong to me. Why would an untrained combatant have the same skill as a trained one?

AD&D does away with "normal men" for the most part, replacing them with the notion of "0 level" characters (for which only humans and halflings qualify?). AD&D also recalibrates to AC 10 for "no armor" of course. Well, at least starting in the Player's Handbook it does; the Monster Manual seems to be written to the original AC 9 for "no armor" instead. But at least Gary manages to stay somewhat true to my B/X story: On page 74 of the Dungeon Master's Guide we learn that "0 level" characters need 11+ to hit AC 10, a chance of 50% on a d20 roll again. Of course average hit points are "off" as in BECMI, see page 88 of the Dungeon Master's Guide.

So then... What should "no armor" be, AC 9 or AC 10? As I said before, it doesn't matter! B/X (and to a lesser degree BECMI and even AD&D) get it "right" as far as I am concerned, only OD&D has it "wrong" since it doesn't distinguish trained from untrained combatants.

Of course there is one difference after all: I strongly prefer single-digit AC values, and so in the final analysis, AD&D is out. But I have to admit that this preference is mostly "religious" as well, not truly "technical" as it were. True, Target20 is easier with single-digit AC values, but since that's not a standard mechanic I can't really use it to "rationalize my irrationality" too much. Does "neater table layout with single digits" count?

Friday, October 14, 2016

The S.M.A.R.T. Overflow

Before today, I didn't realize that S.M.A.R.T. has overflow issues. I should say that I spent most of yesterday replacing three disks in my home machine's RAID-10, so I was constantly using smartctl to double-check stuff. So when I got to work today, I poked around the disks in my server. And I noticed this:

...
SMART Self-test log structure revision number 1
Num  Test_Description ... LifeTime(hours) ...
# 1  Short offline    ... 2484            ...
# 2  Short offline    ... 2469            ...
# 3  Short offline    ... 2445            ...
...

This made absolutely no sense, because I knew that most of the disks in there were certainly older than a few months. After some digging, I came to realize what the problem was:

...
SMART Self-test log structure revision number 1
Num  Test_Description ... LifeTime(hours) ...
# 1  Short offline    ... 1276            ...
# 2  Short offline    ... 62136           ...
# 3  Short offline    ... 61776           ...
...

The actual value for Power_On_Hours is 66812 for that disk. Let's subtract 1276 from that and what do we find? 65536 of course. So the lifetime recorded with each self-test is apparently an unsigned 16-bit value, whereas the total hours recorded for the disk itself is stored as something bigger. Here's to hoping that people who write scripts telling them which disks to swap are aware of this...
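
If your scripts want to compensate, a wrap-aware comparison is simple enough. Here's a sketch in Go (the function is mine, obviously); note the assumption that the self-test in question ran within the last 65536 power-on hours, because after a full wrap the logged value becomes ambiguous:

// hoursSinceTest recovers the age of a logged self-test. The log stores
// the power-on hours at test time truncated to 16 bits, so subtracting
// modulo 65536 undoes a (single) wrap-around.
func hoursSinceTest(powerOnHours, loggedHours uint32) uint32 {
	return (powerOnHours - loggedHours) % 65536
}

For the disk above, hoursSinceTest(66812, 1276) comes out to 0, meaning the most recent short test ran within the current power-on hour.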

Wednesday, June 29, 2016

Choice of Class: Enforce or Encourage?

For as long as I can remember, I've been a fan of "minimum ability score"-type requirements for classes. See, for example, my "addendum" at the bottom of this post or, more recently, these posts.

It just made sense to me that fighters should have (at least) average strength and that wizards should have (at least) average intelligence. In case of the wizard, I still think it makes perfect sense even from an "in game" perspective: Why would that archmage waste his or her time training a moron? In case of the fighter, well, maybe strength is not so important: After all, one purpose of having many soldiers (whether weak or strong) on the field is just so that more can die before whatever lord commands them actually loses a battle. (The term "cannon fodder" exists for a reason.)

But more importantly, I just couldn't see why anyone would want to play a weak fighter or a dumb wizard. So the idea of using "minimum scores" to control what classes a player can pick seemed perfectly alright, even if it meant that there would be a small percentage of characters who can't qualify for any class at all: Just re-roll those, problem solved.

However, it recently dawned on me that I don't speak for everybody. (Shocker!) What if there is a player who does enjoy that weak fighter? Maybe that fighter, while physically not "up to snuff" as it were (low strength), is a master strategist and leader (high intelligence and charisma)? True, the player could choose a wizard instead, but that would make for a very different type of leader, maybe not what they were going for.

(Things get worse if you allow players to multi-class. Now the ability scores would have to be good enough for two classes, something that's not particularly likely with 3d6 even if the minimum requirements are low-ish.)

Instead of enforcing the set of classes that a given character can choose from, it may be preferable to encourage certain choices (and to discourage others of course). The "good news" is that we already have a mechanic that does just that, and it has existed in all relevant versions of D&D since 1974: Experience point adjustments for prime requisites. Here's what the B/X rules say:

The ability most important to a class is called the prime requisite for that class. The higher the prime requisite score, the more successful that character will be in that class.
Basic Rulebook, Page B6

The details are not horribly important, and you probably know them anyway, so let's just summarize that the score in a prime requisite results in a bonus (or penalty) of up to +10% (or -20%) on the experience points earned by the character. Now I should first point out that I used to hate these adjustments. The math is not too horribly bad, but we're still left with a bag of questions:

  • Exactly what ability score counts for the experience adjustment? The initial score when the character was created? The current score which might be adjusted by a magic item or a curse?
  • Does a change in a prime requisite ability score retroactively affect the current experience point total? If so, raw experience points and bonus experience points should be tracked separately.
  • What is the "in game" justification for the bonus (or penalty) on experience points? The D&D experience system is already on pretty weak grounds (exactly how does getting richer make someone better at picking locks again?) and it seems these adjustments make even less sense in that light.

Note, however, that minimum ability score requirements actually raise many of the same questions: If you're a fighter with strength 9 but get cursed to have strength 6, are you still a fighter? If you rolled a character with strength 4 but the DM grants you Gauntlets of Ogre Power at character creation, can you pick the fighter class? And what happens when you lose those gauntlets? Also, regardless of whether we consider XP = GP good or bad, that's still the default in old-school D&D, so why worry if these adjustments make the system a little more insane? It's already nuts and most people don't really mind.

What I came away with after thinking about this for the past week or so was mildly surprising to me:

After 20+ years of disliking the prime requisite experience adjustment rule, I now think it's a stroke of genius!

It allows the player to make more choices for his or her character, and that's never a bad thing (well, almost never, let's hope newbies will have a supportive DM to cut down the number of choices a little). It encourages certain choices, but it doesn't force anything. In fact, coming up with a background that explains why that dunce with intelligence 6 was able to get training as a wizard actually sounds like a lot of fun.

In my particular house rules, things get even better. Say someone wants to multi-class as a fighter/priest to approximate a paladin-like character. If they have average strength and wisdom, they simply progress on the slower XP table for characters with two classes. If they have excellent strength and wisdom for a +10% bonus on XP each, they actually progress with +20% on that harder/slower table. If they are weak (-10% penalty due to low strength) but pretty wise (+10% bonus due to high wisdom) things cancel out: It's still a viable paladin-like character.
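
For the curious, here's the whole scheme as a quick Go sketch; the adjustment table is the one from the Basic Rulebook, while summing the adjustments for multiple classes is just my house rule as described above:

// xpAdjust returns the B/X experience adjustment (in percent) for a
// single prime requisite score.
func xpAdjust(score int) int {
	switch {
	case score <= 5:
		return -20
	case score <= 8:
		return -10
	case score <= 12:
		return 0
	case score <= 15:
		return 5
	default:
		return 10
	}
}

// multiAdjust sums the per-class adjustments, so a fighter/priest
// with strength 7 (-10%) and wisdom 16 (+10%) ends up at 0%.
func multiAdjust(scores ...int) int {
	total := 0
	for _, s := range scores {
		total += xpAdjust(s)
	}
	return total
}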

So I am off to rewriting my house rules yet again: No more minimum ability score requirements for classes! (Hmm, should I add some for races now?)

Friday, April 22, 2016

Tracking Positions in Go: Why Composition Rocks

In this post I discuss the genesis of the streampos package for Go that I recently posted on github. I don't normally write about code I post, but in this case I learned something cute that I wanted to share. Maybe you'll find it enjoyable too?

The story starts with me writing some Go code to process XML files. Of course I used the encoding/xml package from the Go standard library to do this. A few hours into the job I needed to generate error messages when something is not quite right in the XML file. (If you've dealt with large-ish XML files that get edited manually, you probably know that it's quite easy to make the occasional mistake that syntax highlighting alone will not protect you from.) In order for those error messages to be useful, they should tell the user where in the XML file the problem was detected. And that's when I ran into trouble: There's no straightforward way to get, say, line numbers out of encoding/xml!

Sure, their code tracks line numbers in support of their error messages (check SyntaxError for example) but if you have your own errors that go beyond what the standard library checks, you're out of luck. Well, not completely out of luck. If you're using Decoder to do stream-based XML processing, you can get something moderately useful: the InputOffset method will give you the current offset in bytes since the beginning of the stream.

What do you have to do to turn that into the kind of error messages users expect, that is, error messages in terms of line (and maybe column) numbers? First you somehow have to get your hands on the raw input stream. Then you look for newlines and build a data structure that allows you to map a range of offsets in the stream into a line number. With a little more code on top, you even get column numbers out if you want them. Sounds like fun, but just how should we do it?

To use Decoder you have to give it a Reader to grab input from. This is promising because Go very much prides itself on the power that simple interfaces like Reader and Writer provide. Indeed, the pattern of wrapping one Reader inside another (or one Writer inside another) is fundamental in much of the standard I/O library. But we can also find those interfaces in "unrelated" parts of the library: The Hash interface, for example, is a Writer that computes hashes over the data written to it.

So the Decoder wants a Reader, but thanks to interfaces any Reader will do. It's not a big leap to think "Hey, I can hack my own Reader that tracks line numbers, and I'll pass that one to Decoder instead of the original Reader!" That's indeed what I considered doing for a few minutes. Luckily I then realized that by hacking a Reader I am actually making myself more problems than I had to begin with.

I want to solve the problem of building a mapping from offsets to line numbers. However, as a Reader, I also have to solve the problem of providing data to whoever is calling me. So whatever the original problem was, as soon as I decide to solve it inside a Reader, I immediately get a second problem. And it's not an entirely trivial problem either! For example, I have to consider what the code should do when asked to provide 37 bytes of data but the underlying Reader I am wrapping only gave me 25. (The answer, at least in Go land, is to return the short read. But notice that it's something I had to think about for a little while, so it cost me time.) On a more philosophical level, inserting a Reader makes my code an integral part of the entire XML thing. I never set out to do that! I just wanted a way to collect position information "on the side" without getting in anybody's way.

In the specific case of encoding/xml things actually get even funnier. It turns out that Decoder checks whether the Reader we hand it is actually a ByteReader. If it's not, Decoder chooses to wrap the Reader we hand it again, this time in a bufio.Reader. So either I have to implement a ByteReader myself, or I have to live with the fact that plugging in my own Reader causes another level of indirection to be added, unnecessarily to some extent. (That's yet another problem I don't want to have to deal with!) There really should be a better way.

And there is: Instead of hacking a Reader, just hack a Writer! I can almost see you shaking your head at this point. "If the Decoder wants a Reader, what good is it going to do you to hack a Writer?" I'll get to that in a second, first let's focus on what being a Writer instead of a Reader buys us.

The most important thing is that as a Writer, we can decide to be "the sink" where data disappears. That is, after all, exactly what the Hash interface does: It's a Writer that turns a stream into a hash value and nothing else, the stream itself disappears in the process. What's good enough for Hash is good enough for us: We can be a Writer that turns a stream into a data structure that maps offsets to line numbers. Note that a Reader doesn't have this luxury. Not ever. True, there could be Readers that are "the source" where data appears out of thin air, but there are no (sensible) Readers that can be "the sink" as described above.

A secondary effect of "being the sink" is that we don't have to worry about dealing with an "underlying Writer" that we wrap. (As a Reader, we'd have to deal with "both ends" as it were, at least in our scenario.) Also, just like in the case of Hash, any write we deal with cannot actually fail. (Except of course for things like running out of memory, but to a large degree those memory issues are something Go doesn't let us worry about in detail anyway.) This "no failures" property will actually come in handy.

Okay, so those are all nice things that will make our code simpler as long as we hack a Writer and not a Reader. But how the heck are we going to make our Writer "play nice" with the Decoder that after all requires a Reader? Enter a glorious little thing called TeeReader. (No, not TeaReader!) A TeeReader takes two arguments, a Reader r and a Writer w, and returns another Reader t. When we read from t, that request is forwarded to r. But before the data from r gets returned through t, it's also written to w. Problem solved:

lines := &streampos.Writer{}
tee := io.TeeReader(os.Stdin, lines)
dec := xml.NewDecoder(tee)

There's just one small problem with TeeReader: If the write we're doing "on the side" fails, that write error turns into a read error for the client of TeeReader. Of course that client doesn't really know that there's a writer involved anywhere, so things could get confusing. Luckily, as I pointed out above, our Writer for position information never fails, so we cannot possibly generate additional errors for the client.
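
In case you're wondering what the inside of such a Writer might look like, here's a minimal sketch. It's not the actual streampos code, just the general shape: record the offset of every newline, then binary-search those offsets to turn a stream offset into (1-based) line and column numbers:

import "sort"

// Writer records the stream offset of every newline written to it.
type Writer struct {
	offset   int64   // total number of bytes seen so far
	newlines []int64 // offsets of '\n' bytes, in increasing order
}

// Write implements io.Writer; as "the sink" it can never fail.
func (w *Writer) Write(p []byte) (int, error) {
	for i, b := range p {
		if b == '\n' {
			w.newlines = append(w.newlines, w.offset+int64(i))
		}
	}
	w.offset += int64(len(p))
	return len(p), nil
}

// Position maps a stream offset to line and column numbers.
func (w *Writer) Position(offset int64) (line, column int64) {
	i := sort.Search(len(w.newlines), func(i int) bool {
		return w.newlines[i] >= offset
	})
	if i == 0 {
		return 1, offset + 1
	}
	return int64(i) + 1, offset - w.newlines[i-1]
}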

I could end the post here. But I don't want to sweep under the rug that there's a little cheating going on. Where? Well, you see, TeeReader is not a ByteReader either. So regardless of how nice the code in our Writer is, and regardless of how cute the setup above is, we incur the cost of an extra indirection when NewDecoder decides to wrap the TeeReader. What we're doing is shoving the problem back to the standard library. It's possible that TeeReader will eventually grow a ReadByte method at which point the needless wrapping would cease. However, that's not very likely given what TeeReader is designed to do. But note that this concern arises specifically in connection with encoding/xml. There are probably many applications that do not require methods beyond the Reader interface.

Speaking of other applications. In the Go ecosystem, interfaces such as Reader and Writer are extremely prominent. Lots of people write their code to take advantage of them. The nice thing is that streampos.Writer coupled with TeeReader provides a generic way to handle position information for all applications that use a Reader to grab textual data. Of course not all applications do, and not all applications will be able to take full advantage of it. But if you're writing one that does, and if you want to have position information for error messages, well, it's three lines of code as long as you already track offsets. And you have to track something yourself because after all only your application knows what parts of a stream are interesting.

I very much like that Go encourages this kind of reusability by composing small-ish, independently developed pieces. (Actually, that even confirms a few of the claims I made in my 2003 dissertation. Yay me!) The only "trouble" is that there are already a few places in the standard library where similar position tracking code exists: At the very least in text/scanner and in the Go compiler itself. Whether that code could use my little library I don't know for sure, maybe not. But I guess it should be a goal of the standard library to refactor itself on occasion. We'll see if it does...

One last note: I've been teaching a course on compilers since 2001, and since about 2003 I've told students to use byte offsets as their model of positions. I've always sold this by explaining that offsets can be turned into lines and columns later but we don't have to worry about those details in the basic compiler. Strangely enough I never actually wrote the code to perform that transformation, until now that is. So once I teach the course mostly in Go, I can use my own little library. Neat. :-)

Tuesday, April 19, 2016

Terminal Multiplexers: Simplified and Unified

I have a love-hate relationship with terminal multiplexers. I've been using screen for years, but mostly on remote servers, and mostly just to keep something running on logout and later get back to it again. But on my home machine or my laptop, I've avoided terminal multiplexers like the plague, mostly because of their strange (or is "horrid" more appropriate?) user interfaces.

For a really long time now, I've simply used LXTerminal and its tabs as a crutch, but I've recently grown rather tired of that approach. When you're writing (more or less complicated) client/server software, it really pays off to have both ends running side-by-side: switching tabs, even with a quick key combination, gets old fast. Also LXTerminal lacks quite a few features, true-color among them. What I really wanted to use was st, but that lean beast doesn't even have a scrollback buffer (so forget about tabs or a contextual menu).

Wait, so why don't I just use a tiling window manager like dwm and open several terminals? Sadly I've been a spoiled Openbox person since 2008 and a spoiled "overlapping windows" person since 1987 or so (thank the Amiga for that). I like the idea of a tiling window manager exactly for a bunch of terminals and not much else. In the long run I may actually become a dwm nut, but not just yet.

So I had to face it, the time was right to actually learn a terminal multiplexer for real. But which one? For text editors there's an easy (if controversial) answer: just learn vi or emacs, both of those you're likely to find on any UNIX system you may ever have to work with. (Heck even busybox has a vi clone.) That seems to suggest that I should spend time on learning screen for real: It's the oldest terminal multiplexer out there, so it's most likely to be available just about everywhere.

The only problem is that it sucks. If you want to see why it sucks, just start htop in screen. I don't care if that's a terminfo problem or not, in screen the htop interface looks messed up. It's still usable, but you know, the eyes want to be pleased as well (yes, even in the terminal). But it sucks even more: Just split the terminal and then run cat </dev/urandom in one of the splits. Chances are you'll get random garbage spewed all over the other split as well. Doesn't inspire much confidence in how well-isolated those splits are, does it?

So of course I tried tmux next, a much more recent project and maybe "better" because it doesn't have as much buggy legacy code. Sadly it immediately fails the htop test as well, but at least it does a little better when hit with the random hammer: No more spewing into the other split, but still the status line gets trashed and once you stop hammering some of the UI elements are just a little out of it. A little groggy most likely?

One more attempt, let's try dvtm instead. If you don't know, that's a dwm clone inside the terminal. And wow, it passes both the htop test and the random hammer with flying colors, only a few of the border elements get trashed. On the downside, it's least likely to be installed on a random UNIX system you need to work on. That, and it has a bunch of opinionated layout assumptions that may not be to your liking. I, however, like them just fine, at least for the most part.

At this point I started asking myself what I actually need in my terminal multiplexer, and I arrived at a rather short list of features:

  • create a new terminal window with a shell running in it (splitting the terminal area horizontally or vertically or at least sanely)
  • switch from one terminal window to another quickly and reliably
  • destroy an existing terminal window and whatever is running in it

That's really it as far as interactive use on my home machine or laptop is concerned. Sure, being able to detach and reattach sessions later is great, especially when working remotely. But for that use-case I already have screen wired into my brain and fingers. (Also it turns out that dvtm doesn't persist sessions in any which way, so I'd need to use another tool like dtach or (more likely) abduco with dvtm.)

So what am I to do? Which one of these am I to learn and put deep into my muscle memory over the next few weeks? That's when it struck me: I can probably learn none of them and all of them at the same time! After all, I have very few requirements (see above) and as luck would have it, all of the tools are highly configurable. Especially in regards to their keybindings! Can it be done?

Can I configure all three tools in such a way that one set of keybindings will get me all the features I actually need?

Let's see what we find in each program for each use-case, then try to generalize from there. Let's start with screen:

  • CTRL-a-c create a new full-sized window (and start a shell)
  • CTRL-a-n switch to the next full-sized window
  • CTRL-a-p switch to the previous full-sized window
  • CTRL-a-S split current region horizontally (no shell started)
  • CTRL-a-| split current region vertically (no shell started)
  • CTRL-a-TAB switch to the next region
  • CTRL-a-X remove current region (shell turns into full-sized window)
  • CTRL-a-k kill current window/region (including the shell)
  • CTRL-a-d detach screen

So obviously the concepts of window and region are a bit strange, but for better or worse that's what we get. I am pretty sure that regions (splitting one terminal into several "subterminals") were added later? In terms of my use cases it's a bit sad that creating a new region does not also immediately launch a shell in it, instead I have to move to the new region with CTRL-a-TAB and then hit CTRL-a-c to do that manually. It's also strange that removing a region doesn't kill the shell running in it but turns that shell into a "window" instead. And of course navigating windows is different from navigating regions. Let's see how tmux does it:

  • CTRL-b-c create new full-sized window (and start a shell)
  • CTRL-b-n switch to next full-sized window
  • CTRL-b-p switch to previous full-sized window
  • CTRL-b-" split current pane horizontally (and start a shell)
  • CTRL-b-% split current pane vertically (and start a shell)
  • CTRL-b-o switch to next pane
  • CTRL-b-UP/DOWN/LEFT/RIGHT switch to pane in that direction
  • CTRL-b-x kill current pane (as well as shell running in it)
  • CTRL-b-& kill current window (as well as shell running in it)
  • CTRL-b-d detach tmux

So we can pretty much do the same things, but not quite. That's annoying. Note how creating a new "pane" launches a shell and how killing a "pane" kills the shell running in it? That's exactly what screen doesn't do. Also note that tmux has a notion of "direction" with regards to panes, something completely lacking in screen. And note that "windows" are still different from "panes" in terms of navigation. Finally, what's dvtm like?

  • CTRL-g-c create a new window (and start a shell, splitting automatically)
  • CTRL-g-x close current window (and kill the shell running in it)
  • CTRL-g-TAB switch to previously selected window
  • CTRL-g-j switch to next window
  • CTRL-g-k switch to previous window
  • CTRL-g-SPACE switch between defined layouts
  • CTRL-\ detach (using abduco, no native session support)

Note that dvtm's notion of "window" unifies what the other two tools call "window" (full-sized) and "region" or "pane" (split-view). But the biggest difference is that dvtm actually has an opinion about what the layout should be. You can choose between a few predefined layouts (with CTRL-g-SPACE) but that's it. One of these layouts shows one "window" at a time, but the navigation between "windows" stays consistent regardless of how many you can see in your terminal.

So what's the outcome here? Personally, I very much prefer what dvtm does over what tmux does over what screen does. Obviously your mileage may vary, but what I'll try to do here is make all the programs behave (more or less, and within reason) like dvtm.

Of course there are plenty of issues to resolve. For starters, two of the programs care about whether a split is vertical (top/bottom) or horizontal (left/right) while one doesn't. The simplest solution I can think of is to have separate key bindings for each but to map them to the same underlying command in dvtm. This way I'll learn the key bindings for the worst case while enjoying the features of the best case.

Next screen doesn't automatically start a shell in a new split while the other two do. Luckily that's easy to resolve by writing new and slightly more complex key bindings. There are a few additional issues we'll get into below when we talk about how to configure each program, but first we need to "switch gears" as it were: It's time to do some very basic user interface design.

Let's face the most important decision first, namely which "leader key" should I use? We have CTRL-a, CTRL-b, and CTRL-g as precedents, but we don't necessarily have to follow those.

I played with various keys for a while to see what's easiest for me to trigger. My favorite "leader keys" would have to be CTRL-c and CTRL-d. Sadly, those already have "deep meaning" for shells, so I don't want to use either of them. The next-best keys (for my hands anyway) would be CTRL-e and CTRL-f. Sadly, CTRL-e is used for "move to end of line" in bash and I tend to use that quite a bit; screen's CTRL-a sort of shares that problem. In terms of "finger distance" I also find CTRL-a "too close and squishy" and CTRL-b/CTRL-g "too far and stretchy" for myself.

Based on this very unscientific analysis, I ended up picking CTRL-f as the "unified leader key" for all three multiplexers. The drawback of this is that I'll train my muscle memory for "the wrong key" but between three programs, any key would have been "the right key" only for one of them anyway. Big deal. (Yes, in bash CTRL-f triggers "move one character forward" but that's not really a big loss either, is it?)

With the leader settled, how shall we map each use-case to actual key strokes? The guiding principle I'll follow is to rank things by how often I am likely to do them. (Of course that's just a guess for now, if it turns out to be wrong I'll adapt later.) I believe that the ranking is as follows:

  1. Navigate between splits.
  2. Create a new split.
  3. Remove the current split.

If that's true, then moving to the next split should be on the same key as the leader, so the sequence would be CTRL-f-f. That even works mnemonically if we interpret "f" as "forward" or something. Now let's find keys to create splits. At first I thought "something around f" would be good, the idea being that I'd hit the key with the same finger I used to hit "f" a moment before. But that's actually slower than hitting a key with another finger on the same hand. The way I type, I hit "f" with my index finger and my middle finger naturally hovers over "w" in the process. So CTRL-f-w it is, after all that's again a mnemonic, for "window" this time. But what kind of split should it be, horizontal or vertical? I settled on horizontal because in dvtm's default layout, the first split will also be horizontal (left/right). So I'll get some consistency after all. The other key that's quick for me to hit is "e" so CTRL-f-e shall be the vertical (top/bottom) split. That leaves removing the current split, and for that CTRL-f-r is good enough, again with some mnemonic goodness. Here's the summary:

  • CTRL-f-f switch to next split
  • CTRL-f-w split horizontally (and start a new shell)
  • CTRL-f-e split vertically (and start a new shell)
  • CTRL-f-r remove current split (and terminate the shell)

Sounds workable to me. All that remains is actually making the three programs behave "similarly enough" for those keystrokes. As per usual, we'll start with screen. The file to create/edit is ~/.screenrc and here's what we'll do:

hardstatus ignore
startup_message off
escape ^Ff
bind f eval "focus"
bind ^f eval "focus"
bind e eval "split" "focus" "screen"
bind ^e eval "split" "focus" "screen"
bind w eval "split -v" "focus" "screen"
bind ^w eval "split -v" "focus" "screen"
bind r eval "kill" "remove"
bind ^r eval "kill" "remove"

I'll admit right away that my understanding of "hardstatus" is lacking. What I am hoping this command does is turn off any status line that would cost us terminal real estate: I want to focus on the applications I am using, not on the terminal multiplexer and what it thinks of the world. The "startup_message" bit just makes sure that there's no such thing; many distros disable it by default anyway, but for some strange reason Ubuntu leaves it on. (I am all for giving people credit for their work, but that message requires a key press to go away and therefore it's annoying as heck.)

The remaining lines simply establish the key bindings described above, albeit in a somewhat repetitive manner: I define each key twice, once with CTRL and once without, that way it doesn't matter how quickly I release the CTRL key during the sequence of key presses. (If you have a more concise way of doing the same thing, please let me know!)

We should briefly look at what we lose compared to the default key bindings. Our use of "f" overrides "flow control" and luckily I cannot think of many reasons why there should be "flow control" these days. Our use of "w" overrides "list of windows" but since I intend to mostly use splits that's not a big problem. Our use of "e" comes for free because it's not used in the default configuration. Finally, our use of "r" overrides line-wrapping, something I don't imagine caring about a lot. So we really don't lose too much of the basic functionality here, do we?

Next we need to configure tmux. The file to create/edit is ~/.tmux.conf and here's how that one works:

set-option -g status off
set-option -g prefix C-f
unbind-key C-b
bind-key C-f send-prefix
bind-key f select-pane -t :.+
bind-key C-f select-pane -t :.+
bind-key w split-window -h
bind-key C-w split-window -h
bind-key e split-window
bind-key C-e split-window
bind-key r kill-pane
bind-key C-r kill-pane

Different syntax, same story. First we switch off the status bar to get one more line of terminal real estate back. Then we change the leader key to CTRL-f and define (repetitively, I know) the key bindings we settled on above. Easy. (Well, except for figuring out the select-pane arguments, I need to credit Josh Clayton for that. Yes, it's in the man page, but it's hard to grok at first.)

What do we lose? By using "f" we lose the "find text in windows" functionality, something I don't foresee having much use for. By using "w" we lose "choose window interactively" which seems equally useless. Luckily "e" is once again a freebie, a key not used by the default configuration. Finally, by using "r" we lose "force redraw" which is hopefully not something I'll need very often. Seems alright by me!

Last but not least, let's configure dvtm. In true suckless style there is of course no configuration file because that would "attract too many users with stupid questions" and the like. (Arrogance really is bliss, isn't it? Let me just say for the record that I really appreciate the suckless ideals and their software. But that doesn't change the fact that configuration files are convenient for all users (not just idiots!) who don't want to manually compile every last bit of code on their machines. But I digress...) So we have to edit config.h and recompile the application, which in Gentoo amounts to (a) setting the savedconfig USE flag, (b) editing the file

/etc/portage/savedconfig/app-misc/dvtm-0.14

and then (c) re-emerging the application. Here's the (grisly?) gist of it:

#define BAR_POS BAR_OFF
#define MOD CTRL('f')
...
{{MOD, 'f',}, {focusnext, {NULL}}},
{{MOD, CTRL('f'),}, {focusnext, {NULL}}},
{{MOD, 'w',}, {create, {NULL}}},
{{MOD, CTRL('w'),}, {create, {NULL}}},
{{MOD, 'e',}, {create, {NULL}}},
{{MOD, CTRL('e'),}, {create, {NULL}}},
{{MOD, 'r',}, {killclient, {NULL}}},
{{MOD, CTRL('r'),}, {killclient, {NULL}}},

There are a few more modifications I didn't show, but just to remove existing key bindings that conflict with ours. Which brings us to the question what we lose. Our use of "f" costs us the ability to choose one specific layout, something I can live without for sure. Our use of "w" is a freebie, it's unused by default. Our use of "e" costs us "copymode" which, so far, I didn't need; eventually I may have to revisit this decision and maybe remap the functionality elsewhere. Finally, our use of "r" costs us being able to "redraw" the screen, something I hope I won't need too much.

Wow, what a project. Had I known I'd spend about 10 hours on learning all the relevant things and writing this blog post, maybe I would never have started. But now that it's all done, I am actually enjoying the fruits of my labor: Splitting my terminal regardless of what ancient UNIX system I am on? Using fancier tools on newer machines, including my own? Debugging client/server stuff with a lot less need to grab the mouse and click something? It's paradise! Well, close to it anyway...

Update 2016/04/23: My "unified" configuration files are now available on github.com if you want to grab them directly.

Tuesday, March 1, 2016

Revision Control Matters

I am currently teaching our project course in video game design again. I decided early on that all student teams should use a revision control system, specifically git through Bitbucket, to coordinate their work. Sadly I ran into some resistance to this requirement, with many students stating that Dropbox and Google Drive are much more convenient for them. So I thought I'd ask around among my game development friends for opinions, and I got a few. But first let me paraphrase my "official" question to them:

In your esteemed opinion, how important is it for people working on a video game together to be able to use a revision control system to coordinate their work? How many of you have, in your gaming work, been able to get away without using one? Do you think it's less important for artists? How about artists who not only produce art (graphics, sound, etc.) but also write code/scripts?

The answers below are in no particular order. For some I've cleaned up the grammar a little. I've also "anonymized" the people involved because I wasn't sure how they'd like being quoted in public. But rest assured that they are all experienced software developers who've done at least several gaming projects. Let's start with JB:

This is beyond question. Just no question. ... I would never work without a revision control system, even working by myself. Working with a team, running the committed code through automated tests every night, (yelling at the person who broke the code is beyond satisfying), staying in sync, knowing that everything everyone is doing will continuously still work together—mandatory. You will encounter crunch-time bugs that can only be found by rolling back through revisions until the bug doesn't exist. And you want to be creative and unlimited? Branching ... so you can try out something radical is priceless! ... You will learn why you need revision control systems eventually, no matter how thick you are. But at the end of the day, you want to work for a good company with excellent people, and they aren't going to want you unless you are on board with revision control systems. End of story. It's a career requirement. So suck it up buttercup.

Next we have AD with a slightly different opinion:

I would not force students who are resisting. Things like Dropbox provide data sharing which is most of what 1–2 programmer teams need. If they lose their code you can always make fun of them. Personally I always use revision control but my games are very code-centric and I came into game development with a software engineering background. One of the things I like about games is that nobody cares about code or tools—only results. Most software engineers are evaluated by other software engineers and a lot of quasi-religious groupthink comes out of that. Game development is a reality check: Can I really use these software skills to produce something normal people want? Software engineering classes are a better place to force students to learn about revision control, the game development kids should focus on making fun games with minimum friction.

I should say that the student teams in our course are 4–5 students, not 1–2 as AD had assumed. And please note that AD still holds students responsible in case their ad-hoc approach fails. Next we have DC:

For the past few weeks I've been working on a simulator entirely on my own and I'm still kicking myself for not using version control: I've noticed a change in the outputs in the last couple days that is inexplicable and I don't have a recent snapshot I can go back to in order to track down what caused it. Certainly I've never worked at a game company, or heard about any acquaintances doing so, that failed to use version control. It's a universal standard. Should it be required of the students? Arguments in both directions: (a) using it gets them experience with an industry standard; (b) not using it will force them to confront the problems that crop up, and deeply understand why they want version control in the future. ... I usually side with (a) although I was just reading a part of the JHU orientation which ... emphasizes students learning more on their own.

It's true that we try to encourage "learning on your own" and I guess my courses are particularly "infamous" in that regard. However, I would still hold that it's appropriate to force students to do it, in a company environment they most likely would be "forced" as well. But hey, now I am throwing too much of my own perspective in here. Let's get to AS:

It's a requirement!

That's certainly the most succinct reply I received. So on to SC:

I'd say it's a requirement for the code base at the very least. Git or Perforce can be punishing with Unity files, but it's ultimately best. Art I'm more flexible with. An old job tried to handle art with Git and it was bad. Currently our artists maintain stuff on their own Perforce server (since it's unlikely that more than one artist is working on any given asset) and then upon completion things move to the Unity repository.

The point here is that art assets are hard to merge automatically and so a system that allows "locking" is preferable over one that forces merging. Note, however, that there's still a revision control system in place, albeit a different one. I don't have a way to give students Perforce licenses, but maybe Fossil 2.0 will be an option in the future. Of course Subversion supports locking as well, it just feels slightly antiquated these days. Next we have BR:

I was able to get away without one... In Nineteen Ninety Freaking Six!

Speaks for itself, doesn't it? On to NMC's opinion:

Sometimes, for a small project with only one person working on it, you can get away just periodically taking your entire project folder and compressing it to a zip file. This is barely adequate for one person working alone on a recreational project, and even in that situation version control would make your life easier.

I think I am starting to see a pattern here, don't you? Finally we have JC (who refers back to BR above):

Is it essential for every project? No. Is it useful for every project? Yes. A career hallmark of game development (and maybe the tech industry itself) is enjoying learning new things constantly. There is always a better way. Either you love that or you think you've learned enough. The people who think they know enough don't tend to last or have passed into a phase of their career that likely won't last in a satisfying way. Our current view is Git (or Hg) for code and Perforce for content. They just make life easier. For the size of Unity projects you will encounter in college, Git (or Hg) alone is totally fine.

Git is something that everyone hates and doesn't understand if they use it rarely. If they use it regularly, they usually love it. It's crazy fast and lets you do all the things you want to do. Perforce is nice in that it is rock-solid and handles large binaries well. But you also lose information every time you merge (unlike a distributed system) and branching is painful, particularly after the fact. And, of course, it isn't free ...

I probably agree with BR: let them do whatever but no excuses accepted for lost work. That is the reality. But it's definitely a plus for us as a company when an entry-level candidate has real experience with the major source control systems. For one, it's just a skill like anything else. But, second, it gives some indication that they've done enough work to understand the value.

On a side note: BR "got away with it" until I was hand-merging changes from him and SM every day ... in addition to writing the core game systems. To his credit, BR fully supported the change. And, agh, we used SourceSafe. That I would not recommend.

Seems to me that this (biased?) sample of opinions tends to agree with my feeling that every student should learn how to work with a revision control system. So I for my part feel "vindicated" as it were.

If you're a student in one of my project courses and you hate the idea of learning a revision control system, re-read these comments from industry professionals a few times. Then ask yourself if you'd rather be able to say, truthfully, that you learned something like git or if you'd rather apologize that you didn't. Your call, but I am almost certain that your answer will have an effect on your chances of getting hired.

(I'd like to thank everybody who answered my question back when I posted it on Facebook. You're the best!)