*exceptional* characters instead. Out of a million characters rolled up, how many are exceptional? Well, what do we mean by exceptional? Maybe a good approximation would be the total sum of all ability modifiers? For simplicity let's just use the "standard" scale from -3 to +3 for all abilities. Here are a few examples of "amazing" characters as well as the chance of rolling one:

Class | S | I | W | D | C | X | Total | Chance |
---|---|---|---|---|---|---|---|---|
Cleric | 15 | 16 | 18 | 12 | 16 | 18 | +11 | 0.0001% |
Fighter | 18 | 11 | 16 | 17 | 14 | 18 | +11 | 0.0002% |
Magic-User | 9 | 18 | 17 | 18 | 13 | 16 | +11 | 0.0003% |
Thief | 18 | 10 | 15 | 18 | 16 | 13 | +10 | 0.0016% |
Dwarf | 18 | 13 | 14 | 16 | 18 | 18 | +13 | 0.0001% |
Elf | 18 | 18 | 14 | 14 | 18 | 15 | +12 | 0.0001% |
Halfling | 18 | 13 | 17 | 16 | 18 | 15 | +12 | 0.0002% |

You may have guessed it: rolling up one of these monsters is basically impossible. The best odds (certainly not a systemic issue, just a bunch of lucky rolls) go to the thief, and even there only 16 out of 1 million characters are this awesome. That's not a lot, certainly not enough to ever actually see one of these characters in your games. But let's scale back a little bit: a character is pretty decent already if you get a total of +3 or more in terms of modifiers (and pretty bad if you get -3 or less), so what's the chance of rolling that? It turns out to be a lot better:
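For the curious, here is a minimal sketch of the kind of simulation behind these numbers. The exact modifier mapping is my assumption (a symmetric B/X-style scale, 3 → -3 up to 18 → +3), and I use a smaller sample of 100,000 to keep the run short:

```python
import random

# Assumed "standard" modifier scale (symmetric, -3 to +3):
# 3 -> -3, 4-5 -> -2, 6-8 -> -1, 9-12 -> 0, 13-15 -> +1, 16-17 -> +2, 18 -> +3
def modifier(score):
    return (-3 if score == 3 else -2 if score <= 5 else -1 if score <= 8
            else 0 if score <= 12 else 1 if score <= 15
            else 2 if score <= 17 else 3)

def roll_character():
    """Roll 3d6 in order for the six abilities."""
    return [sum(random.randint(1, 6) for _ in range(3)) for _ in range(6)]

N = 100_000  # smaller than the post's 1,000,000, for speed
hits = sum(sum(map(modifier, roll_character())) >= 10 for _ in range(N))
print(f"{hits} of {N:,} characters total +10 or better ({100 * hits / N:.4f}%)")
```

Raising `N` to 1,000,000 matches the sample size used for the table above, at the cost of a longer run.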

Class | Chance >= +3 | Chance <= -3 |
---|---|---|
Cleric | 11.65% | 12.79% |
Fighter | 11.57% | 12.75% |
Magic-User | 11.54% | 12.79% |
Thief | 11.60% | 12.75% |
Dwarf | 14.59% | 8.05% |
Elf | 15.51% | 8.33% |
Halfling | 19.53% | 4.61% |

It's a little surprising (for me anyway) that human characters have a better chance of being "bad" than "good" according to my simulation. Demi-human characters, on the other hand, have a better chance of turning out "good" as it were. Halflings are especially lucky in this regard, so maybe the next time you roll up a character who actually qualifies for being a Halfling, you should really go for one of those little buggers?
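The effect of the demi-human minimums can be checked directly: re-roll until a character qualifies, then tally the good ones. The minimums below (Dwarf: Con 9, Elf: Int 9, Halfling: Dex 9 and Con 9) are my reading of the B/X requirements, so treat both them and the modifier scale as assumptions:

```python
import random

def roll_3d6():
    return sum(random.randint(1, 6) for _ in range(3))

def modifier(score):  # assumed symmetric -3..+3 scale
    return (-3 if score == 3 else -2 if score <= 5 else -1 if score <= 8
            else 0 if score <= 12 else 1 if score <= 15
            else 2 if score <= 17 else 3)

# Abilities in order: S, I, W, D, C, X (indices 0-5).
# These minimums are my reading of the B/X requirements, i.e. an assumption.
MINIMUMS = {"Human": {}, "Dwarf": {4: 9}, "Elf": {1: 9}, "Halfling": {3: 9, 4: 9}}

def chance_good(mins, n=20_000, threshold=3):
    """Fraction of qualifying characters whose total modifier meets the threshold."""
    good = 0
    for _ in range(n):
        while True:  # discard (re-roll) characters who don't qualify
            scores = [roll_3d6() for _ in range(6)]
            if all(scores[i] >= lo for i, lo in mins.items()):
                break
        good += sum(map(modifier, scores)) >= threshold
    return good / n

for klass, mins in MINIMUMS.items():
    print(f"{klass:8s}: {100 * chance_good(mins):.2f}% at +3 or better")
```

Even with this modest sample size, the gap between humans and halflings is far larger than the sampling noise.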

(Sorry, I am sitting at my dad's weird Windoze box in Germany, so I don't have access to my usual Python toolbox of visualizations. I was going to plot the actual distributions for you, alas I'll have to add those things at a later date. I hope you still enjoyed reading what I have.)

Since you discard the characters who don't meet minimum requirements, doesn't it follow that demi-men will be better than men? They've already qualified for a stricter class. So I think you have it backward: Hobbits are the best because the characters who can be hobbits are on average better than anyone else.

That being said, they actually are better. Their bonuses work while they're in plate armor, and their smaller weapons are not that much of a hassle. You could even house-rule a small pole arm, so they could get a 1d8 weapon. So with a hobbit, you get an evasive fighter with a couple of Thief abilities. The level limit doesn't hurt much in either edition, either theoretically or in practice. In BX you're only going to 14 and you already get better saves than the fighter. In BECM, you additionally get attack ranks and weapon mastery; the big drawback is hit points at levels 15+.

It's even better to be a Hobbit in Holmes, where everyone stays little, and all weapons do 1d6.

I am not sure that my discarding unsuitable characters for demi-humans has much to do with the statistics in this post. Delta would know for sure, alas I am not certain he's reading along. My unqualified reasoning goes like this: If I had not "culled" the unqualified halflings for example, I would still be left with about 500,000 of them (that's just how the minimum requirements work out). And whether I do the remaining process with 500,000 base or 1,000,000 base individuals I don't think makes much difference. But I can play with the code and re-run the stats to make sure.

I'm really enjoying your statistical analysis. My current practice is to have players roll 3d6 in order, but then roll 3d6 a seventh time and swap out the results with any stat of their choice. In practice, this tends to avoid "unplayable" characters without unduly increasing the really high-range scores, but I've not yet figured out what it really does statistically.
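One way to put a rough number on this house rule: assume the player swaps the seventh roll in for their current worst score whenever that helps (a greedy simplification of "any stat of their choice"), then compare average totals with and without the swap. The -3 to +3 modifier scale is again an assumption on my part:

```python
import random

def roll_3d6():
    return sum(random.randint(1, 6) for _ in range(3))

def modifier(score):  # assumed symmetric -3..+3 scale
    return (-3 if score == 3 else -2 if score <= 5 else -1 if score <= 8
            else 0 if score <= 12 else 1 if score <= 15
            else 2 if score <= 17 else 3)

def total_modifier(swap=True):
    """Total modifier of one character; optionally apply the 7th-roll swap."""
    scores = [roll_3d6() for _ in range(6)]
    if swap:
        seventh = roll_3d6()
        worst = min(range(6), key=scores.__getitem__)
        if seventh > scores[worst]:
            scores[worst] = seventh  # greedy: always replace the worst score
    return sum(map(modifier, scores))

N = 50_000
plain = sum(total_modifier(swap=False) for _ in range(N)) / N
ruled = sum(total_modifier(swap=True) for _ in range(N)) / N
print(f"average total modifier: {plain:+.2f} plain, {ruled:+.2f} with 7th-roll swap")
```

A real player might keep a low Strength on a magic-user and swap elsewhere, so treat the greedy policy as an upper-bound sketch rather than the rule's true effect.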

I like it. I've also heard of folks using 7 rolls and then letting players assign freely among abilities and starting gold. Whatever makes folks happy is what I say. :-)

In my (currently on hiatus, sadface) "Expedition to the Borderlands" B2-inspired micro hexcrawl I house-ruled that characters with a total of -2 or worse in modifiers can be re-rolled.

In my (never really started, sadface) campaign game I actually use Greg Gorgonmilk's glorious invention: http://gorgonmilk.blogspot.de/2013/02/ability-score-rolling-matrix.html

Nice article, I love exploring the statistical/stochastic reality behind D&D dice rolling. I do think you made some fundamental errors in presenting your data: the results and percentages you post only hold for the population of a million characters you created with your program, and not necessarily for the whole set of possible characters. For example, the 3d6 × 6 in-order method yields 101+ trillion possible die-roll combinations and 16+ million possible stat combinations (averaging 6+ million die-roll combinations per possible stat combination, ranging from 1 for all 3's or all 18's to 387+ million for all 10's). The best way, in my opinion, to look at possible D&D character stats for the 3d6-in-order method is to work with a matrix (in Excel, for instance) and do some number crunching with all possible die-roll combinations. This method shows that classes/races with requirements (using the method where you discard rolls that do not meet the requirements) do yield better scores than character classes without requirements. A single minimum of 9, or two minimums of 9, yields an average overall score of 10.7 and 10.9 respectively, compared to 10.5, and an average modifier of 0.4 and 0.8. That should explain the halfling scores above. The exact calculations are probably a bit too much to post here, but if you're interested I can send you the Excel file.

You are certainly correct that I only look at a sample instead of the entire universe. But that's how people usually derive meaningful distributions and averages, as far as I recall from my statistics courses. To be honest, I never considered trying to look at all possible die rolls, although as you point out that would give the most solid analysis, basically in line with what a theoretical derivation would show as well. I felt that a sample of 1 million is sufficient, but there's always room for improvement. :-)

I'm not a statistician/stochastician, but what I remember from my biological science background is that the most important choice in statistics is what you calculate, and how, based on what you want to say about it. In this case a character rolled with 3d6 in order is 1 outcome out of a large but definitely finite landscape of 101+ trillion possibilities. A large number, but it can be contained in a 16 (scores 3 to 18) by 6 (six stats) grid of chances per stat value.

Granted, as soon as you figure in choices like swapping two stats, assigning as you like, or the option of lowering a specific stat to raise another, the grid won't be enough and you'll probably be faster using sampling. Even then, a sample of 1 million only covers a fraction of the 16+ million possible unique stat combinations (16^6). For reference, here are the die-roll combinations (out of 216) behind each stat value:

Stat | S | I | W | D | C | X |
---|---|---|---|---|---|---|
3 | 1 | 1 | 1 | 1 | 1 | 1 |
4 | 3 | 3 | 3 | 3 | 3 | 3 |
5 | 6 | 6 | 6 | 6 | 6 | 6 |
6 | 10 | 10 | 10 | 10 | 10 | 10 |
7 | 15 | 15 | 15 | 15 | 15 | 15 |
8 | 21 | 21 | 21 | 21 | 21 | 21 |
9 | 25 | 25 | 25 | 25 | 25 | 25 |
10 | 27 | 27 | 27 | 27 | 27 | 27 |
11 | 27 | 27 | 27 | 27 | 27 | 27 |
12 | 25 | 25 | 25 | 25 | 25 | 25 |
13 | 21 | 21 | 21 | 21 | 21 | 21 |
14 | 15 | 15 | 15 | 15 | 15 | 15 |
15 | 10 | 10 | 10 | 10 | 10 | 10 |
16 | 6 | 6 | 6 | 6 | 6 | 6 |
17 | 3 | 3 | 3 | 3 | 3 | 3 |
18 | 1 | 1 | 1 | 1 | 1 | 1 |
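The matrix idea can be sketched in a few lines of Python without sampling at all: build the exact 3d6 distribution (which reproduces the counts above), map it to modifiers, and convolve six times to get the exact distribution of a character's total. The -3 to +3 mapping below is my assumption, and note that because it is symmetric it forces P(total >= +3) and P(total <= -3) to come out exactly equal, so any gap between the two tails in sampled figures comes down to the precise modifier scale used:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Exact 3d6 distribution: enumerate all 216 ordered rolls once.
counts = Counter(sum(dice) for dice in product(range(1, 7), repeat=3))
assert counts[3] == 1 and counts[10] == 27 and counts[18] == 1  # matches the table

def modifier(score):  # assumed symmetric -3..+3 scale
    return (-3 if score == 3 else -2 if score <= 5 else -1 if score <= 8
            else 0 if score <= 12 else 1 if score <= 15
            else 2 if score <= 17 else 3)

# Exact distribution of a single ability's modifier.
mod_dist = Counter()
for score, c in counts.items():
    mod_dist[modifier(score)] += Fraction(c, 216)

# Convolve six times: exact distribution of the total modifier over six abilities.
total = {0: Fraction(1)}
for _ in range(6):
    nxt = Counter()
    for t, p in total.items():
        for m, q in mod_dist.items():
            nxt[t + m] += p * q
    total = nxt

p_good = sum(p for t, p in total.items() if t >= 3)
p_bad = sum(p for t, p in total.items() if t <= -3)
print(f"P(total >= +3) = {float(p_good):.4%}, P(total <= -3) = {float(p_bad):.4%}")
```

With exact fractions the two tails are provably identical here, which makes a handy sanity check against any sampled version.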