[RPG] Instead of rolling for every creature hit by a spell, can we reasonably use a single additional die for the number of hits

dnd-5e, house-rules, spells

I will preface this question by stating that the hypothetical situation proposed below will obviously only work as long as you have a die with a number of faces equal to the number of enemies.

Scenario

Suppose we have a party of PCs who encounter 8 identical hostile enemies. One of the PCs is a wizard, who decides to cast Fireball in such a way that it will hit all 8 enemies at once. Since we don't roll to-hit with Fireball, the enemies each make a Dexterity saving throw. Normally, this would be rolled for each discrete enemy, for a total of 8 individual rolls.

Question

Would it be entirely unfair to roll a single d20 and a single d8? If the d20 roll succeeds on the Dex save, rule that the d8 roll is the number of enemies that take half (or full) damage, while the rest take full (or half) damage. Conversely, if the Dex save fails, all enemies take full damage.

Thoughts

I'm considering this as an approach to speeding up dice rolls for large packs of enemies, but I can't tell outright whether the scenario above is unfair. My first impression is that it might skew depending on save bonuses (or AC, if we're rolling to-hit instead of saves) and the number of enemies.

For the example, let us consider two sets of outcomes: one for a pack of Kobolds and one for a group of Bandits.

I know this all essentially boils down to dice math, but I'm not the greatest at it and would appreciate the help evaluating whether this is fair, and what the reasonable thresholds are.
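To pin down the mechanic I have in mind, here's a quick Monte Carlo sketch of the proposal versus rolling each save individually. The flat 50% save chance is an arbitrary assumption for the example, not something from the scenario:

```python
import random

TRIALS = 100_000
N_ENEMIES = 8
SAVE_CHANCE = 0.5  # assumed flat 50% save chance; swap in any value to test

def individual_rolls():
    """Each enemy saves independently; return how many take FULL damage."""
    return sum(1 for _ in range(N_ENEMIES) if random.random() >= SAVE_CHANCE)

def proposed_rule():
    """One group save: on a success, 1d8 enemies take half damage and the
    rest take full; on a failure, everyone takes full damage."""
    if random.random() < SAVE_CHANCE:          # group save succeeds
        half_takers = random.randint(1, 8)     # 1d8 enemies take half
        return N_ENEMIES - half_takers
    return N_ENEMIES                           # group save fails: all full

avg_real = sum(individual_rolls() for _ in range(TRIALS)) / TRIALS
avg_prop = sum(proposed_rule() for _ in range(TRIALS)) / TRIALS
print(f"avg taking full damage, individual rolls: {avg_real:.2f}")  # ~4.00
print(f"avg taking full damage, proposed rule:    {avg_prop:.2f}")  # ~5.75
```

Even at a 50% save chance, the proposal pushes more enemies into the full-damage bucket on average.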

Best Answer

As everyone noticed, your system makes AoEs stronger: the average damage you get is higher, sometimes by a large factor.

We can fix it.

  • Make two saves -- roll 2d20.

  • If both fail/pass, everyone fails/passes.

  • If one fails and one passes, roll 1d8 to determine how many fail (I'm using a group of 9 creatures here so that 1d8 covers every mixed outcome; neither "all" nor "none" should be possible in this case).

This has a higher variance, but exactly the same average, and reasonably emulates rolling 9 individual saves.
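A minimal sketch of the procedure in Python, with a flat save probability standing in for a DC and save bonus (an assumption for brevity):

```python
import random

def group_saves(n=9, p=0.5):
    """2d20 trick: two group rolls decide all-pass, all-fail, or a mixed
    result where 1d(n-1) creatures pass."""
    a, b = random.random() < p, random.random() < p
    if a and b:
        return n                          # both succeed: everyone passes
    if not a and not b:
        return 0                          # both fail: everyone fails
    return random.randint(1, n - 1)       # mixed: 1d8 creatures pass

TRIALS = 200_000
avg = sum(group_saves() for _ in range(TRIALS)) / TRIALS
print(f"average passers: {avg:.2f}")      # ~4.50, matching 9 * 0.5
```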

Proof:

Suppose each individual creature has a probability \$P\$ of saving. Then the expected number of creatures making the save is \$9P\$.

Meanwhile, the 2d20 trick has 3 possibilities:

  • both rolls save; the chance of this happening is \$P^2\$.
  • both rolls fail; the chance of this happening is \$(1-P)^2 = 1-2P+P^2\$.
  • one save, one fail; the chance of this is \$2P(1-P) = 2P-2P^2\$. (The factor of 2 is because if you have a red and a black d20, the red one could pass while the black one fails, or vice versa.)

If two saves mean that all 9 creatures pass, two fails mean that all 9 creatures fail, and one save plus one fail means that 1d8 creatures pass, then the average number of creatures that pass will be:

$$ 9P^2 + 0 \times (1-2P+P^2) + 4.5 \times 2(P-P^2) = 9P^2 + 9P - 9P^2 = 9P. $$

This is the same average as rolling 9 saves individually.

The variance is significantly higher than in the "real" case. The chance that all pass is \$P^2\$ here; in the original case it was \$P^9\$, a much lower value. The same is true of everyone failing. The middle probabilities are also flatter and fatter than in the real case.
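Both claims can be checked exactly. At an assumed 50% save chance, the two schemes' distributions of "how many pass" compare like this:

```python
from math import comb

p, n = 0.5, 9  # assumed 50% save chance, 9 creatures

# Exact pmf of "k creatures pass" under 9 independent saves (binomial).
real = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# Exact pmf under the 2d20 trick: all-fail, uniform mixed, all-pass.
trick = [0.0] * (n + 1)
trick[0] = (1 - p) ** 2                  # both group rolls fail
trick[n] = p ** 2                        # both group rolls pass
for k in range(1, n):                    # mixed: 1d8 uniform over 1..8
    trick[k] = 2 * p * (1 - p) / (n - 1)

def mean(pmf):
    return sum(k * q for k, q in enumerate(pmf))

def var(pmf):
    return sum(k * k * q for k, q in enumerate(pmf)) - mean(pmf) ** 2

print(f"means:     real {mean(real):.2f}, trick {mean(trick):.2f}")  # both 4.50
print(f"variances: real {var(real):.2f}, trick {var(trick):.2f}")    # 2.25 vs 12.75
print(f"P(all 9 pass): real {real[n]:.4f}, trick {trick[n]:.4f}")    # 0.0020 vs 0.2500
```

The averages agree exactly, while the trick's variance and tail probabilities are far larger.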

But you'll get the feel of "a random number of foes pass/fail", and a similar average.

Note that higher variation is typically slightly harmful to PCs, as on average they win fights; when you are in the lead, you want less variation.

However...

Really, just calculate what you need to roll (say, a 13 to save). Then rolling 9d20 and counting results of 13 and above is not hard, especially if you have at least 3 d20s to roll at once.

The trick is to work out the threshold on the d20 before rolling the pool, so you don't have to do per-die math; you just count the dice that meet the threshold.

When you need such a system:

If you had 100 creatures who had to make a saving throw, rolling 100 dice and counting starts getting really painful (barring an automated solution).

What I'd do is roll 5 saves (so 5d20) and multiply the number of successes by 20, giving 0, 20, 40, 60, 80, or 100. Then add 1d20 and subtract 1d20. That many pass the save (capped at "everyone" and "no one").

Much like the smaller model above, the variance here is much higher than in the "real" case, but the average remains identical. And you'll get a distribution where the tails are less likely than the middle.
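A sketch of that recipe for 100 creatures, again with an assumed flat 50% save chance:

```python
import random

def mass_saves(n=100, p=0.5):
    """Roll 5 sample saves (5d20 in spirit), scale the successes up to the
    group size, then add 1d20 - 1d20 as jitter; cap at 'no one'/'everyone'."""
    savers = sum(random.random() < p for _ in range(5)) * 20
    savers += random.randint(1, 20) - random.randint(1, 20)
    return max(0, min(n, savers))

TRIALS = 200_000
avg = sum(mass_saves() for _ in range(TRIALS)) / TRIALS
print(f"average savers out of 100: {avg:.1f}")   # hovers near 50
```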

Or Use Statistics

Var(1d20 >= X) = (21-X)(X-1)/400

This is bounded above by 1/4.

Variance is additive for independent events, so Var(roll 100d20, count how many are >= X) is 100 * (21-X)(X-1)/400, and bounded above by 100/4.

Standard deviation is Sqrt(Var), which is bounded above by Sqrt(N)/2.

Var(K*(1d12-1d12)) = K^2 * 143/6, so SD(K*(1d12-1d12)) = K*Sqrt(143/6) =~ 4.88K.

If we set Sqrt(N)/2 = 4.88K, we get Sqrt(N) = 9.76K, or K =~ Sqrt(N)/10.

So what does this mean if we want to emulate, reasonably accurately, how many out of a huge number of targets make or fail a saving throw?

First work out what fraction would fail on average. If the DC is 15 and the targets have no save bonus, they fail on a roll of 14 or less, so 14 out of every 20 targets should fail on average.

Now take the square root of the number of targets. If there are 400 targets, the square root is 20. Divide by 10, giving K = 2.

Then add 2 times 1d12, and subtract 2 times 1d12, from the average number of targets who save.

This has the same average as "really" rolling 400 d20s and counting how many roll 15 or higher, and it also has the same standard deviation at a 50-50 chance (and a higher one at non-50-50 chances). (It has quite different higher-order moments.)
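A sketch of the whole recipe for 400 targets, using a 50-50 save chance since that is where the standard-deviation match is exact:

```python
import random
import statistics

N = 400     # targets
K = 2       # Sqrt(400) / 10
p = 0.5     # assumed 50% save chance

def approx_savers():
    """Average count plus K*(1d12 - 1d12) jitter, capped at 0..N."""
    base = round(N * p)
    jitter = K * (random.randint(1, 12) - random.randint(1, 12))
    return max(0, min(N, base + jitter))

def real_savers():
    """Actually roll 400 independent saves and count successes."""
    return sum(random.random() < p for _ in range(N))

approx = [approx_savers() for _ in range(100_000)]
real = [real_savers() for _ in range(20_000)]
print(f"approx: mean {statistics.mean(approx):.1f}, sd {statistics.stdev(approx):.2f}")
print(f"real:   mean {statistics.mean(real):.1f}, sd {statistics.stdev(real):.2f}")
```

Both means land on 200; the approximation's standard deviation (about 2 × 4.88 = 9.76) closely tracks the real one (Sqrt(400 × 1/4) = 10).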

K=1 should have at least 40 targets; K=2 starts at 200 targets; K=3 at 600; K=4 at 1200; K=5 at 2000; K=6 at 3000; K=7 at 4000; K=8 at 5000; K=9 at 7000; K=10 at 9000.

If you need to roll a save for more than 10000 targets, perhaps try a different system. ;)