First of all, to model your mechanic in AnyDice, you're going to need to write a function that accepts a sequence (i.e. a parameter tagged with :s) and pass your Xd6 roll to it. That's the general way to implement arbitrary dice inspection mechanics in AnyDice.
Inside the function (which AnyDice will automatically invoke for every possible Xd6 roll) the parameter will be a normal X element sequence of numbers between 1 and 6 (sorted from highest to lowest), which you can examine in any way you like to calculate the damage.
Next, you'll need some way to select the dice that rolled above Y. AnyDice doesn't have a function for that built in, but you could write one yourself, e.g. something like:
function: remove VALUES:s from SEQUENCE:s {
    KEPT: {}
    loop N over SEQUENCE {
        if !(N = VALUES) {
            KEPT: {KEPT, N}
        }
    }
    result: KEPT
}
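If AnyDice's syntax is unfamiliar, here's a rough Python equivalent of this helper (the function name is my own, purely for illustration):

```python
def remove_values(values, sequence):
    """Return the elements of sequence that do not appear in values,
    preserving order, like the AnyDice helper above."""
    return [n for n in sequence if n not in values]

# A 4d6 roll as AnyDice would present it: sorted highest to lowest.
roll = [5, 3, 2, 1]
print(remove_values(range(1, 3), roll))  # drop everything <= 2, leaving [5, 3]
```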
With this helper function, implementing the first two of your damage calculation mechanics is fairly straightforward:
function: count times highest above Y:n of ROLL:s {
    KEPT: [remove {1..Y} from ROLL]
    result: #KEPT * 1@KEPT
}

function: count times lowest above Y:n of ROLL:s {
    KEPT: [remove {1..Y} from ROLL]
    result: #KEPT * (#KEPT)@KEPT
}
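As a sanity check on these formulas, here's how the same two calculations might look in Python for a single roll (the names are mine; the roll is assumed sorted highest to lowest, as AnyDice presents it):

```python
def count_times_highest(roll, y):
    # Keep only the dice that rolled above the threshold y.
    kept = [n for n in roll if n > y]
    # Damage = number of kept dice times the highest kept roll.
    return len(kept) * kept[0] if kept else 0

def count_times_lowest(roll, y):
    kept = [n for n in roll if n > y]
    # kept is sorted descending, so the last element is the lowest.
    return len(kept) * kept[-1] if kept else 0

roll = [6, 4, 4, 2]  # a 4d6 roll, sorted highest to lowest
print(count_times_highest(roll, 2))  # 3 kept dice * highest 6 = 18
print(count_times_lowest(roll, 2))   # 3 kept dice * lowest 4 = 12
```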
The last one is a bit trickier, since we also need some way to calculate the most common value among the remaining rolls. Fortunately we can write a helper function for that too, e.g. like this:
function: most common value in SEQUENCE:s {
    MODE: 0
    MAX: 0
    loop N over SEQUENCE {
        COUNT: N = SEQUENCE
        if COUNT > MAX {
            MODE: N
            MAX: COUNT
        }
    }
    result: MODE
}
function: count times most common above Y:n of ROLL:s {
    KEPT: [remove {1..Y} from ROLL]
    result: #KEPT * [most common value in KEPT]
}
Note that if two values are tied for most common, this function prefers whichever comes first in the sequence, i.e. the highest, given AnyDice's default descending sort for dice rolls. We can invert this tie-breaking rule by reversing the sequence before passing it to the helper function, like this:
function: count times most common above Y:n of ROLL:s prefer lowest {
    KEPT: [remove {1..Y} from ROLL]
    result: #KEPT * [most common value in [reverse KEPT]]
}
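The effect of the tie-breaking order is easy to demonstrate in Python (an illustrative sketch mirroring the AnyDice helpers, with names of my own choosing):

```python
def most_common_value(sequence):
    # Scans in order and only replaces the mode on a strictly higher
    # count, so ties go to whichever value appears first in the
    # sequence, just like the AnyDice helper.
    mode, best = 0, 0
    for n in sequence:
        count = sequence.count(n)
        if count > best:
            mode, best = n, count
    return mode

kept = [5, 5, 2, 2]  # descending, as AnyDice presents rolls
print(most_common_value(kept))        # ties broken high -> 5
print(most_common_value(kept[::-1]))  # reversed: ties broken low -> 2
```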
Now, to compare the different methods, all we need to do is call these functions, e.g. like this:
X: 4
Y: 2
D: d{1:Y, Y+1 .. 6} \ use relabeled die for faster calculation! \
output [count times highest above Y of XdD] named "[X]d6 > [Y], count * highest"
output [count times lowest above Y of XdD] named "[X]d6 > [Y], count * lowest"
output [count times most common above Y of XdD] named "[X]d6 > [Y], count * most common"
output [count times most common above Y of XdD prefer lowest] named "[X]d6 > [Y], count * most common (prefer lowest)"
Note a minor speed optimization I added to this code: since none of the functions care about the specific values of rolls between 1 and Y, I defined a custom die D that looks like a d6 with all sides up to Y relabeled as 1. By using this relabeled die instead of a normal d6 we can tell AnyDice not to waste time enumerating all the possible combinations of those rolls that we're going to throw away anyway. Of course, you can replace XdD in the code above with Xd6 and confirm that it still gives the same results, just a bit slower.
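To get a sense of how much the relabeling saves, we can count the sorted rolls (multisets) AnyDice must enumerate: a pool of X dice with F distinct faces has C(X + F - 1, X) of them. A quick Python check for X = 4 and Y = 2 (my own illustration, not part of the AnyDice code above):

```python
from math import comb

def multisets(num_dice, num_faces):
    # Number of sorted rolls of num_dice dice with num_faces distinct faces.
    return comb(num_dice + num_faces - 1, num_dice)

# 4d6 with all six faces distinct:
print(multisets(4, 6))  # 126
# Relabeled die for Y = 2: faces {1, 3, 4, 5, 6}, only 5 distinct values.
print(multisets(4, 5))  # 70
```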
The resulting damage distributions look pretty wonky, so it may be more instructive to just look at the average damage obtained using each method. For this, it can be convenient to run each function for all (meaningful) values of Y from 0 to 5 in a loop, e.g. like this:
X: 4
loop Y over {0..5} {
    D: d{1:Y, Y+1 .. 6}
    output [count times highest above Y of XdD] named "[X]d6 > [Y], count * highest"
}
loop Y over {0..5} {
    D: d{1:Y, Y+1 .. 6}
    output [count times lowest above Y of XdD] named "[X]d6 > [Y], count * lowest"
}
loop Y over {0..5} {
    D: d{1:Y, Y+1 .. 6}
    output [count times most common above Y of XdD] named "[X]d6 > [Y], count * most common"
}
loop Y over {0..5} {
    D: d{1:Y, Y+1 .. 6}
    output [count times most common above Y of XdD prefer lowest] named "[X]d6 > [Y], count * most common (prefer lowest)"
}
Looking at the results in summary mode, we see some interesting behavior:
For the rule variants using the highest retained roll, or the most common roll with a tie-breaker favoring high rolls, we see the average damage going down as the armor level Y goes up, as expected.
For the variants preferring lower rolls, however, the maximum average damage is actually attained for intermediate values of Y. In hindsight, this makes sense: with low Y, the number of kept dice may be higher, but it's multiplied by the lowest kept roll, which is typically quite low.
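These trends can be double-checked outside AnyDice by brute force over all 6^4 = 1296 rolls; here's a quick Python sketch (the function names are mine):

```python
from itertools import product
from statistics import mean

def damage_highest(roll, y):
    # Count * highest among dice above the threshold.
    kept = [n for n in roll if n > y]
    return len(kept) * max(kept) if kept else 0

def damage_lowest(roll, y):
    # Count * lowest among dice above the threshold.
    kept = [n for n in roll if n > y]
    return len(kept) * min(kept) if kept else 0

rolls = list(product(range(1, 7), repeat=4))
avg_high = [mean(damage_highest(r, y) for r in rolls) for y in range(6)]
avg_low = [mean(damage_lowest(r, y) for r in rolls) for y in range(6)]

# Average damage for "count * highest" never increases as Y goes up...
assert all(a >= b for a, b in zip(avg_high, avg_high[1:]))
# ...while "count * lowest" peaks at an intermediate Y.
assert max(avg_low) > avg_low[0] and max(avg_low) > avg_low[-1]
```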
Thus, assuming that you in fact want higher Y to result in lower damage on average, I'd recommend going with variant 1 (number of rolls above Y × highest roll), or possibly with variant 3 (number of rolls above Y × most common roll above Y) with ties broken in favor of higher rolls.
Addendum: There's in fact a simpler and more efficient way to implement your first model variant:
function: count times highest above Y:n of ROLL:s {
    result: (ROLL > Y) * 1@ROLL
}
This works because the highest retained roll (if any) is always equal to the highest roll in the entire dice pool, i.e. 1@ROLL, and is thus independent of the threshold Y. And if no rolls are retained (i.e. if ROLL > Y is zero), then the value of the highest roll doesn't matter anyway, since it will be multiplied by zero.
A similar trick can also be used for the second variant:
function: count times lowest above Y:n of ROLL:s {
    COUNT: ROLL > Y
    result: COUNT * COUNT@ROLL
}
Here, we're relying on the fact that ROLL is always sorted in descending order, so if COUNT values in ROLL are greater than Y, then COUNT@ROLL is the lowest of those. (If there are no rolls greater than Y, then 0@ROLL will return something arbitrary, actually zero, but again it doesn't matter, since that value will get multiplied by zero.)
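In Python terms, with zero-based indexing, the same indexing trick looks like this (an illustrative sketch, not AnyDice code):

```python
def count_times_lowest(roll, y):
    # roll is sorted in descending order, as AnyDice presents it.
    count = sum(1 for n in roll if n > y)
    # The count-th element (1-based), i.e. index count - 1, is the
    # lowest of the dice that beat the threshold.
    return count * roll[count - 1] if count else 0

roll = sorted([2, 6, 3, 5], reverse=True)  # [6, 5, 3, 2]
print(count_times_lowest(roll, 2))  # 3 kept (6, 5, 3) * lowest 3 = 9
```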
Alas, I can't think of a similar clever trick for your third variant. It would be possible to calculate the result in one loop with no helper functions, e.g. like this:
function: count times most common above Y:n of ROLL:s {
    COUNT: ROLL > Y
    MODE: 0
    MAX: 0
    loop I over {1..COUNT} {
        N: I@ROLL = ROLL
        if N > MAX {
            MODE: I@ROLL
            MAX: N
        }
    }
    result: COUNT * MODE
}
But I don't think that's any more readable or easier to modify than the version using the helper functions, and it's probably not significantly faster, either.
Efficient computation
The trouble with AnyDice is that it works by enumerating all possible multisets of numbers rolled. While this is faster than enumerating all ordered (i.e. unsorted) rolls, and is convenient in that it gives you the entire sorted roll at once, the number of possible multisets can still grow quickly as you add dice. Mixing dice of different sizes also makes things much more difficult.
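To put rough numbers on that growth: for n six-sided dice there are C(n + 5, 5) multisets versus 6^n ordered rolls. A quick Python tabulation (my own illustration):

```python
from math import comb

for n in (4, 10, 20):
    num_multisets = comb(n + 5, 5)  # sorted rolls of n six-sided dice
    num_ordered = 6 ** n            # unsorted (ordered) rolls
    print(n, num_multisets, num_ordered)
```

Even though multisets grow far more slowly than ordered rolls, at 20 dice there are already over 50,000 of them to enumerate.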
Fortunately, it turns out that, as long as you can express the dice mechanic as an incremental calculation that is told, one outcome at a time, how many dice in each pool rolled that outcome, without carrying too much information from outcome to outcome, it's possible to compute the solution much more efficiently. It's also possible to handle mixed standard dice without much loss in efficiency. I've implemented this approach in my Icepool Python package. Here's a script:
import icepool
from icepool import d4, d6, d8, d10, d12, EvalPool

class GreenRed(EvalPool):
    def next_state(self, state, outcome, green, red):
        # State is the number of top-two dice for green and red.
        top_green, top_red = state or (0, 0)
        # If there are remaining places in the top two...
        remaining_top_two = 2 - (top_green + top_red)
        if remaining_top_two > 0:
            # Compute the number of non-eliminated dice that rolled this outcome.
            net = green - red
            # Then add them to the winning team's top two.
            if net > 0:
                top_green += min(net, remaining_top_two)
            elif net < 0:
                top_red += min(-net, remaining_top_two)
        return top_green, top_red

    def final_outcome(self, final_state, *pools):
        top_green, top_red = final_state
        if (top_green > 0) and not (top_red > 0):
            return 2
        elif (top_red > 0) and not (top_green > 0):
            return 0
        else:
            return 1

    def direction(self, *_):
        # See outcomes in descending order.
        return -1

green_red = GreenRed()

# The argument lists are implicitly cast to pools.
print(green_red.eval([d10, d8], [d6, d8]))
Denominator: 3840

Outcome | Weight | Probability
      0 |    265 |   6.901042%
      1 |   2784 |  72.500000%
      2 |    791 |  20.598958%
We can handle even much larger pools. Here are two entire dice sets (10 dice total) on each side:
print(green_red.eval([d12, d10, d8, d6, d4, d12, d10, d8, d6, d4], [d12, d10, d8, d6, d4, d12, d10, d8, d6, d4]))
Denominator: 281792804290560000

Outcome | Weight             | Probability
      0 |  67701912081930556 |  24.025423%
      1 | 146388980126698888 |  51.949155%
      2 |  67701912081930556 |  24.025423%
You can try this in your browser using this JupyterLite notebook. Fair warning, though: I'm currently doing a major revision to the package.
Best Answer
Here's a script using my Icepool Python library. Rather than considering entire sets of rolls at once, Icepool uses the strategy of updating a running total based on how many dice in each pool rolled each number.
Denominator: 1679616
which matches posita's results.

result contains the distribution of full surviving action sets (à la "player 1 is left with 3 action dice: 6, 3, 3"), but the number of possible such sets grows rapidly with the number of action dice (e.g. this example results in 252 rows), so I won't reproduce the table here. You can try the script in your browser here.