First off – not the same question as this (which is great). I need the exponent to be fractional as well. Something like 2.5^0.75
Solidity – Efficient Ways to Compute Exponentiation of Fractional Numbers
Tags: evm, solidity
Related Solutions
Approximate solution using the binomial expansion
The following is a decent, low-cost approximation:
// Computes `k * (1+1/q) ^ n`, with precision `p`. The higher
// the precision, the higher the gas cost. It should be
// something around the log of `n`. When `p == n`, the
// precision is absolute (sans possible integer overflows). <edit: NOT true, see comments>
// Much smaller values are sufficient to get a great approximation.
function fracExp(uint k, uint q, uint n, uint p) public pure returns (uint) {
    uint s = 0;
    uint N = 1;
    uint B = 1;
    for (uint i = 0; i < p; ++i) {
        s += k * N / B / (q**i);
        N = N * (n - i);
        B = B * (i + 1);
    }
    return s;
}
This function computes k * (1+1/q) ^ n. So, for example, to compute 2500 * 1.01 ^ 137, we could use fracExp(2500, 100, 137, 8). This outputs 9769, which is very close to the real value, 9771.657061221898.
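For reference, the same loop can be mirrored in Python with integer division, which makes it easy to check the approximation off-chain (this is a sketch of the algorithm, not the on-chain code):

```python
def frac_exp(k, q, n, p):
    """Approximate k * (1 + 1/q) ** n using the first p terms
    of the binomial expansion, with integer arithmetic only."""
    s = 0
    N = 1  # running product n * (n-1) * ... (numerator of C(n, i))
    B = 1  # running factorial i!    (denominator of C(n, i))
    for i in range(p):
        s += k * N // B // q**i
        N *= n - i
        B *= i + 1
    return s

print(frac_exp(2500, 100, 137, 8))  # -> 9769, vs. exact 9771.657...
```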
Explanation
1. Representing rational numbers with fixed points
The main issue is that Ethereum doesn't have floating-point numbers. The usual approach to representing real numbers is fixed-point arithmetic: we multiply everything by a large constant and assume there is an implicit decimal point. For example, with a scale of 1000, 1.002 is represented as 1002. For most arithmetic operations we don't really need floats; examples:
- Addition:
1.1 + 1.2 == 11/10 + 12/10 == 23/10 == 2.3
- Multiplication:
1.1 * 2 == 11/10 * 2 = 22/10 == 2.2
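In Python terms, with a scale factor of 10 (one implied decimal place, chosen just for this illustration), those two operations look like this:

```python
SCALE = 10  # one implied decimal place

def fp_add(a, b):
    # Addition needs no rescaling: (a/S) + (b/S) == (a+b)/S
    return a + b

def fp_mul(a, b):
    # Multiplication picks up an extra factor of S, so divide it out
    return a * b // SCALE

assert fp_add(11, 12) == 23   # 1.1 + 1.2 == 2.3
assert fp_mul(11, 20) == 22   # 1.1 * 2.0 == 2.2
```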
2. Fixed point exponentiation
For exponentiation, though, that approach doesn't work so well. To implement it, we'd need to exponentiate the numerator and the denominator, and then divide. Example: 1.1^75 == (11/10)^75 == (11^75)/(10^75) == 1271. The problem is that, to get to the result, we needed to compute the intermediate value 11^75, which is larger than 2^256, resulting in an integer overflow. Both operands and the result could fit in a 16-bit uint, yet we overflow the 256-bit word size!
A way to avoid the overflow is to implement exponentiation as a loop: at each pass, you multiply the number and re-normalize it so it doesn't get too big. The issue with that approach is that it is linear in the exponent, so it might be prohibitively expensive. There are more efficient exp implementations, such as exponentiation by squaring, but I'm not sure that approach allows normalizing the intermediate terms.
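The loop-and-renormalize idea can be sketched like so (the scale of 10^6 is an arbitrary choice for this illustration):

```python
SCALE = 10**6  # six implied decimal places

def fp_pow(base, exp):
    """Compute base**exp in fixed point, renormalizing after every
    multiplication so intermediate values stay near the result's size."""
    result = SCALE  # 1.0
    for _ in range(exp):  # linear in the exponent
        result = result * base // SCALE
    return result

# 1.1 ** 75 ~= 1271.895; the repeated floor divisions cost a little precision
print(fp_pow(1_100_000, 75) // SCALE)  # -> 1271
```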
3. Avoiding big intermediate values with binomial expansion
Let's look again at an example of the kind of computation we're trying to perform:
1.01 ^ 100
We can rewrite this as:
(0.01 + 1) ^ 100
Applying the binomial expansion, that becomes:
1 * 1 ^ 100 * 0.01 ^ 0
+ 100 * 1 ^ 99 * 0.01 ^ 1
+ 100*99/2 * 1 ^ 98 * 0.01 ^ 2
+ 100*99*98/2/3 * 1 ^ 97 * 0.01 ^ 3
+ 100*99*98*97/2/3/4 * 1 ^ 96 * 0.01 ^ 4
...
+ 1 * 1 ^ 0 * 0.01 ^ 100
Since 1^N is always 1, this is the same as:
1 * 0.01 ^ 0
+ 100 * 0.01 ^ 1
+ 100*99/2 * 0.01 ^ 2
+ 100*99*98/2/3 * 0.01 ^ 3
+ 100*99*98*97/2/3/4 * 0.01 ^ 4
...
+ 1 * 0.01 ^ 100
Notice that each new line contains progressively smaller terms. Because of that, we can drop most of the tail without noticeably affecting the result, which gives us a reasonable approximation. The function posted here computes the first p lines of that series.
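To see how quickly the truncated series converges, here is a quick check with Python floats (math.comb gives the binomial coefficients):

```python
import math

def binomial_approx(x, n, p):
    """Sum the first p terms of the binomial expansion of (1 + x) ** n."""
    return sum(math.comb(n, i) * x**i for i in range(p))

exact = 1.01 ** 100
for p in (2, 4, 8):
    approx = binomial_approx(0.01, 100, p)
    print(p, approx, abs(approx - exact))
```

Already at p == 8 the error is down in the fifth decimal place.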
No, when a revert occurs, the transaction's state changes are thrown away, including any event logs.
This behaviour is defined in the Yellow Paper here:
The A0 bit when w = REVERT is the formal version of what I said above: no logs are recorded in the case of a revert.
If you are running your own node, of course you can do whatever you like. Geth, for example, can provide very detailed transaction traces that you can look at to see what happened, even for aborted transactions. But none of that will get stored on the main blockchain.
Best Answer
The code of a good solution:
Full working code at: https://github.com/Muhammad-Altabba/solidity-toolbox/blob/master/contracts/FractionalExponents.sol
Some explanation
A floating point number x can be represented as a ratio of two integers: a/b.
The problem
The problem is that if the code is written directly as (a/b) ^ (c/d), the accuracy of the result would be a mess, because the integer divisions a/b and c/d throw away the fractional parts before the exponentiation even starts.
As another attempt, the expression (a/b) ^ (c/d) could be evaluated as a ^ (c/d) / b ^ (c/d), or as (a^c)^(1/d) / (b^c)^(1/d). The problem with this is that raising a to c, or even to c/d, could easily overflow uint256, even though the final result could fit in a small uint8. (This is explained here.)
The Solution
There is an approximation for x ^ y based on the identity:
x ^ y == e ^ (log(x) * y)
The method depends on this equation to compute the power function, with base = _baseN / _baseD and exp = _expN / _expD (note that FIXED_1, which is used in the code, just shifts the numbers left to provide better accuracy for the divisions).
The key point is that computing log(base) * exp first, and only then raising e to it, keeps the intermediate values small. That prevents the overflow that the formulas discussed earlier, a ^ (c/d) / b ^ (c/d) or (a^c)^(1/d) / (b^c)^(1/d), can easily produce.
As for optimalLog vs. generalLog and optimalExp vs. generalExp in the provided code: each pair computes log and e ^ either with an optimized routine or with a general approximation.
Thanks to leonprou for his valuable comments on the question.