The code of a good solution:
function power(uint256 _baseN, uint256 _baseD, uint32 _expN, uint32 _expD) internal view returns (uint256, uint8) {
    assert(_baseN < MAX_NUM);
    uint256 baseLog;
    uint256 base = _baseN * FIXED_1 / _baseD;
    if (base < OPT_LOG_MAX_VAL) {
        baseLog = optimalLog(base);
    } else {
        baseLog = generalLog(base);
    }
    uint256 baseLogTimesExp = baseLog * _expN / _expD;
    if (baseLogTimesExp < OPT_EXP_MAX_VAL) {
        return (optimalExp(baseLogTimesExp), MAX_PRECISION);
    } else {
        uint8 precision = findPositionInMaxExpArray(baseLogTimesExp);
        return (generalExp(baseLogTimesExp >> (MAX_PRECISION - precision), precision), precision);
    }
}
Full working code at: https://github.com/Muhammad-Altabba/solidity-toolbox/blob/master/contracts/FractionalExponents.sol
Some explanation
A floating point number x can be represented as a ratio of two integers: a/b. If x = a/b and y = c/d, then x ^ y = (a/b) ^ (c/d).
The problem
The problem is that if the code is written directly as (a/b) ^ (c/d), the accuracy of the result suffers badly: the integer divisions a/b and c/d truncate away the fractional parts before the exponentiation even starts.
As another attempt, the expression (a/b) ^ (c/d) could be evaluated as a ^ (c/d) / b ^ (c/d), or equivalently as ((a^c)^(1/d)) / ((b^c)^(1/d)). The problem with this is that raising a to c, or even to c/d, could easily overflow uint256, even though the final result might fit in a small uint8. (this is explained here)
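The overflow risk is easy to check numerically. A small Python sketch (purely illustrative, using Python's unbounded integers; not contract code):

```python
# The intermediate value a^c can exceed uint256 even when the
# final result is tiny. Example: 1.1^75 computed as 11^75 / 10^75.
a, c = 11, 75
intermediate = a ** c          # fine in Python, would overflow a uint256
uint256_max = 2 ** 256 - 1

print(intermediate > uint256_max)   # True: the intermediate overflows uint256
print(intermediate // (10 ** c))    # 1271: yet the final result is small
```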
The Solution
There is an approximation for x ^ y as follows:
x ^ y = e ^ (log(x) * y)
The method above depends on this equation to compute the power function, as follows:
(_baseN / _baseD) ^ (_expN / _expD) = e ^ (log(base) * exp)
where base = _baseN / _baseD and exp = _expN / _expD. (Note that FIXED_1, which is used in the code, just shifts the number to the left in order to provide better accuracy for the division.)
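The identity can be sanity-checked in floating point with a quick Python sketch (illustrative only; the contract performs the same steps in fixed point, and the sample fraction values here are arbitrary):

```python
import math

# x ^ y == e ^ (log(x) * y), evaluated for a fractional base and exponent
baseN, baseD = 3, 2   # base = 1.5  (arbitrary sample values)
expN, expD = 7, 4     # exp  = 1.75

base = baseN / baseD
exp_ = expN / expD

direct = base ** exp_
via_log = math.exp(math.log(base) * exp_)

print(abs(direct - via_log) < 1e-12)  # True: both routes agree
```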
The key point here is that calculating log(base) * exp before raising e to it prevents overflow, in contrast to the formulas discussed earlier, which can easily overflow: a ^ (c/d) / b ^ (c/d) or ((a^c)^(1/d)) / ((b^c)^(1/d)).
As for optimalLog vs. generalLog and optimalExp vs. generalExp in the provided code: these are just two ways of calculating the log and e ^ functions, either with an optimized or a more general approximation.
Thanks leonprou for his valuable comments on the question.
Seeing your requirement, I would do:
address[] public addressList;
mapping (address => bool) public userAddr;
function insertAddress(address[] memory addressUser) public returns (bool) {
    for (uint i = 0; i < addressUser.length; i++) {
        address addr = addressUser[i];
        // check that addr is unique within addressUser
        // (mappings cannot be declared in memory, so use an inner loop)
        for (uint j = 0; j < i; j++) {
            require(addressUser[j] != addr);
        }
        // if addr is not already in the list
        if (userAddr[addr] == false) {
            userAddr[addr] = true;
            addressList.push(addr);
        }
    }
    return true;
}
Edit:
After seeing another contract, you could use the method here and require that all addresses are sent in increasing order. It is probably less costly in terms of gas, since there is less work per element.
That would do:
function insertAddress(address[] memory addressUser) public returns (bool) {
    address lastAddr = address(0);
    for (uint i = 0; i < addressUser.length; i++) {
        address addr = addressUser[i];
        // check that addressUser is unique
        // by forcing all addresses to be sent
        // in increasing order
        require(addr > lastAddr);
        lastAddr = addr;
        // if addr is not already in the list
        if (userAddr[addr] == false) {
            userAddr[addr] = true;
            addressList.push(addr);
        }
    }
    return true;
}
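The sorted-input trick is language-agnostic: a strictly increasing batch cannot contain duplicates. A Python sketch of the same logic (hypothetical helper for illustration, with addresses modeled as integers):

```python
def insert_addresses(address_list, user_addr, batch):
    """Reject batches that are not strictly increasing, then insert new
    entries. Strictly increasing input implies the batch has no duplicates."""
    last = -1  # stands in for address(0)
    for addr in batch:
        if addr <= last:            # require(addr > lastAddr)
            raise ValueError("batch not strictly increasing")
        last = addr
        if addr not in user_addr:   # if addr is not already in the list
            user_addr.add(addr)
            address_list.append(addr)
    return True

addrs, seen = [], set()
insert_addresses(addrs, seen, [1, 5, 9])
insert_addresses(addrs, seen, [2, 5, 7])  # 5 is already known: silently skipped
print(addrs)  # [1, 5, 9, 2, 7]
```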
Best Answer
Approximate solution using the binomial expansion
The following is a decent, low-cost approximation:
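The function's code appears to be missing from this excerpt. A Python transcription of the algorithm described below (summing the first p terms of the binomial series for k * (1+1/q)^N, using integer arithmetic only, as a contract would) reproduces the quoted figures:

```python
def frac_exp(k, q, n, p):
    """Approximate k * (1 + 1/q) ** n by summing the first p terms of its
    binomial expansion, using integer arithmetic only."""
    s = 0
    N = 1  # running product n * (n-1) * ... : numerator of C(n, i)
    B = 1  # running factorial i!            : denominator of C(n, i)
    for i in range(p):
        s += k * N // B // (q ** i)  # i-th term: k * C(n, i) / q^i
        N = N * (n - i)
        B = B * (i + 1)
    return s

print(frac_exp(2500, 100, 137, 8))  # 9769, close to the true 9771.657...
```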
This function computes k * (1+1/q) ^ N. So, for example, to compute 2500 * 1.01 ^ 137, we could use fracExp(2500, 100, 137, 8). This outputs 9769, which is very close to the real value, 9771.657061221898.
Explanation
1. Representing rational numbers with fixed points
The main issue is Ethereum doesn't have floating points. The usual approach to represent real numbers is by using fixed points. That means we multiply everything by a large number and assume there is an implicit dot. For example,
1.002 is represented as 1002. For most arithmetic operations we don't really need floats; examples:
1.1 + 1.2 == 11/10 + 12/10 == 23/10 == 2.3
1.1 * 2 == 11/10 * 2 == 22/10 == 2.2
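In Python terms (illustrative), with a scale factor of 10, i.e. one implicit decimal digit:

```python
SCALE = 10  # fixed-point scale: one implicit decimal digit

a = 11  # represents 1.1
b = 12  # represents 1.2

# addition keeps the scale
print(a + b)           # 23 -> 2.3

# multiplication by a plain integer keeps the scale
print(a * 2)           # 22 -> 2.2

# multiplying two fixed-point numbers needs a re-normalizing division
print(a * b // SCALE)  # 13 -> 1.3 (true value 1.32, truncated)
```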
2. Fixed point exponentiation
For exponentiation, though, that approach doesn't work so well. To implement it, we'd need to exponentiate the numerator and the denominator, and then divide. Example:
1.1^75 == (11/10)^75 == (11^75)/(10^75) == 1271. The problem is that, to get to the result, we needed to compute the intermediate value 11^75, which is larger than 2^256, resulting in an integer overflow. Both operands and the result could fit in a 16-bit uint, yet we overflow the 256-bit word size!
A way to avoid the overflow is to implement exponentiation as a loop; at each pass, you multiply the number and re-normalize it so it doesn't get too big. The issue with that approach is that it is linear in the exponent, so it might be prohibitively expensive. There are more efficient exp implementations, such as exponentiation by squaring, but I'm not sure that approach allows normalizing the intermediate terms.
3. Avoiding big intermediate values with binomial expansion
Let's look again at an example of the kind of computation we're trying to perform:
2500 * 1.01 ^ 137
We can rewrite this as:
2500 * (1 + 1/100) ^ 137
Applying the binomial expansion, that becomes:
2500 * (1^137 + 137 * 1^136 * (1/100) + C(137,2) * 1^135 * (1/100)^2 + C(137,3) * 1^134 * (1/100)^3 + ...)
Since 1^N is always 1, this is the same as:
2500 * (1 + 137/100 + C(137,2)/100^2 + C(137,3)/100^3 + ...)
Notice that each new term is progressively smaller than the one before it. Due to that, we're able to drop a good chunk of the later terms without affecting the result considerably. That gives us a reasonable approximation. The solution posted here computes the first p terms of that series.