It's amusing how the value of these random numbers changes smoothly as the bound changes!
let mut rng = StdRng::seed_from_u64(seed);
let number = rng.random_range(0..bound);
As Lemire taught us, there are two ways to reduce a word of random bits into a range:
- extract from the little end, using the remainder from division: sample % bound
- extract from the big end, using a long multiply sample * bound and taking the upper word of the result
The latter is usually preferred because multiplication is much faster than division, and because if your random number generator isn't great, it's often worse in its low bits. (But there's a caveat if the random numbers come from a hash function, because a common crappy integer hash function is the identity function, which leaves the top bits empty. Fibonacci hashing to the rescue!)
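The big-end method can be sketched in a few lines. This is a minimal illustration for a 32-bit sample, and it omits the bias-rejection step of Lemire's full algorithm; the function name is mine:

```rust
// Minimal sketch of the multiplication method, assuming a 32-bit sample.
// Lemire's real algorithm adds a rejection step to remove the slight bias;
// this version skips it for clarity.
fn multiply_reduce(sample: u32, bound: u32) -> u32 {
    // Widen to 64 bits so the full double-width product is available,
    // then keep the upper 32-bit word. This scales sample / 2^32 into
    // the range [0, bound).
    ((sample as u64 * bound as u64) >> 32) as u32
}

fn main() {
    // The top bits of the sample dominate the result:
    assert_eq!(multiply_reduce(u32::MAX, 10), 9); // sample ≈ 1.0 → 9
    assert_eq!(multiply_reduce(1 << 31, 10), 5); // sample ≈ 0.5 → 5
    assert_eq!(multiply_reduce(0, 10), 0);
}
```

Note the whole trick is the widening cast: the product of two 32-bit values fits exactly in 64 bits, so no overflow is possible.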
The multiplication method treats the random bits as a fixed point number 0.0 <= sample < 1.0 with the binary point to the left of its most significant bit. The multiplication produces a result twice the width of the multiplicands with the binary point in the middle. (Then you discard the lower half or examine it to remove bias.) So, when the code in the blog post calculates
let ratio = number as f64 / bound as f64;
it's recalculating an approximation of the fixed point random sample, which is why the ratio is always roughly the same.
If you use sample % bound instead then the ratio bounces around randomly as the bound changes even when the sample remains the same.
Another way of looking at it is that the multiplication method uses the upper bits and discards the lower bits of the sample from its result, whereas the remainder method uses the lower bits but folds the upper bits into its result.
fanf | 16 hours ago
tomsmeding | 12 hours ago
No, this gives you a >99% chance that their WireGuard keys are very close to each other. Which may be because they're the same person! Or not.