Sunday, 31 October 2010

BSDNT - v0.21 PRNGs

In this update we are going to replace the el cheapo random generator in
bsdnt with something of higher quality.

Some time ago, Brian Gladman brought to my attention the fact that on 32
bit machines, the test code for bsdnt actually caused Windows to hang!

The issue required some sleuthing work on Brian's part to track down.
He eventually discovered the cause of the problem, and it was, oddly
enough, the pseudo-random number generator (PRNG) I had used.

Brian suspected the PRNG immediately because of his past experience as a
professional cryptographer. In fact, it turns out that PRNG's of the
kind I had used aren't particularly good even if they don't have bugs!

The particular kind of PRNG I had used is called a linear congruential
PRNG. They start with the PRNG initialised to some random seed value,
n = seed. Then each time they are called, they apply the transformation
n = (n*c1 + c2) % p for some explicit constants c1, c2 and a large
enough "prime" p.

One can take c2 = 0 in the transformation and it is also common to see
p = 2^b for some b (e.g. b = WORD_BITS, and yes, I know p = 2^b is not
usually prime). When p = 2^b it is usually the case that the top half of
the bits output have reasonable random properties, but the bottom half
do not. In this case it is acceptable to stitch two LC PRNG's together to
give the full number of bits.

When p is an actual prime, these PRNG's are called prime modulus linear
congruential PRNG's and they aren't too bad when implemented properly.
They still fail to meet some standards of random quality.

To return a whole word of random bits, one either needs to use a prime
p that is larger than a word, which is usually impractical, or again
stitch the output of two prime modulus LC PRNG's together.

However, one needs to be careful, as the period of the generator is p-1,
so if one is on a 32 bit machine, it doesn't do to use a prime p just
over half the size of a word (the first mistake I made), otherwise the
period is just over 65536. That isn't too good for generating random
values for test code!

But how was my LC PRNG causing Windows to crash!? The reason was that
in some of the test functions we required bignums that didn't overflow
when summed together. This of course depends almost entirely on the top
few bits of the bignums being added together.

The problem was that in the expression n = (n*c1 + c2) % p, I was using
values of c1 and c2 which were not reduced mod p. It turns out that this
is crucial to correct operation. It might seem that the result ends up
being reduced mod p anyway, and indeed it would be if n*c1 fit in a word.
However, because it doesn't, you actually get ((n*c1 + c2) % 2^32) % p,
which causes the binary representation of the value generated to be quite
regular.

Anyhow, on Windows (and probably on other 32 bit machines) the test code
generates length 90 bignums over and over at some point, looking in vain
to find pairs of such bignums which when added do not overflow. As these
are garbage collected at the end of the test function, memory starts
filling up with the orphaned bignums that are discarded by the test code
as it looks for appropriate values. This eventually overwhelms the heap
allocator on Windows and causes the entire OS to crash!

The problem of writing decent PRNG's has been studied extensively, and one
expert in the subject is George Marsaglia. He famously turned up on a
usenet forum in January of 1999 and dumped not one, but piles of fast, high
quality PRNG's which do not suffer from the problems that other PRNG's do.

Amazingly, many of the PRNG's in common usage today are either written by
George, or based on ones he wrote. So he's some kind of legend!

Anyhow, we will make use of two of his PRNG's, Keep It Simple Stupid (KISS)
and Super KISS (SKISS) and a third PRNG called Mersenne Twister, due to
Makoto Matsumoto and Takuji Nishimura in 1997.

George's generators are in turn based on some simpler PRNG's. He begins by
defining a linear congruential generator, with c1 = 69069 and c2 = 1234567.
This is taken mod p = 2^32 (yes, it's not prime). This has good properties
on its top 16 bits, but not on its bottom 16 bits, and for this reason had
been widely used before George came along. This generator has period 2^32.

Next he defines a pair of multiply with carry (MWC) generators. These are
of the form n = c1*lo(n) + hi(n) where lo(n) is the low 16 bits of n, hi(n)
is the high 16 bits and c1 is an appropriately chosen constant.

He stitches together a pair of these MWC PRNG's mod 2^16 to give 32 random
bits. For simplicity we'll refer to this combined random generator as MWC.
This has a period of about 2^60.

Thirdly he defines a (3-)shift-register generator (SHR3). This views the
value n as a binary vector of 32 bits and applies linear transformations
generated from 32 x 32 matrices L and R = L^T according to
n = n(I + L^17)(I + R^13)(I + L^5) where I is the 32 x 32 identity matrix.

In order to speed things up, special transformations are chosen that can
be efficiently implemented in terms of XOR and shifts. This is called an
Xorshift PRNG. We'll just refer to it as SHR3.

Now given appropriate seed values for each of these PRNG's Marsaglia's
KISS PRNG is defined as (MWC ^ CONG) + SHR3. This generator passes a whole
slew of tests and has a period of 2^123. In this update we make it the
default random generator for bsdnt.

Super KISS is a random generator defined by George later in 2009. It gives
immense periods by adding together the output of three PRNG's, one with a
massive order. It is basically defined by SKISS = SUPR + CONG + SHR3.

Here, the new generator SUPR is based on a prime p = a*b^r + 1 such that
the order of b mod p has magnitude quite near to p - 1.

It starts with a seeded vector z of length r, all of whose entries are
less than b and an additional value c which is less than a.

One then updates the pair (z, c) by shifting the vector z to the left by
one place and setting the right-most entry to (b - 1) - (t mod b), where
t = a*z1 + c and z1 is the entry shifted out at the left of z. Then c is
set to floor(t/b).

Naturally in practice one uses b = 2^32 so that all the intermediate
reductions mod b are trivial.

As with most generators which have massive periods the "state" held by this
generator is large. It requires data mod p for a multiprecision p.

Note the similarity with the MWC generator except for the "complement" mod
b that occurs. This is called a CMWC (Complemented-Multiply-With-Carry)
generator.

George proposed using the prime p = 640*b^41265+1, where the order of b is
5*2^1320481. The period of the CMWC generator is then greater than
2^1300000.

Of course, at each iteration of the algorithm, 41265 random words are
generated in the vector. Once these are exhausted, the next iteration of
the algorithm is made.

The algorithm SUPR in the definition of SKISS is thus just a simple
array lookup to return one of the words of the vector z. Each time SKISS
is run, the index into the array is increased until all words of the array
are exhausted, at which point the CMWC algorithm is iterated to refill the
array.

We now come to describing the Mersenne twister.

It is based on the concept of a feedback shift register (FSR). An FSR shifts
its value left by 1 bit, feeding at the right some linear combination of the
bits in its original value. The Mersenne twister is conceptually a
generalisation of this.

The difference with the Mersenne twister is that the "feedback" is effected
by a certain "twist": a "linear transformation" A of a certain specific form
is applied, with the additions in the matrix multiplication replaced by xor.
The twist can be described more straightforwardly, and we give the more
straightforward description below.

One sets up a Mersenne twister by picking a word size w in bits, a
recurrence degree n, a "middle word" 1 <= m < n and a number of bits for a
bitmask, 0 <= r < w. One picks these values so that p = 2^(n*w - r) - 1 is
a Mersenne prime (hence the name of this PRNG). Given a vector of bits
a = [a0, a1, ..., a{w-1}] of length w and a sequence x of words of w bits,
the Mersenne twister is defined by the recurrence relation
x[k+n] = x[k+m] ^ ((upper(x[k]) | lower(x[k+1])) A), where upper and lower
return the upper w - r and lower r bits of their operands, and where A is
the "twist" spoken of, defined below in terms of a. Of course ^ here is the
xor operator, not exponentiation. For a vector X of w bits, XA is given by
X>>1 if X[0] == 0, otherwise by (X>>1) ^ a.

Some theory is required to find an A such that the Mersenne twister will have
maximum theoretical period 2^(n*w - r) - 1.

To finish off, the Mersenne twister is usually "tempered". This tempering
simply mangles the bits in a well understood way to iron out some of the
known wrinkles in the MT algorithm.

Only a couple of sets of parameters are in common use for Mersenne twisters.
These are referred to as MT19937 for 32 bit words and MT19937-64 for 64 bit
words.

As with all PRNG's, there is a whole industry around "cracking" these things.
This involves starting with a short sequence of values from a PRNG and
attempting to find the starting constants and seed values.

Obviously, in cryptographic applications, there is not much point generating
"secure" keys with a PRNG with a single source of entropy. Even if your key
is generated by multiplying primes of many words in length, if those words
were generated from a PRNG seeded from the current time, it may only take
a few iterations and a good guess as to which PRNG you used, to determine
the constants used in the PRNG and thus your entire key. And that's
irrespective of which constants you chose in your PRNG!

So if you are doing crypto, you need to take additional precautions to
generate secure keys. Just seeding a PRNG from the time probably won't cut
it!

Some PRNG's are more "secure" than others, meaning that knowing a
few output values in a row doesn't give terribly much information about
which values may follow. But if you rely on a PRNG to be secure, you
are essentially betting that, because neither you nor anyone else who has
written about the subject knows how to get the next few values, no one at
all knows. Of course one needs to ask oneself if they would tell you if
they did.

Another assumption one should never make is that no one has the computing
power to brute force your PRNG.

Some PRNG's are designed for cryptographic applications, and maybe one can
believe that these are "safe" to use, for some definition of safe.

Anyhow, we only care about random testing at this point. In today's update
32 and 64 bit KISS, SKISS and MT PRNG's are added in the directory rand.
Our randword, randinit, and randclear functions are all replaced with
appropriate calls to KISS functions.

There is also an option to change the default PRNG used by bsdnt. Is it my
imagination or does the test code now run faster, even on a 64 bit machine!

At some point we will add some tests of the new PRNG's. These will compare
the outputs with known or published values to check that they are working as
designed for a large number of iterations.

Brian Gladman contributed to this article and also did most of the work
in implementing Marsaglia's PRNG's in bsdnt. The Mersenne twisters were
originally written by Takuji Nishimura and Makoto Matsumoto and made available
under a BSD license. Brian did most of the work in adapting these for bsdnt.

The code for today's update is here: v0.21

Previous article: v0.20 - redzones

Saturday, 30 October 2010

BSDNT - v0.20 redzones

In this update we implement another improvement to the test code in bsdnt. I don't know
what the correct name is, but I call them redzones.

The basic idea is this: suppose you have a function nn_blah say, and it writes to an nn_t
b say. If it writes well beyond the allocated space for b, then almost certainly a
segfault will occur. But what if it only writes a word or two before the beginning or
after the end of the allocated space? Very likely this will cause a segfault only on
some systems, depending on the granularity of the heap allocator and depending
on what other bsdnt data might be in the overwritten space!

So what if we could detect this kind of error? Well, that is what redzones hope to do.
Essentially if an nn_t b is allocated with m words of space, when redzones are turned on
it allocates m + 2C words of space for some small constant C. It then fills the first
and last C words of b with known words of data (usually some recognisable pattern of bits).

When the garbage collector cleans up, it examines the redzones to ensure that they have
not been altered. If they have, an error is raised.

The nn_t b is set to point just after the first C words, which contain the redzone, and in
every other respect acts like a normal nn_t. The user needn't know that an extra C words
of data were allocated immediately before and after the length m nn_t they requested.
Nor do they need to be aware of the checking that goes on when the nn_t is finally cleaned
up, that the redzones haven't been touched.

Of course it's nice to be able to turn redzones off sometimes, when testing the library.
Therefore I've added a configure option -noredzones which turns off redzones if they are
not required. This works by setting a #define WANT_REDZONES 0 in config.h. The
memory allocator for nn_t's and the garbage collector both operate differently if redzones
are turned on.

At present, the only way to allocate memory for nn_t's in test code is to use
randoms_of_len, so it is convenient to rewrite this to call a function alloc_redzoned_nn
instead of malloc, and for the garbage collector to call free_redzoned_nn. These new
functions are defined in test.c.

The only difference when WANT_REDZONES is set in config.h is that
REDZONE_WORDS, which is defined in test.h, is changed from 0 to 4 words
(meaning 4 redzone words are to be allocated at each end of a redzoned
nn_t). Having redzones of length 0 is the same as not having them at all,
so this makes the functions easy to write.

Also in test.h REDZONE_BYTE is defined to the hexadecimal byte 0xA which has binary bit
pattern 1010, i.e. alternating ones and zeroes. This is the value that is placed into the
redzones byte-by-byte before the nn_t is used. At the end, when they are cleaned up, the
garbage collector examines the redzones to ensure they are still filled with these bytes.

Fortunately checking redzones does not dramatically slow down our test code, and no new
test failures result. This means it is highly likely that our nn_t functions do not overwrite
their bounds. To check that the new redzones code works, it is a simple matter of mocking
up a broken function which overwrites its bounds. The new code complains loudly as it
should, unless redzones are switched off at configure time.

The code for today's update is here: v0.20

Previous article: v0.19 - asserts
Next article: v0.21 - prngs

Monday, 25 October 2010

BSDNT - v0.19 asserts

About a week ago I got enthused to work on another coding project I've been
wanting to experiment with for a long time. I discovered that it was highly
addictive and I just couldn't put it down. It's also given me some interesting
ideas for a higher level interface to bsdnt. But more on that later when we
start working on it.

Unfortunately in that week of time there have been no bsdnt updates.

Moreover, on the weekend my main computer died (physical hard drive failure).
I pulled out my backup machine and Windows wanted to install 3 months of
"important updates". Naturally this caused the machine to crash, I was unable
to recover to the restore point it set, the startup repair didn't work and
the only solution was a format and reload.

*Fifteen and a half hours* later I had reinstalled Windows and it had finally
finished putting the approximately 165 "important updates" back on!!

Unfortunately all the bsdnt blog articles I had meticulously prepared in
advance were lost. Thus I am regenerating what I can from the diff between
revisions of bsdnt. Sorry if they end up being shorter than past updates.

Fortunately I did not lose any of the code I wrote, as that was backed up in
a git repo on an external server!

Anyhow, in this update we make a very simple change to bsdnt, again in an
attempt to improve the test quality of the library. We add asserts to the
code.

An assert is a check that is made at runtime in live code to test if some
predefined condition holds. If the assert fails, an error message is printed
specifying the line of code where the assert is located and what the
condition was that failed.

Now, I am not personally a great believer in asserts. As they are runtime
checks, they require computing cycles, which is just a no-no for a bignum
library. The other option is to turn them off when not testing code. However,
this simply leads to the asserts rarely being run when they are needed.

The other problem with asserts is that they pollute the code, making the
source files longer and appear more complex.

However, there is one situation where I believe they can be very helpful,
and that is in checking the interface of functions within a library and that
it is being respected both in intra-library calls and by the test code for
the library.
Specifically, asserts are useful for checking that valid inputs have been
passed to the functions, e.g. you might have a restriction that a Hensel
modulus be odd. Adding an assert allows you to test that all the moduli
you pass to the function in your test runs are in fact odd.

The main advantage in putting asserts into the code is that it forces you
to think through what all the conditions should be that you assert. In
adding asserts to the code in bsdnt I discovered one function in which the
test code was pushing the code to do things I didn't write it to cover.
This forced me to either rewrite the test, or drop that as a condition (I
think I chose the former for consistency with other related functions in
bsdnt).

Of course we do not want to consume cycles when the library is run by the
end user, and so we make asserts optional. This is done using a configure
switch. By default the macro WANT_ASSERT is set to 0 in a file config.h by
configure. However, if the user passes the option -assert to configure, it
sets the value of this define to 1.

A macro ASSERT is then defined in helper.h which is either an empty macro
in the default case or is an alias for the C assert function if WANT_ASSERT
is set to 1 in config.h.

Of course we have to remember to turn asserts on to run the test code, and
this really highlights their main weakness. As I mentioned, the asserts I
added did clarify the interface, but I don't believe they showed up any
bugs in bsdnt. With this expectation, asserts can be a useful tool.

The code for today's update is here: v0.19

Next article: v0.20 - redzones

Saturday, 16 October 2010

BSDNT - v0.18 printx_word, nn_printx

It is time we improved our test code again. We'll spend a few days updating
things to make improvements in the way we test.

Today's update is quite straightforward. We currently have no way of printing
nn_t's. This is quite inconvenient when it comes to the test code, where
little to no diagnostic information is printed at all. In particular, we
aren't printing out any of the multiple precision integers for examination
when a test fails.

Now, it is actually quite a difficult job to print bignum integers in decimal.
In fact, as far as I can see, one requires a function which allocates temporary
space to efficiently print integers. This is an interesting challenge:
is there an algorithm to convert from binary to decimal and print the
result, with just O(1) temporary space, whatever its time complexity?

I think it may be possible if one allows the input to be destroyed. If so, a
subsidiary question would be to do the same thing without destroying the
input. I doubt that is possible, but I do not have a proof. Of course, to be
practical, we'd require an algorithm which doesn't destroy the input.

To get around this issue, we'll start with a simple nn_printx algorithm,
which will print a bignum in hexadecimal. We also add an nn_printx_short
function which prints the first couple of words of a bignum, an ellipsis and
then the final couple of words. This is useful for large bignums that would
print for screens and screens due to their size. We'll use this in our test
code to prevent printing too much output upon test failure.

Another function we add is an nn_printx_diff function. It accepts two nn_t's,
prints information about the range of words where they differ, and prints
the first and last differing words.

There is one tricky aspect to our printing functions however. A word is often
an unsigned long, but on some platforms it will be an unsigned long long. For
this reason, when printing a word, we need to use %lx as the format specifier
on some platforms and %llx on others.

So we need to add a routine which will print a word and abstract away the
format specifier so the caller doesn't have to think about it. The function
we include to do this is called printx_word. It prints a word without needing
to specify a format specifier.

We add files helper.c/h to bsdnt which will contain routines like this one
which aren't specific to our nn module. A few existing functions and macros
also get moved there. The configure system will automatically look for
architecture specific versions of helper.c, allowing us to override the
definition of the functions in that file on a per architecture basis.

We add the printx_word function to helper.c which can be overridden with
an architecture specific version. On a platform where %llx is required, an
architecture specific version will simply replace the generic version which
uses %lx.

In test.h we add some macros, print_debug and print_debug_diff which use the
stringizing operator to print the names of the variables and then print their
values. The stringizing operator (#) is a preprocessor macro which turns a
macro parameter into a string. In our case, we pass the variable name to the
macro and turn it into a string so that we can print the variable name.

A few modifications to the TEST_START and TEST_END macros in test.h also
allow us to give a unique name to each test which is then printed along with
the iteration at which the test failed. This also uses the stringizing
operator so that the caller of TEST_START can specify the unique name for the
test. It seems difficult to come up with an automatic way of generating
unique test names, so this will have to do.

It would also be a useful thing to have it print the value of the random seed
at the start of a failing iteration too. After we have improved the random
generation code in bsdnt v0.21, perhaps someone would like to try adding this
feature.

We roll out our new diagnostic printing routines to our test code. Of course,
to see any of this new code in action, one has to introduce a bug in one of
the tests so that the new diagnostic code is actually run. I leave it to you
to fiddle around introducing bugs to see that the new test code does actually
print useful diagnostic information.

Later on we'll add a bsdnt_printf function which will be variadic and accept
a format specifier like the C printf function and which will have a
consistent %w for printing a word. This will also make things easier on
Windows, where currently the format specifier will be wrong in many places.
We'll fix this problem in a later update.

The code for today's post is here: v0.18

Previous article: v0.17 - div_hensel
Next article: v0.19 - asserts

Friday, 15 October 2010

BSDNT - v0.17 div_hensel

Now that we have nn_mullow_classical, we can add nn_div_hensel. As explained,
this will take an integer a of length n and divide by an integer d of length
m modulo B^n, returning a quotient q and an "overflow".

The overflow will be two words which agree with the overflow from mullow(q*d).

As per the euclidean division, the dividend a will be destroyed.

The algorithm is somewhat simpler than the euclidean algorithm. If d1 is the
least significant word of d then we use an inverse mod B of d1 (dinv say) and
multiply it by the current word of the dividend being considered (working from
right to left) to get a quotient word q1.

We then subtract q1*d (appropriately shifted) from the dividend. There is no
adjustment to do as the inverse mod B is unique (so long as d is odd).

Any borrows and overflows from the subtractions are accumulated in the two
overflow words and returned.

In our test code, we check a few things. Firstly, for an exact division, we
want that the quotient is really the exact quotient of a by d. As the
quotient returned is m words, which may be larger than the actual quotient,
we check that any additional words of q are zero. We do this by normalising q.

The second test we do is for an inexact division. We check that the
overflow words turn out to be the same as the overflow from mullow(q*d).

Note that if we wish to make Hensel division efficient for doing an exact
division, say of a 2m - 1 by m division, we merely pass m words of the
dividend in instead of all 2m - 1 words, so that the quotient is also m words.

Once again we don't allow an overflow-in to Hensel division. This wouldn't
give us any kind of chaining property anyway.

Instead, we'd have to do a mulhigh(q, d) and subtract that from the high
part of the chain before continuing, and the mulhigh will accept our overflow
words from the low Hensel division.

In fact, we add a chaining test that does precisely this. We do a Hensel
division on the low n words of a chain, subtract a mulhigh from the high
m words of the chain, then compute the high m words of the quotient using
a second Hensel division.

To make our test code work, we add an ODD flag to randoms_of_len so that only
odd divisors are used with Hensel division.

It isn't clear if it makes sense to allow a carry-in at the top of div_hensel
or not. On the one hand, it might seem logical to allow a carry-in on account
of the way mul_classical works. On the other hand, divrem_hensel1 took a
carry-in, but at the bottom. This was for chaining rather than a read-in of
an extra word. We choose the latter convention, as it seems to make more
sense here and stops the code from becoming horribly complex.

The code for today's post is here: v0.17

Previous article: v0.16 - mullow_classical

Thursday, 14 October 2010

BSDNT - v0.16 mullow_classical, mulhigh_classical

The most logical routine to implement next would be Hensel division. Its
main application is in doing exact division.

For that reason, we might want to focus on Hensel division without
remainder first.

This would take an m word integer and divide it by an n word integer,
division proceeding from right to left. Essentially it gives us division
modulo B^m.

However, before we implement this, it makes sense to think about what its
"inverse" operation might be.

If q is the Hensel quotient of a by d mod B^m, then the inverse operation
can be thought of as multiplication of q by d mod B^m.

Hensel division gives an m word quotient q, so this would imply that its
inverse should be a multiplication of {d, n} by {q, m} mod B^m. We might call
this inverse operation mullow, as it returns the low m words of the product
d*q.

However, we need to be careful with this kind of multiplication. We'd also
like to have a mulhigh which returns the high part of the multiplication,
and we'd like the sum of mullow and mulhigh to be the same as a full mul.

However, there is a problem if mullow merely returns the product mod B^m. Any
carries out of mullow will have been lost. Also, all the word by word
multiplications that contribute to the high word of the product mod B^m
will be thrown away.

To rectify the problem we accumulate an "overflow" out of the mullow
corresponding to the sum of all these high words and carries. As this
overflow is essentially the sum of arbitrary words it may take up two words.

Thus, instead of mullow yielding m words it will yield m + 2 words. We'd
like to pass the extra two words as an "overflow-in" to mulhigh, thus the
logical thing is to return these two words from mullow separately from the
product mod B^m itself.

Hensel division will also return two overflow words. After all, what it
essentially does to the dividend is subtract a mullow of the quotient by the
divisor. So, the overflow from Hensel division will be defined as precisely
the overflow from mullow(q*d).

We manage the overflow by accumulating it in a dword_t. However, as we don't
wish the user to have to deal with dword_t's (these are used in our internal
implementations only), we split this dword_t into two separate words at the
end and return them as an array of two words representing the "overflow".

Today we shall only implement mullow and mulhigh. The first of these is a lot
like a full multiplication except that the addmul1's become shorter as the
algorithm proceeds and the carry-out's have to be accumulated in two words, as
explained.

At the same time we implement mulhigh. This takes two "overflow-in" words and
computes the rest of the product, again in a fashion similar to a full
multiplication.

Our test code simply stitches a mullow and mulhigh together to see that the
chain is the same as a full multiplication.

We have to be careful in that if one does an n by n mullow, the mulhigh that
we wish to chain with this must start with an n-1 by 1 multiplication,
not an n by 1, otherwise the sum of the mullow and mulhigh would contain the
cross product of certain terms twice.

We also have to be careful in the case where the full multiplication is only
a single word by a single word. Here the overflow out of the mullow part is
only a single word and there is no mulhigh to speak of. It merely passes
the overflow from the mullow straight through.

Both mullow and mulhigh must accept all nonzero lengths, as per full
multiplication. This causes a few cases to deal with in mulhigh. This doesn't
seem particularly efficient or elegant, but there seems to be little we
can do about that.

An interesting question is what the inverse of mulhigh is. Essentially,
this is our euclidean divapprox.

There's something slightly unsatisfying here though. Recall that the divapprox
algorithm proceeds by subtracting values q1*d' where q1 is the current quotient
word and d' is what is currently left of the divisor. We throw away one word of
the divisor at each iteration until finally we are left with just two words.

It would be wholly more satisfying if we didn't require this extra word of the
divisor throughout. We'd then be working right down to a single word in the
divisor so that we will really have subtracted a mulhigh by the time the
algorithm completes.

Any modification I can think of making to the euclidean division to make this
seem more natural also makes it much less efficient.

Perhaps some further thought will lead to a more satisfying way of thinking about
these things, which isn't also less efficient in practice.

The code for today is here: v0.16

Previous article: v0.15 - divapprox_classical
Next article: v0.17 - div_hensel

Wednesday, 13 October 2010

BSDNT - v0.15 divapprox_classical

During the past few weeks, Brian Gladman has been doing
some tremendous updates, including some random number generators and
making bsdnt work on Windows (MSVC).

We discuss all matters related to bsdnt on our development list:


Anyhow, now for today's update.

We'd now like to implement a variant of our divrem_classical algorithm. This
time we'd like to just return a quotient, with no remainder. The question is,
can this be done in less time than a full divrem?

At first sight, the answer seems to be no. As we saw in the post on the divrem
algorithm, every word of both the dividend and divisor counts, and we need
to keep the dividend completely updated as the algorithm proceeds, otherwise
we will get the wrong quotient.

So, with the exception of perhaps the final update (which is only needed to
determine the remainder), there doesn't seem to be much we can save.

But what if we allowed the quotient to be approximate, say within 1 of the
actual quotient? In fact, let's demand that if q' is our approximate
quotient, then |a - d*q'| < d. In other words, we allow the quotient to be
too large by 1, but not too small by 1.

Ideally, what we would like is to be doing about half the updating work.
Specifically, we'd like to be truncating both the dividend and divisor as we
go.

Ordinarily we start with something like

AAAAAAAAAAAAAAA /
DDDDDDDD

To get the first quotient word we shift the divisor and pad with zeroes, thus

AAAAAAAAAAAAAAA /
DDDDDDDD000000

After one iteration, a word of the dividend has been removed, and we then
shift the divisor again.

AAAAAAAAAAAAAA /
DDDDDDDD00000

We continue until we have

AAAAAAAAA /
DDDDDDDD

Now there is only one quotient word left. But we notice that we don't use
most of the remaining dividend or the divisor to determine this quotient
word. In fact, we could almost truncate to

AA /
D

What if we had truncated at this same point all along?

In fact, if we truncate so that the final division is a two word by one
word division (here we have to be careful, in that we are talking about the
number of words *after* normalisation), then clearly our quotient could be
out by as much as two on that final division, by what we have said in an
earlier post. That is of course ignoring any accumulated error along the way.

As we don't wish to multiply the entire thing out to see what we have to do
to correct it, it is clear that this amount of truncation is too great.

So let's truncate one further word to the right in both the dividend and
divisor, so that the final division (to get the final quotient word) is a
three word by two word division.

In fact, in the example above, as there will be five quotient words, there
will be five iterations of the algorithm, after which we want two words of
the divisor remaining. So, we will start with

AAAAAAA.A /
DDDDDD.D

(The decimal points I have inserted are arbitrary, and only for notational
purposes in what follows.)

After five iterations, throwing away one more word of the divisor each time,
we'll be left with

AA.A /
D.D

The first thing to notice is that our previous divrem algorithms, with the
adjustments they made as they went, gave the precise quotient given the
data they started with.

The second thing to notice is that truncating both the dividend and the
divisor at the same point, as above, will not yield a quotient that is too
small. In fact, the quotient we end up with will be the same as what we would
have obtained if we had not truncated the dividend at all, and only truncated
the divisor. Additional places in the dividend can't affect the algorithm.

Truncating the divisor, on the other hand, may result in a different quotient
than we would have obtained without truncation. In fact, as we end up
subtracting less at each update than we would if all those words were still
there, we may end up with a quotient which is too large. The divisor may also
divide more times, because of the truncation, than it would have if it had not
been truncated.

However, it is not enough to merely consider how the quotient changes with
truncation in order to see how far we can be out. We'll likely end up with a
very pessimistic estimate if we do this, because we may suppose that the
quotient can be one too large at each iteration, which is not true.

Instead, the quantity to keep track of is the original dividend minus the
product of the full divisor d and the computed quotient q. At the end of the
algorithm, this is the actual remainder we'll end up with, and we'd like to
keep track of how much our *computed* remainder (what's left of the dividend
after the algorithm completes) differs from this actual remainder.

Essentially, we accumulate an error in the computed remainder due to the
truncation.

Clearly, at each iteration, the word after the decimal point in what remains
of the dividend may be completely incorrect. And we may miss a borrow out of
this place into the place before the decimal point. So after n iterations of
the algorithm, the dividend may become too large by n. Of course n.0 is much
smaller than our original (normalised) divisor d (also considered as a decimal
D.DD...).

At the final step of the algorithm, we will have a dividend which is too large
by at most this amount, and we'll be using a divisor which is truncated to
just two words. However, the latter affects the computed remainder by an
amount much less than the original d (if Q is the final quotient word, it is
as though we added Q*0.0DDDDDD to our divisor, so that the full divisor would
go Q times where it otherwise would only go Q-1 times).

So these two sources of error only increase the computed value q'*d + r (where
q' is the computed quotient and r is the computed remainder) by an amount less
than d. Thus, the computed quotient q' can be at most one larger than the
actual quotient q.

This is equivalent to the required |a - d*q'| < d.

So it seems that if we truncate our dividend at the start of the algorithm,
and our divisor after each iteration, we can get an approximate quotient q'
within the required bounds.

We'll leave it to another time to describe the usefulness of an algorithm
which computes a quotient which may be out by one. What we will note is that
we've done computations with much smaller integers. It therefore costs us
significantly less time than a full divrem.

In this week's branch we implement this algorithm. In the test code, we check
that the computed quotient is the same as the one returned by divrem, or at
most one too large.

The trickiest part is ensuring we truncate at the right point. We want to
finish on the last iteration with two words *after* normalisation of the
divisor.

Actually, if we are really smart, we realise that if d does not need
to be shifted by much to normalise it, we can get away with finishing with
just two *unnormalised* words in the divisor. The error will still be much
less than d.

To be safe, if the number of quotient words is to be n, I check if the
leading word of the unnormalised divisor is more than 2*n. If not, too much
normalisation may be required, and I set up the algorithm to finish with
three unnormalised words instead of two. Otherwise it is safe to finish
with two words in the unnormalised divisor.

The algorithm begins by computing the number of words s the divisor needs to
be to start. This is two more than the number of iterations required to
get all but the final quotient word, since we should have two words in the
divisor at this point. If normalisation is going to be a problem, we add one
to this so that we compute the final quotient word with three unnormalised
words in the divisor.

Now the number of words required to start depends only on the size of the
quotient, and thus it may be more than the number of words d actually has.
Thus we begin with the ordinary divrem algorithm until the number of words
required is less than the number of words d actually has.

Now we truncate d to the required number of words and the dividend to one
more than that. The remaining part of the algorithm proceeds in the same
manner, throwing away a divisor word every iteration.

Only one thing can go wrong, and that is the following: because we are
truncating the divisor at each point, we may end up subtracting too little from
the dividend. In fact, what can happen is that the top word of the dividend
does not become zero after we subtract q*d' (where d' is the truncated divisor).

When this happens, the top word of the dividend may be 1 after the subtraction.

We know that the least significant word of the dividend could be completely
wrong, and the next word may be too large by about as many iterations as
we've completed so far.

Thus, in order to fix our overflow, we subtract from the dividend as much as we
need to in order for the overflow to disappear. We don't mind our dividend
being too large, as we adjust for that in the algorithm. But we cannot allow it to
become too small. Thus we must only subtract from the dividend precisely the
amount required to make the overflow vanish.

We can safely subtract away whatever is in the bottom two words of the
dividend, as this is not even enough to remove the overflow. And then we can
subtract 1 from the whole dividend. This must remove the overflow and is
clearly the least we can get away with subtracting to do so.

The code for today's post is here: v0.15

Previous article: v0.14 - divrem_classical

Wednesday, 6 October 2010

BSDNT - v0.14 divrem_classical

It's time to implement our schoolboy division routine. I prefer the name
divrem_classical, in line with the multiplication routines.

This function will take a dividend a, divisor d, carry-in ci and returns
a quotient q and remainder r. We'll also need to pass in a precomputed
inverse to speed up the dword_t by word_t divisions that need to occur.

We are going to implement the first algorithm spoken of in the previous
post, namely the one which uses a single "normalised" word of the divisor.

We need to be careful not to just shift the top word of the divisor so
that it is normalised, but if it has more than one word, shift any high
bits of the second word as well. We want the top WORD_BITS bits of the
divisor, starting with the first nonzero bit.

We can use our macro for doing a dword_t by word_t division to get each
new word of the quotient. We start with the carry-in and the most
significant word of a. The macro will automatically shift these by the
same amount as we shifted the leading words of the divisor.

As per the divrem1 function, we require that the carry-in be such that
the quotient won't overflow. In other words, we assume that if the
divisor d is m words, then the top m words of the dividend including the
carry-in, are reduced mod d.

First we need to check that we are not in the special case where truncating
the dividend and divisor to two and one words respectively would cause
an overflow of the quotient word to be computed. This only happens
when the top word of the dividend equals the top word of the divisor, as
explained in the previous post.

If the truncation would cause an overflow in the quotient, we collect a
quotient word of ~0, as discussed in the previous post. If not, we compute
the quotient using our macro.

After this point, the remainder is computed. We allow the function to
destroy the input a for this purpose. We leave it up to the caller to make
a copy of a and pass it to the function, if this is not desired
behaviour.

We do our adjustment so that the quotient is correct. We need a while loop
for this, as mentioned in the previous article.

Finally we write out the quotient word and read in the next word of the
dividend that remains.

In the test code we use our mul_classical and muladd_classical functions to
check that divrem_classical is indeed the inverse of these functions, with
zero remainder and nonzero remainder respectively.

The code for today's post is here: v0.14

Previous article: divrem discussion

Saturday, 2 October 2010

BSDNT - divrem discussion

Today's post is a discussion only, without accompanying code. This is because
the topic is divrem using the "school boy" algorithm, and this is not as
straightforward as one might imagine. The discussion below is informal, and
may contain errors. Please let me know if so and I can make some adjustments
before releasing the code for the next article, where we implement this.

As a consolation, I have released a failed attempt at using C89 macros to make
even more simplifications to our test code. In particular, I tried to make the
chain tests more "automatic". Unfortunately, this didn't work out. It ended up
making the test code too complicated to use and it was going to be way too much
work to convert all of it over to the new test system.

This test code "simplification" was completely abandoned and will not be merged.
But if you are curious you can see it here: gentest

In particular, take a look at generic.c/h and the test code in t-nn.c.

Anyhow, back to today's discussion:

Firstly, what exactly do we mean by the school boy division algorithm?

Certainly I learned how to do 39 divided by 7 at school, and even 339 divided
by 7. But how about 10615 divided by 1769?

The first step is to divide 1769 into 1061. It doesn't go, so we grab another
digit. So it becomes 1769 into 10615. I have no idea how many times it goes!

Either I missed something at school when I studied "long division", or I only
think I learned an algorithm suitable for problems like this.

In fact, the only way I know to do this division is trial and error, i.e.
to go through all the likely quotient candidates from one to nine.

Using a calculator I find it goes 6 times.

I think I intuitively know how to proceed from here. We place the digit 6
into our quotient, multiply 1769 by 6 and subtract to form a remainder.

But now we have a problem. What if we are dividing 17699949 into 106150000?
Does it still go 6 times? In fact it does not. But how would I know this
without being able to divide 17699949 into 106150000 in the first place!

So I have two problems. Firstly, if I simply truncate the numerator and
denominator to a certain number of digits, even the first digit of my
quotient may be wrong, and secondly, if I use more digits in the first place
my intuitive "algorithm" doesn't help. It basically says, guess and refine.
That's all very well, but when my digits are 64 bit words, I may not be so
lucky with the guessing thing.

Now an interesting thing to note in the example above is that no matter how
many digits we add to the dividend, 1769 will always go into the first 5
digits 10615 of the dividend, 6 times (with some remainder), no more, no
less.

To see this, we note that 6*1769 is less than 10615 and 7*1769 is greater.
So the only way we could get 1769 to go in 7 times would be to increase
those first five digits of the dividend. Any following digits are irrelevant
to that first digit, 6, of the quotient.

This is a general feature of integer division, and also applies for the
word based (base B) integer division problem.

However, adding digits to the divisor is not the same, as the example above
shows. In fact 17699949 only goes 5 times into 106150000.

Now the question is, how far off can it be if I truncate the dividend and
divisor? How bad can it get?

We can answer this as follows. Note that 17699949 < 17700000. So 10615xxxx
divided by 1769yyyy is greater than or equal to 10615 divided by 1770, which
happens to be 5. So the answer has to be either 5 or 6 times.

What I have done here is simply add 1 to the first four digits of the
divisor, to get a lower bound on the first digit of the quotient. In this
way, I can get a small range of possible values for the first digit of the
quotient. Then I can simply search through the possibilities to find the
correct quotient digit. This matches my intuition of the long division
algorithm at least.

It seems that *perhaps* a quotient digit obtained in this way will be at
most out by 1. In other words, roughly speaking, if X is the first few digits
of the dividend and Y the appropriate number of digits of the divisor (in the
example above, X = 10615 and Y = 1769), then perhaps X / Y >= X / (Y + 1)
>= (X / Y) - 1, where by division here, I mean integer division. Let's call
this condition R.

Let's suppose now that we are working on integers written as binary words
instead of decimal digits. Let's also suppose that X / Y < B. Let's call
this condition S.

Well, it is easy to generate a counterexample to condition R. Let Y = 2,
X = B. Clearly R is not satisfied, even though S is.

But this seems to only be a problem because Y is so small. So, what if we
constrain Y to be at least B/2?

It turns out that condition R still doesn't hold. A counterexample is
Y = B/2, X = (B - 1)*Y.

However, I claim that X / (Y + 1) >= (X / Y) - 2 does hold, so long as
condition S holds. Let us call this inequality T.

Clearly there does not exist a counterexample to T with X <= Y.

Let us suppose that for a certain Y >= B/2, there is a counterexample to T.
Clearly if X0 is the first such counterexample, then X0 is a multiple of Y,
but not of Y + 1. Nor is X0 - 1 a multiple of Y + 1 (else X0 - 2 would have
been a counterexample).

It is also clear that every multiple of Y from X0 on is a counterexample, as
the left hand side can never "catch up" again.

Thus we have the following result: if there exists a counterexample to T for
some Y >= B/2 and X < BY, then X = (B - 1)*Y is a counterexample.

Substituting this value of X into T yields that T holds if and only if
(B - 1)*Y / (Y + 1) >= B - 3 for all Y >= B/2. Thus, if we can show that this
last inequality holds for all Y >= B/2 then we have proven T.

Note that this inequality is equivalent to (B - 1)*Y >= (Y + 1)*(B - 3), i.e.
2*Y >= B - 3. However, as Y >= B/2, this holds under our hypotheses.

So putting all of the above together, suppose we are dividing A by D and that
Y >= B/2 is the leading word of the divisor D and B*Y > X >= Y is the first
word or two words of the numerator A (whichever is appropriate). Then if Q0
is the leading word of the quotient Q = A / D, then we have shown that
X / Y >= Q0 >= (X / Y) - 2.

In other words, X / Y can be no more than 2 away from the first word of the
quotient Q = A / D that we are after.

This leads us immediately to an algorithm for computing a quotient of two
multiprecision integers.

(i) Let Y be the first WORD_BITS bits of the divisor D, so that
B > Y >= B/2.

(ii) Let X be the first word or two words (appropriately shifted as per Y) of
A such that B*Y > X >= Y.

(iii) Let Q0 = X/Y.

(iv) Set R = A - D*Q0 * B^m (for the appropriate m), to get a "remainder".
While R < 0, subtract 1 from Q0 and add D * B^m to R.

(v) Write down Q0 as the first word of the quotient A / D and continue on by
replacing A by R and returning to step (i) to compute the next word of the
quotient, etc.

Another algorithm can be derived as follows. Instead of using Y >= B/2 in the
above algorithm, let's choose Y >= B^2/2. A similar argument to the above
shows that X / (Y + 1) >= (X / Y) - 1 for Y >= B^2/2 and B*Y > X >= Y. It boils
down to showing that (B - 1)*Y / (Y + 1) >= B - 2 for Y >= B^2/2, i.e. that
Y >= B - 2, which is clearly satisfied under our hypotheses.

The algorithm is precisely the same, but at step (iv) we can replace the while
statement with an if statement and perform at most one adjustment to our
quotient Q0 and to R.

Now we return to step (v), in which we said that we could just continue
on from step (i) to compute the next word of the quotient Q. If we do this
and set A = R then compute the new X, what we would like is something like
the divrem1 algorithm we implemented, where, (after possibly some kind of
initial iteration that we handled specially), it is always true that the new
X is two words and has its first word reduced mod Y.

However, this does not follow from 0 <= R < D*B^m, and it may be that the
first word of the remainder is equal to Y! This is due to the truncation of
D to get Y.

It is clear from 0 <= R < D*B^m that if X is the first two words of the
remainder that X <= B*Y. So to make the algorithm continue in the way we'd
like, we only have to deal with the special case where X = B*Y.

We know that in the first algorithm above, the next word of the quotient may
be B - 1 or B - 2, since we know already that it is not B. We must multiply
out and check which it is.

In the second algorithm above, where Y >= B^2/2, the only possibility for
the next quotient word is B - 1, as we know it is not B.

In the next article we will look at implementing the first of the two
algorithms above, leveraging the code we already have for computing and using
a precomputed inverse.

Previous article: v0.13 - muladd1, muladd

Friday, 1 October 2010

BSDNT - v0.13 muladd1, muladd

We noted in the last article that multiplication need not take a carry-in
and that it doesn't seem related to the linear functions we have been
implementing.

Instead, we think of multiplication in a different way, as an inverse of
division. We'll soon implement division with remainder, i.e. given a and
d, find q and r such that a = d*q + r, where 0 <= r < d.
If d and q are of lengths m and n respectively, then r is of length at most
m and a is either of length m + n or m + n - 1.

Multiplication corresponds to the case where r = 0.

As we will certainly not wish to have to zero pad our division out to
m + n limbs if a is in fact only m + n - 1 limbs, it makes sense that our
division will take a carry-in. For this reason, it made sense for our
multiplication to yield a carry-out, i.e. it will write m + n - 1 limbs
and return an (m + n)-th limb, which may or may not be 0.

When r is not zero in our division, the inverse operation would take a
value r of n limbs and add d*q to it, returning the result as a value a of
m + n - 1 limbs (and a carry which may or may not be zero). We call this
routine muladd. In the case where r and a happen to be aliased, the result
will be written out, overwriting a.

We first implement nn_muladd1 which is identical to addmul1 except that
muladd1 writes its result out to a location not necessarily coincident with
any of the inputs. In other words, nn_addmul1 is an in-place operation
whereas nn_muladd1 is not.

Next we implement nn_muladd_classical. It takes an argument a of length m
to which it adds the product of b of length m and c of length n. The result
may or may not alias with a.

We also implement a version of the linear and quadratic muladds which writes
out the carry, naming the functions with the nn_s_ prefix, as per the
convention initiated in our nn_linear module.

As for multiplication, we don't allow zero lengths. This dramatically
simplifies the code.

An important application of the muladd function will be in chaining full
multiplication. We'll discuss this when we get to dealing with multiplication
routines with better than quadratic complexity.

Our test code simply checks that muladd is the same as a multiplication
and an addition.

The github repo for this post is here: v0.13

Previous article: configure and assembly
Next article: divrem discussion