
designing a suckless bignum library

Name: Anonymous 2015-11-16 22:11

Let's design a suckless bignum library. (I'm not part of suckless though, just curious about replacing GMP).

I researched a bit into algorithms and the rundown is this:
* long multiplication: O(n^2)
* Karatsuba: O(n^log2(3)) ≈ O(n^1.585)
* Toom-Cook and Fourier-transform-based methods (e.g. Schönhage-Strassen): even faster, but only worth it for numbers on the order of 10k+ digits. Much more complex.

So we should probably use Karatsuba for all multiplications above some small cutoff (it loses to long multiplication on very short inputs). Squaring can sometimes be done a bit faster than multiplying two different numbers.

Now I suggest programming it in assembly, since that gives you direct access to the carry flag (portable C doesn't). Of course we will use libc and the normal C calling conventions so that it's a regular C library.

What to do about memory management? E.g. if you want to add two numbers, do we allocate a new 'number' as long as the largest operand (plus one limb for the carry) to write the result into, or do it destructively, "x <- x + y"? Maybe the library should support both - then a calculator program could figure out the best primitives to use for a given computation.

It might be nice to also support modular arithmetic (with big moduli) and polynomials. Stuff like modular exponentiation and modular inverses has interesting algorithms.

What other integer operations would we want? I don't really want to do anything with arb. prec. real numbers - arithmetic with rationals could be done though.

Name: Anonymous 2015-11-19 17:31

>>25
>You can add CPU-specific AVX instructions or whatever as a general toolkit for the compiler to use without having to code entire routines in asm. Lisp lets you hone Lisp to your problem, instead of fighting against the C compiler's fixed assumptions.

C has this too. It's called ``inline assembly''. Your inability to see the parallels between these C constructs and LISP makes me want to shake my head. Sure, LISP perhaps does it better, but since ``suckless'' more or less sticks to POSIX standards and the UNIX way, the project, by definition, is best implemented in C.

Bullshit. Type inference carries a long way, and the language passes a higher level of abstraction to the compiler for optimization than C, with a billion times less undefined behavior keeping it cleaner. And if you're talking about high-level human optimization of algorithms and style, that's the same in every single language.

There isn't always a one to one mapping from a form to a sequence of instructions, especially on modern RISC and CISC CPUs with vector instruction sets, out-of-order execution, multi-level instruction and data memory caches, etc.

Finding the optimal sequence of instructions is NP-hard in general, and it becomes very expensive as the length of the sequence grows. Compilers take a lot of short-cuts in their code generators.
