This little essay is an exercise in mathematical recreations; I hope you find it amusing!
Most people use base 10 for their number system. Computer people often find base 2, 8, or 16 convenient. But surely we're missing out... why not try some really bizarre bases?!
First, a quick definition: in base B, each digit is multiplied by a power of B - moving one position to the left increases that power by one, while each position to the right of the radix point decreases it by one. With that in mind, let's look at a few bases:
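The definition above can be sketched in a few lines of Python (the function name and digit-list representation are my own, purely illustrative):

```python
# A minimal sketch of positional notation: evaluate a numeral in base B.
# digits_int are the digits left of the point, most significant first;
# digits_frac are the digits right of the point.
def positional_value(digits_int, digits_frac, base):
    value = 0
    for d in digits_int:
        value = value * base + d       # each step left multiplies by one more power of B
    for i, d in enumerate(digits_frac, start=1):
        value += d * base ** (-i)      # positions right of the point use negative powers
    return value

print(positional_value([1, 0, 1], [1], 2))   # binary 101.1 -> 5.5
print(positional_value([4, 2], [], 10))      # decimal 42 -> 42
```

Nothing here assumes the base is a positive integer - the same loop will happily accept the stranger bases discussed below.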
You could even cheat further by using two symbols ("0" and "1", standing for zero and one), but this kind of cheating wouldn't help. Since one raised to any power is still only one, adding zeros as placeholders accomplishes nothing. For example, "10", "100", and "1000" would all have the same value (one).
xkcd has a cartoon about the marvelous powers of 1, which will only make sense if you've seen Charles and Ray Eames' "Powers of Ten" documentary (spoofed by The Simpsons).
Do things get better if the lone symbol represents i? To keep things clear, let's use the symbol "i"... and it turns out that this doesn't help either:
One approach to solving this problem is to use the other cheating approach we mentioned in the discussion about base 1: permit two symbols ("0" and "1", with their traditional meanings). This helps quite a bit; now we can count one, two, three as "1", "10001", "100010001" (using the system where "1" means one). Now at least we can represent all whole numbers - though it's not pleasant. One odd thing about this approach is that there are now many ways to represent a number - "10001" and "100000001" both represent the value two. You can even represent a few complex numbers quite easily - traditional "2+i" becomes "10011".
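Python's built-in complex numbers make it easy to check these base-i claims; here is a small sketch (the function name is mine, purely illustrative), where position k from the right contributes digit times i to the k-th power:

```python
# Evaluate a base-i numeral with digits 0 and 1, where position k
# (counting from the right, starting at 0) contributes digit * i**k.
def base_i_value(numeral):
    return sum(int(d) * (1j ** k)
               for k, d in enumerate(reversed(numeral)))

print(base_i_value("10001"))      # i^4 + i^0 = 2
print(base_i_value("100010001"))  # i^8 + i^4 + i^0 = 3
print(base_i_value("100000001"))  # i^8 + i^0 = 2 again - representations aren't unique
print(base_i_value("10011"))      # i^4 + i^1 + i^0 = 2 + i
```

Since the powers of i cycle through i, -1, -i, 1, only every fourth position contributes a real +1, which is why counting is so verbose in this base.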
Another solution, which avoids this kind of cheating, is to use a complex base with a larger absolute value. We can, for example, choose 10i as a base. Doing this has truly baroque impacts that are hard to characterize, and you can do even more interesting things by writing "complex" numbers and adding them to "complex" numbers multiplied by i. Multiplying such numbers is particularly bizarre. You could use symbols such as "1" and "2" to represent their traditional values, or use them to represent 1i, 2i, and so on; either way the results are bizarre.
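To make the baroqueness a little more concrete, here is a quick Python sketch (function name mine, purely illustrative) evaluating digit strings in base 10i, assuming each digit symbol keeps its traditional value - the first of the two options just mentioned:

```python
# Evaluate a numeral in base 10i, with digits "0" through "9"
# keeping their traditional values.
def base_10i_value(numeral):
    base = 10j
    value = 0
    for d in numeral:
        value = value * base + int(d)
    return value

print(base_10i_value("21"))   # 2*(10i) + 1   = 1 + 20i
print(base_10i_value("305"))  # 3*(10i)^2 + 5 = -300 + 5 = -295
```

Notice how even short numerals bounce between the real and imaginary axes, and some purely real values come out negative - a taste of why arithmetic in this base is hard to characterize.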
You can have fractional bases, but those are actually studied in mathematics. High school student Billy Dorminy has even developed an encryption algorithm using fractional bases, in his science project titled "Improper Fractional Base Encryption". Thus, fractional bases are too useful to be considered further in this paper :-).
Donald Knuth's "The Art of Computer Programming", volume 2, contains Chapter 4, "Arithmetic" (Section 4.1 covers positional number systems); it has more information than perhaps you wanted to know about implementing arithmetic on computers. I've been told that "Number: From Ahmes to Cantor" by Midhat Gazale, ISBN 0-691-00515-X, discusses positional number systems in great detail in Chapter 2. Everything Gray Code (in gzipped Postscript format) discusses Gray codes, a different way to use binary digits to represent numbers.
Henry S. Warren, Jr.'s "Hacker's Delight" chapter 12 discusses some unusual bases, including base -2 (with digits 0 or 1), bases -1+i and -1-i (again, digits 0 or 1), and hints at a few others such as base 2i with digits 0, 1, 2, and 3.
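Base -2 ("negabinary") is simple enough to demonstrate; a sketch of one standard conversion method in Python (function name mine, purely illustrative) - the neat property is that both positive and negative integers come out with only digits 0 and 1, no sign needed:

```python
# Convert an integer to base -2 using only digits 0 and 1.
def to_negabinary(n):
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:          # force the remainder into {0, 1}
            r += 2
            n += 1
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_negabinary(6))    # -> "11010": (-2)^4 + (-2)^3 + (-2)^1 = 16 - 8 - 2 = 6
print(to_negabinary(-3))   # -> "1101": -8 + 4 + 1 = -3
```

The same repeated-division idea works for other negative integer bases; the complex bases Warren mentions need a bit more machinery.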
In India, I am told, the Sora language has a varying base: the units are base 12, but the next higher place is base 20. I've since received an email from Richard Engelbrecht-Wiggans who casts some doubt on this; it is suspiciously similar to a pile of pence coins under the old British money system (12 pence to a shilling, 20 shillings to a pound), and perhaps the natives were playing with the linguists. Further independent investigation on this point would be great.
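Whatever the truth about Sora, this kind of varying ("mixed radix") base is easy to demonstrate with the old British money system itself; a quick Python sketch (function name mine, purely illustrative):

```python
# A mixed-radix base: the units place is base 12 (pence per shilling),
# the next place is base 20 (shillings per pound).
def to_pounds_shillings_pence(total_pence):
    shillings, pence = divmod(total_pence, 12)
    pounds, shillings = divmod(shillings, 20)
    return pounds, shillings, pence

print(to_pounds_shillings_pence(1000))  # -> (4, 3, 4): 4 pounds, 3 shillings, 4 pence
```

Time works the same way - 60 seconds to a minute, 60 minutes to an hour, 24 hours to a day - so we all use mixed-radix bases daily without noticing.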
Slightly different - and way more useful - is the radix 2^51 trick, a way to speed up computation on large integers on modern computers. A modern 64-bit computer can easily add two 64-bit numbers. If you wanted to add two 256-bit numbers, you could divide each into four 64-bit pieces, but propagating the carry between the pieces is slow, because it forces the additions to happen in sequence. By instead dividing each number into five 51-bit pieces and handling the carries separately, the overall addition can take less time, because the pieces can be added in parallel. I don't know if this counts as a "different base" - but it's worth mentioning.
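The idea can be simulated in Python, where arbitrary-precision integers let us check the result (the function names are mine, purely illustrative; real implementations do this with 64-bit registers, where the 13 spare bits per limb absorb the deferred carries):

```python
# Sketch of the radix-2^51 idea: split a big integer into 51-bit limbs,
# add limb-by-limb with NO carry chain, and resolve carries at the end.
MASK = (1 << 51) - 1

def to_limbs(n, count=5):
    # limb i holds bits [51*i, 51*i + 50] of n
    return [(n >> (51 * i)) & MASK for i in range(count)]

def add_limbs(a, b):
    # independent per-limb adds - on real hardware these can run in parallel
    return [x + y for x, y in zip(a, b)]

def from_limbs(limbs):
    # shifting and summing here resolves any limbs that overflowed 51 bits
    total = 0
    for i, limb in enumerate(limbs):
        total += limb << (51 * i)
    return total

a, b = 2**200 + 12345, 2**199 + 67890
assert from_limbs(add_limbs(to_limbs(a), to_limbs(b))) == a + b
```

The key point is visible in `add_limbs`: no limb's result feeds into the next, so there is no sequential carry chain until the final (occasional) normalization step.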
In short, there's a reason you never saw these before! Hopefully, you found a little fun in this romp through useless bases.
If you enjoyed this article, you might enjoy my article on the Four fours problem, or my home page at dwheeler.com.
David A. Wheeler, 2000-09-22; revised 2012-09-08