# big**Ψ**

*“On the shoulders of giant numbers”*

http://www.iteror.org/big/book/ch2/ch2_4.html

## Ψ.2. Up to the Ackermann numbers

##### chapter 2.4, *edit 0.2.9*

published: *2011-01-07*

updated: *2011-12-30*

### 2.4. Big number bang

> The tao that can be told is not the eternal Tao.
> The name that can be named is not the eternal Name.
> The unnameable is the origin of Heaven and Earth.
> When it carries a name it is the mother of 10,000 pieces.
>
> – Tao Te Ching 1.

#### §2.4.1. Distance in number space

Numbers we frequently use, such as `1000`, `1024`, `3600`, `5040`, are constructions, barely recognizable as the lengths of the unit strings `1..` we have to reduce them to in order to compare them.

For a mathematician the given numbers already belong to separate areas of application – any operation in between is suspect: subtracting `2^10` from `7!`, for example, does look awkward.

Just like the difference between two numbers `a` and `b` is defined as the number `b-a` of units `1` to add to `a`, we can create a measure of the algorithmic distance between two or more numbers.
A natural notion of distance would be the minimal total size `r` of the signs `si..` needed to create (the larger) number `b`, given (a smaller) number `a` and its size `ra` (or in case of multiple `ai` the sizes `rai`), where we suggest each `rai=1` by default.

For example the distance between `a=1024` and `b=5040`, counting each of the applied characters `{a,*,1}` as `1`, comes to a total of `23` signs for `((5×2)^3+4)×4+a` as written in the format below, evaluated from the top down.
By comparison `5040 = (5^4+5)×(4×2)` has a minimum of `24` algorithmic signs (starting from scratch).

```
11111*11       5×2
**111          ^3
1111           +4
*1111          ×4
a              +a       {@#23  a=1024}

11111**1111    5^4
11111          +5
*1111*11       ×(4×2)   {@#24}
```
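The two constructions above can be checked with a short sketch. Its assumptions follow the text: a number costs its unary length in `1`s, a `*` costs one sign, a `^` (written `**`) costs two, while `+`, brackets, and line breaks are free, and a previously built number such as `a` counts as a single character.

```python
def build(steps, env=None):
    """Evaluate a top-down binary-star construction and count its signs.

    Each step is (op, operand) applied to the running value, with op in
    {'', '+', '*', '^'}. An int operand costs its unary length; the name
    of an earlier number (like 'a') costs 1 sign. '+' and brackets are
    free, '*' costs 1 sign, '^' (written '**') costs 2.
    Returns (value, total signs).
    """
    env = env or {}
    op_cost = {'': 0, '+': 0, '*': 1, '^': 2}
    value = signs = 0
    for op, x in steps:
        v = env[x] if isinstance(x, str) else x
        signs += op_cost[op] + (1 if isinstance(x, str) else v)
        if op in ('', '+'):
            value += v
        elif op == '*':
            value *= v
        else:
            value **= v
    return value, signs

# ((5*2)^3+4)*4+a with a=1024 gives 5040 in 23 signs
print(build([('', 5), ('*', 2), ('^', 3), ('+', 4), ('*', 4), ('+', 'a')],
            {'a': 1024}))                                    # -> (5040, 23)
# 5^4+5, then *4 and *2 (i.e. *(4*2)), from scratch: 5040 in 24 signs
print(build([('', 5), ('^', 4), ('+', 5), ('*', 4), ('*', 2)]))  # -> (5040, 24)
```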

Note that the sign `@` represents any allowed character and that we simply count all chars.

The axiomatic way of constructing all natural numbers `n` by repeatedly adding one, `n{m+1} = n{m}1` (appending a unit), seems alien to the way we deal with quantities in real life.
To measure the algorithmic size of a number it might be a better idea to take its personal history into account.
How did it come to be?

As in astrophysics the history of the space of numbers starts with a loud bang!
That moment `1` was created from the void.
Starting from nothing, the unit `1` is of a totally different order…

The next six numbers are revelations in their own right, each appending a unit `1` to the previous number.
But the instant God created *7* = `1111111`, we believe *8* = `1111*11` and *9* = `111*111` were also formed, while all three numbers require `{@#7}` seven characters minimum.

Continue with *10* = `11111*11`, the smallest number written in `{@#8}` eight chars, and *11* = `111*111` with `11` appended, expressed in `{@#9}` nine.
That is if we agree that brackets (and/or earlier numbers) come in for free, or that number strings may occupy two dimensions (formatted as in the examples above).

We call numbers like `10` and `11` *difficult*, because they are the smallest numbers that require a certain minimum number of `1` and `*` star bits to be formed.
Strictly `7` and the numbers before classify as *difficult numbers* too.

Our binary star character counting system becomes interesting at `16`, which is the first number best expressed in `{@#8}` chars by a *power* (either `4**2` or `2**4`, or tetrating `2***3`).

The power of *27* = `111**111` `{@#8}` is already such that *31* is best formed by `27+4` `{@#12}`, and `27` will continue to be a popular factor in the numbers to come (find programs and data on our old weblog).

Powers and superpowers consist of relatively few chars and lift the construction load off of neighbouring numbers.
In such an *easy number* neighbourhood (star bits *low*) the required number of chars `{@#rn}` slowly merges towards the average expected number `ri../n` `{ri#n}` as we let `n` grow.
*Difficult numbers* on the other hand have a relationship (a star bits *high*) like that of prime numbers.

The table below lists these elusive *difficult number* constructs up to 2 million.

construct | chars | construction
---|---|---
1 | 1 | 1
2 | 2 | 1+1
3 | 3 | 2+1
4 | 4 | 2+2
5 | 5 | 3+2
6 | 6 | 3+3
7 | 7 | 4+3
10 | 8 | 5*2
11 | 9 | 9+2 = 3*3+2
14 | 10 | 7*2
19 | 11 | 16+3 = 4^2+3
22 | 12 | 11*2 = (3*3+2)*2
23 | 13 | 20+3 = 5*4+3
41 | 14 | 40+1 = 5*4*2+1
43 | 15 | 42+1 = 7*6+1
71 | 16 | 64+7 = 4^3+7
92 | 17 | 46*2 = (5*3*3+1)*2
107 | 18 | 104+3 = (5^2+1)*4+3
178 | 19 | 89*2 = (3^4+4*2)*2
179 | 20 | 177+2 = ((3^3+2)*2+1)*3+2
214 | 21 | 107*2 = ((5^2+1)*4+3)*2
428 | 22 | 425+3 = 5^2*(4^2+1)+3
553 | 23 | 79*7 = ((5^2+1)*3+1)*7
623 | 24 | 89*7 = (3^4+4*2)*7
863 | 25 | 860+3 = ((4*3+1)^2+3)*5+3
1435 | 26 | 205*7 = ((4^3+4)*3+1)*7
1438 | 27 | 719*2 = ((4^3+1)*(3*3+2)+4)*2
2867 | 28 | 2866+1 = ((6^2+1)^2+4^3)*2+1
4534 | 29 | 2267*2 = (3^7+4^2*5)*2
5927 | 30 | 5925+2 = ((6^2+1)*2^5+1)*5+2
9161 | 31 | 6561+2600 = 3^2^3+(6^4+4)*2
10553 | 32 | 10541+12 = (5^3+2)*(3^4+2)+4*3
15299 | 33 | 15129+170 = ((3*3+2)^2+2)^2+(4*3+1)^2+1
22943 | 34 | 22939+4 = (((2^5+1)^2+3)*3+1)*7+4
35519 | 35 | 3229*11 = (5^^2+(5^2+1)*4)*(3*3+2)
54359 | 36 | 2861*19 = ((4^^2+4)*(3*3+2)+1)*(4^2+3)
91711 | 37 | 83521+8190 = (4^2+1)^4+(5^3+1)*(4^3+1)
174143 | 38 | 173889+254 = ((3^4+2)*5+2)^2+(5^3+2)*2
218279 | 39 | 218277+2 = ((4^3*3+2)*5^3+3)*3*3+2
349235 | 40 | 69847*5 = (((4^5+3)*(4^2+1)+2)*4+3)*5
587033 | 41 | 531441+55592 = 3^(4*3)+(((7*3)^3+4)*3+1)*2
891647 | 42 | 823543+68104 = 7^^2+2^^4+(2^3^2+1)*5+3
1304153 | 43 | 1183744+120409 = (4^3*(4^2+1))^2+(7^3+4)^2
1940063 | 44 | 2971*653 = (3^7+(3^3+1)^2)*((6^3+1)*3+2)
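The first rows of the table can be reproduced by brute force, under the same sign-counting assumptions as the examples above (numbers in unary `1`s, `*` one sign, `^` two, `+` and brackets free); the higher operators and the 2-million bound of the full table are beyond this small sketch.

```python
def min_signs(limit):
    """cost[n] = minimal number of '1' and '*' signs needed to build n
    from scratch, where '+' and brackets are free, '*' costs 1 sign and
    '^' (written '**') costs 2. Binary combinations are iterated to a
    fixpoint; without subtraction no useful intermediate exceeds the limit."""
    cost = list(range(limit + 1))            # baseline: n in unary
    changed = True
    while changed:
        changed = False
        for a in range(1, limit + 1):
            for b in range(1, limit + 1):
                for val, c in ((a + b, cost[a] + cost[b]),
                               (a * b, cost[a] + cost[b] + 1),
                               (a ** b, cost[a] + cost[b] + 2)):
                    if val <= limit and c < cost[val]:
                        cost[val] = c
                        changed = True
    return cost

cost = min_signs(50)

# the smallest number for each newly reached minimum size is "difficult"
difficult, seen = [], set()
for n in range(1, 51):
    if cost[n] not in seen:
        seen.add(cost[n])
        difficult.append(n)
print(difficult)   # -> [1, 2, 3, 4, 5, 6, 7, 10, 11, 14, 19, 22, 23, 41, 43]
```

The output matches the table up to `43`, and the byproducts agree with the text: `cost[16] == 8`, `cost[27] == 8` and `cost[31] == 12`.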

Kolmogorov complexity restricts the difference between adjacent *difficult numbers* `dr` `{@#r}` and `dr1` `{@#r1}`, so that if `dr*f = dr1` these numbers lie on average a factor `f<2` apart.
This is because difficult numbers in any non-radix algorithmic system essentially represent next bounds of randomness, and radix systems such as the binary number system (with `0` and `1` as digits) uniquely cover every natural number up to a power of the radix (up to `2^n` when `n` is the number of binary digits) and therefore cannot be beaten.
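The radix claim the argument leans on, that `n` binary digits cover every number below `2^n` exactly once, can be verified exhaustively for a small word length (a sketch; `n = 8` is an arbitrary choice):

```python
from itertools import product

# every string of n binary digits denotes a distinct number, and together
# they cover the range 0 .. 2^n - 1 exactly once: a radix has no duplicates
n = 8
values = [int(''.join(bits), 2) for bits in product('01', repeat=n)]
assert sorted(values) == list(range(2 ** n))
print(len(values), "strings for", len(set(values)), "distinct numbers")
```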

For practical purposes the *binary star* system (with `1` and `*` in between) does pretty well on average, while at the same time it allows for a selection of extremely large numbers to be concisely written.

This is just one of many possible algorithms of course, but *star bits* have a natural feel and can be extended with negative `-` units to a *triple star* system to express all finitely generated complex and super-complex numbers.

#### §2.4.2. Black number holes

The larger the numbers we'll come up with, the bigger the holes in between where random numbers hide that cannot be expressed, because we lack the physical resources.
Even if all the quantum information in our universe could be deployed
to write a single integer with a single arithmetical algorithm,
most Big integers would escape the net.

The image this conjures up in the mind of the observer is that of an
*expanding mathematical universe of galaxies of numbers, surrounded and dominated
by dark matter which randomly falls down from infinity…*

The earthly boundary for random information holes is at present given by Google's server size and time.

Google's current `10^18` bits will have to miss almost all integers smaller than a googolplex `10^googol = 10^(10^100)` and larger than `(10^100)^(10^32)` this year, no matter how sophisticated their algorithms.
Here's why:

Define a finite algorithm to be a text (containing expressions with variables), which is theoretically reducible to a single number when *non-random* values are substituted for each variable.
Expressions are theoretically reducible when we may work them out instantly in a hypothetical universe with *infinite resources*.

In a physical space `s` an algorithm can be applied `t` times, but in a notational universe the time factor is part of the number `p` of available places, so that `s×t = p`.

Let a notational universe with `k` characters and `p` places,
describe a *finite algorithm*.

With `k` characters on `a` places,
no more than `k^a` different algorithms can be defined.

The *resolution* of a number system with `k` characters is never better than radix `k`, so in `p-a` places maximally `k^(p-a)` *unique* non-random values can be expressed.

If all the algorithmic variables are filled in, the expression is reduced to one number, picked out of a total of *maximally* `k^(p-a)×k^a = k^p` unique natural numbers.

To avoid doubles altogether, radix `k` can create *unique* (but smaller) numbers directly and without the fuss.
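The counting step reduces to one line of arithmetic: however the `p` places are split between algorithm text (`a` places) and variable values (`p-a` places), the product of the two counts stays `k^p` (a sketch; `k=3`, `p=10` are arbitrary small choices):

```python
# at most k^a algorithm texts times k^(p-a) value fillings,
# so the number of distinct outcomes is capped independent of the split
k, p = 3, 10
for a in range(p + 1):
    assert k ** a * k ** (p - a) == k ** p
print(k ** p)   # -> 59049
```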

State the *Big number hole* theorem of algorithmic oblivion.

Given free resources of `k` characters and `p` places, and a Big number `m>k^p` together with its algorithmic successor `n`, then the best estimate `t` of the total of inexpressible integers hiding in holes between `m` and `n` is *t = n-m-k^p ~ n*

To illustrate the theory: with `10^90` bits you can naturally express any binary number below `2^2^299`.
Suppose you use the first bit as a switch to a Big number algorithm, then you lose *half* your options `2^(2^299-1)` and it is too hard to make up for that…
If there are two choices in our universe, it is that expensive to pick one to apply.
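The `2^2^299` bound is a quick base-conversion check: `10^90` bits amount to about `2^299` binary places, so the largest plain binary numeral stays just below `2^(2^299)` (a sketch of the arithmetic only):

```python
from math import log2

# 10^90 bits give 90 * log2(10) ~ 299 binary places
places = 90 * log2(10)
print(round(places))   # -> 299
```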

<!-- And that is why true Saints refrain from choosing – they follow the great *Tao*, walking the middle path… -->

At such Big heights dwell number sequences,
packed with dazzling amounts of digital information –
which ordinary man can never touch, for he is only allowed to express the
very *simplest* of the words that go beyond this world.

This reveals the limitations of the operatorial methods you'll meet in this book –
given the resources of our universe the Big numbers we can produce
are nothing but tiny specks of extremely *low information*.
A sobering thought.