Date: 9 April 2009
Motto: Invito Patre Numeri Verso
From the days of Egyptian hieroglyphs to Indian decimals to 16th-century exponentiation, our ability to express numbers concisely has evolved:
7555544444444333332222222111111 => 1048576 => 2^20
We have defined a super-exponential function bigE.
The Ackermann numbers are then expressed in bigE as
Ackermann(a) = E(a,a,a)
Numbers written in Conway's "chained→arrow→notation"
are comparable to those of bigE with one row of parameters.
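The text's exact definition of bigE is not reproduced in this excerpt, so the following is only a plausible sketch: it assumes E(a,b,1) = a^b and that each higher third parameter iterates the level below b times (as the later remark that "the 3rd parameter counts the number of operators" suggests). The base cases here are our assumptions.

```python
def E(a, b, c=1):
    """Sketch of a super-exponential function:
    E(a,b,1) = a**b, and each higher c iterates the level below, b times."""
    if c == 1:
        return a ** b          # level 1: plain exponentiation
    if b == 1:
        return a               # a single iteration is just a
    return E(a, E(a, b - 1, c), c - 1)

def ackermann_number(a):
    # Ackermann(a) = E(a,a,a) as stated in the text
    return E(a, a, a)

print(E(2, 3, 1))  # 8
print(E(2, 3, 2))  # 2**(2**2) = 16
```

Already E(3,3,3) (the third Ackermann number under this reading) is a power tower far beyond practical evaluation, which is the point of such notations.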
Our objective is to create the fastest 'natural' algorithm for the construction of big numbers. The only hard criterion is reducibility.
C as well as bigE has some drawbacks which can still be improved upon;
F, for example, does a pretty good job.
This last function is definitely the fastest, because higher parameters (on the right) have the most effect, and the substituted values (which have the biggest effect) nibble off ones on the lower (left) end.
In the present article on bigI and bigU notations, instead of substituting parameters, we go one dimension deeper and substitute the entities 1 that make up those numbers.
Introduce the concepts of an entity one and a natural number as a row of ones.
The negative entity
- = -1
can be used in operator functions
to create new sets of countable numbers such as fractions and algebraic complex numbers.
We won't go into that here, this article is about big natural numbers.
Define a meta-notation for repetition.
Add two numbers by placing series of ones next to each other.
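The row-of-ones representation and its addition rule can be sketched directly (the helper names are ours; the repetition meta-notation itself, e.g. a form like 1...[1#n], is not modeled here):

```python
def unary(n):
    # a natural number n written as a row of n ones
    return "1" * n

def add(x, y):
    # addition: place the two series of ones next to each other
    return x + y

print(add(unary(3), unary(2)))  # 11111
```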
Introduce the concept of substitution of entities within natural numbers.
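A minimal sketch of entity substitution under these assumptions (function name ours): replace every entity 1 of a unary number by another row of ones. Note that substituting one number into another this way reproduces multiplication, which hints at why substitution is such a strong operation.

```python
def substitute(number, replacement):
    # replace every entity '1' in a unary number by the replacement row
    return number.replace("1", replacement)

a, b = "111", "11"          # 3 and 2 as rows of ones
print(substitute(a, b))     # 111111 : six ones, i.e. 3 * 2
```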
Introduce the concept of a superator ; that sits between the parameters of operator functions.
Generally, to reduce an expression of bigI to a natural number, we repeatedly count down z1 by one, and at each countdown substitute all entities within the parameters with a joker, which is based on the former largest number X;z and on the dimensionality of the expression.
We mention two variations for initial jokers J here, and apply the second variation in the following formula.
J = J0 can be derived from an expression X;z simply by removing all its superators, leaving only ones. Though this looks contrived, substitutions are possible without separate calculations (or inserting parentheses).
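Under the stated rule for this first variation — remove all superators from the expression, leaving only ones — a sketch is straightforward (it assumes superators are written with the single character ';'; multi-character superator forms like ;... would strip the same way character by character):

```python
def initial_joker(expression):
    # variation I: strip every superator ';', leaving only the ones
    return expression.replace(";", "")

print(initial_joker("111;11;1"))  # 111111
```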
J = J0 = (X;z) is reduced to a natural number before substitution (or remains in parentheses until later). This J is a 'one step smaller' number for expressions of a single parameter row.
We describe four variations of joker substitution in bigI here, and apply the second.
Variations III. and IV. work with direct substitution – replacing entities with an unresolved subsentence
Jk can be reduced to natural number before substitution, and then simply added.
Jk are reduced to natural number (in theory), followed by superators
Jk may remain an array of parameters
J, followed by superators
(Jk and a virtual closing parenthesis at the end:
Enter the second dimension of bigI's parameter array
and expand previous parameter rows
by replacing entities
1 in previous rows subsequently
which represents a row of
Superator operations are resolved from right to left.
We'll evaluate expressions by rule of maximal precedence,
where right operands separately address each entity in every left operand (not in parentheses) up to the start.
The formula for bigI up to the second dimension follows, separately substituting every parameter.
We formulate two variations of joker array construction by direct substitution here, and use the first for bigI.
(Natural number substitution of previous jokers replaces = in the formula below, but that would slow down the algorithm.)
n-dimensional arrays, where all inner dimension lengths are equal to J, creating a giant dimension box.
Jm1 := Jm;...... [;#m1 Jm#J]
n-dimensional arrays, where dimension lengths are repeated by the dimension's value. Their shape gets elongated in the higher dimensions — like a very long brush stroke.
Jm1 := Jm;...... [;#m1 Jm#Jm]
As an example we enter the third dimension of bigI and expand previous parameters by replacing every entity
which represents a square of
Choosing substitution variation II. or III. above, we use jokers
to substitute entities
k are put in between.
The principle applied here is to replace the virtual superators
0 in between entities.
Then also every superator
;...[;#m 0<m<k] in between parameters
should be replaced by a superator of the appropriate maximal length
Abiding by the principle of substitution variation I. we could have kept the existing superators as is.
create extradimensional bigI functions of type
which are constructed similarly to the typeless bigI above.
Apply the following joker construction principles to substitution variations I., II., or IV. (we apply II.).
For variation III. to be possible without separate calculations (or inserting parentheses) the substitution jokers
must remain without prefixed type-superators.
;...1X;z1 [;#p] has an initial joker
J = J0 = ;...1X;z [;#p]
Jn = ;...Jm1 [;#p] with each Jm1 := Jm;...... [;#m1 Jm#J m<n]
These jokers are to be substituted into bigI expressions, as shown in type-reduction and the general two parameter case.
The evaluation definition in the previous chapter explains the rules to reduce all typed bigI expressions to natural number.
The bigI and bigU numbers in the previous chapters are written using only two characters.
The algorithms we've used to reduce them to single character
1... format are about the fastest we can get given certain esthetic constraints such as algorithmic simplicity.
The whole digit range of binary number notation is covered, introducing the concepts of parameter rows and dimensions and types to illustrate every possible ordering of the number character 1 and its superator ;.
To move one step further and "jump out of the box, into the ocean", we need to introduce a third mathematical character. This new character would typically redefine the strength of the previous two characters, depending on their mutual position in the sequence, and thereby increase the speed of the total algorithm. And similarly to
;... (dimensions and types), multiples of this new character (again depending on position) would increase inherent character strength dramatically.
It would take a lot of effort on a mathematician's part to exploit all the possibilities an extra character gives maximally. In combination with previous characters – how many fundamentally different character sequences are there? What is their natural order? How do we define these sequences to produce the biggest numbers? How do we keep it simple and continue in a certain style – the natural expansion of bigI or bigU (or any other two character algorithm)? Is such a basic style systematically definable or is it a question of trial and error?
After 3 characters comes a 4th. Of course any number of extra characters can be introduced to define faster algorithms to create bigger countable numbers. And every next character will in principle count (or boost the length of) series of previous characters a magnitude faster than previous characters can (applied to themselves).
But you can't have a character to express the length of your list of characters directly – such a character counter cannot be designated with a character from the same character list – logically it would have to qualify as a meta-character. From then on you're talking in a higher order language. Next thing you're introducing meta-characters to count previous meta-characters, creating levels of meta-characters which you can count with a meta-level counter, for which you need... a meta-meta character. This sequence of meta-meta-metas never stops, and the order of your language is lost in oblivion.
Imagine a meta-function of bigX which tells us what the biggest number is we can express in bigX (e.g. bigU) with a given number of places and characters. Such a maximum function may remind you of the busy beaver, a Turing machine that starts on an empty tape, runs as long as possible, and eventually halts. The maximal number of characters an n-state Turing machine can theoretically write on tape is called its busy beaver number (a function of n). Those numbers have been proved to be non-computable.
The biggest number expressible in bigU with
2 mathematical characters and a+3 places is
this is proven up to extradimensional vana in the next chapter's box in case
(note that wana require 1 digit less).
Following this example we can define what the maximum numbers expressible in bigU with arbitrary numbers of characters look like, even though we still haven't got a clue what the new characters do and what the best algorithms are to reduce such expressions (and whether
m-character algorithms can be proved to be systematically definable).
It feels like a constructive step to state beforehand that, in a bigU notation deploying
m characters fully and the latest character number
m written as
the maximum number expressible on
You might like to drop the "number of characters" parameter and make your max-functions solely dependent on the "number of places", simply because you like single parameter functions. There's a binary workaround for this. We can express our mathematical alphabet in binary bits, similar to a bit-expression in a computer, where
m=2^7 ASCII characters can be expressed in 7 bits.
Suppose the successive versions of bigU always work with binary alphabets and you can employ any number of mathematical characters you like; then the numbers of characters we'd best use to create the biggest numbers are powers of two (else we have to leave character bits undefined).
Now, because there's always a trade-off between character bits and places, fewer places mean there is not much room for alphabets that write advanced characters with a lot of bits (if you want big numbers).
n=0    | m=1             | 111                              | maximum number for a<4
n=1    | m=2             | (000)0111                        | max for 3<a<8
n=11   | 2<m<5           | (00)00101010                     | max for 7<a<12
n=111  | 4<m<9           | 000100100100                     | max for 11<a<16
n>11   | 2^(n-1)<m<2^n+1 | 0...10......[0#n 0#n- 10...#111] | max for n*4-1<a<(n+1)*4
Because of such binary trade-offs, a maximum function of bigX can be a function solely of the available number of places.
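The information side of this trade-off can be checked numerically (the bit budget B and the helper name are our assumptions): with B bits in total, an alphabet of 2**n characters leaves B//n places. Whenever n divides B the count of distinct strings is the same 2**B, while a non-dividing n leaves character bits undefined and loses strings.

```python
def distinct_strings(bits, n):
    # alphabet of 2**n characters, each costing n bits,
    # fits bits // n places within the budget
    places = bits // n
    return (2 ** n) ** places

B = 12
for n in (1, 2, 3, 4, 6):
    print(n, distinct_strings(B, n))   # all 4096 while n divides 12
print(distinct_strings(B, 5))          # 1024: two places, two bits wasted
```

So raw string counts don't improve with bigger alphabets; what improves is what the reduction algorithm can make each extra character mean.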
When bigX enters a new cycle, it is succeeded by a function bigXX whose power is comparable to the maximum functions described above. After the first parameter every bigXX will get a second, counting the first, etc.
Parameters, rows, dimensions, types and optionally characters beyond two – if the corresponding algorithms can be systematically defined for every next character – all these constructs are reducible by the same algorithmic model that applies to the original function bigX.
When this next cycle is completed a new maximum function bigXXX of bigXX follows the same path, etc. Of course, the many characters implied by a character counting cycle are not required to describe the cycling model itself.
Further on we start to count the number of X's in bigX...[X#a], as the primary cycles for a cycle function of bigX, a meta-meta function so to speak, which again is reducible by the same algorithmic blueprint we originally defined for bigX.
If the functions counted by X's are the first meta-level functions and cycles the second (meta-meta), many more meta-level functions will follow. And irrespective of whether all those different levels can be represented as parameters in a comprehensive single bigX-style meta-level counting function, all levels can at least be counted, so the numbers they create will be bigger than ever.
Similar cycles lie at the heart of what the introduction of a new character means (in fact we cycled from one parameter to multiple dimensions before). Only when a 3-character cycle mechanism is fully exhausted does a 4th-character cycle come into question. And only if every next character is defined (the construction of algorithms for cycling it) can there be meta-cycling over the characters as a whole. And after that meta-meta-cycling and the counting of metas, it never stops...
For an algorithm there are
principles and disciples.
Principles are the broad concepts that form algorithms, and disciples are number expressions that can be reduced by those algorithms. Because anything in nature can be represented fully in a binary series and eventually translated to a row of ones, everyone is a disciple.
More and deeper principles allow us to create bigger numbers. The way to invent principles is just to start and see what happens. How about the principle that everything we encounter twice is countable? It's a question of recognizing the one and the two.
Somebody repeated additions (multiplication ×). Somebody else repeated the repetitions themselves (exponentiation ^). Later it appeared those levels of repetition are countable (super-exponentiation E(a,b,c)), where the 3rd parameter counts the number of operators. Inspired by this, Conway introduced a function where the parameter row could go on indefinitely. Then some big number enthusiasts realised the length of parameter rows is countable inside a multidimensional parameter array.
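Conway's chained arrow notation mentioned here has standard, well-documented reduction rules; a sketch for small values (the function name chain is ours):

```python
def chain(*c):
    """Evaluate a Conway chain for small inputs.
    Rules: a lone number is itself; a->b = a**b; trailing 1s drop;
    X->1->q = X; and X->p->q = X->(X->(p-1)->q)->(q-1) for p,q > 1."""
    c = list(c)
    if len(c) == 1:
        return c[0]
    if c[-1] == 1:              # X -> 1  =  X
        return chain(*c[:-1])
    if len(c) == 2:             # a -> b  =  a ** b
        return c[0] ** c[1]
    if c[-2] == 1:              # X -> 1 -> q  =  X
        return chain(*c[:-2])
    head, p, q = c[:-2], c[-2], c[-1]
    return chain(*head, chain(*head, p - 1, q), q - 1)

print(chain(3, 3, 2))  # 3^3^3 = 7625597484987
```

Anything much larger than these toy inputs (e.g. 3→3→3) explodes far past what can be evaluated, which is exactly the behaviour such notations are built for.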
; to separate dimensions and took it as just a 2nd character after which come many more... But first I had to fill my super-binary notation to the brim with extradimensional types that expand (count) dimensions (superators).
There are four principles at work in the definition of algorithms that write big numbers. Except for a further elaboration on the 4th big principle – reminiscent of Rinzai's guest sees guest – we've met them before.
- (minus one) or
; or a character counter or meta-cycle counter
z is reduced from
v, which is a Guest expression derived directly from (and smaller than) the Host, by counting down a parameter so that reducibility is assured. "Guest invitation" in bigU is maximal, because its initial vana counts down the most insignificant Host parameter by
Vm can be made multidimensional or extradimensional, etc. so long as their substitution position in the Host expression preserves reducibility.
My experimental wana nesting add-ons are not as fast as character introduction by maximum cycles (see previous chapter), and perhaps overly complicated (see example box).
One cannot count up to infinity, but
Georg Cantor in 1874
invented two different types of infinity: a countable infinity
(counting all natural numbers), which is bigger than any countable number,
and an uncountable infinity (counting all real numbers),
which is bigger than any number that counts "deeper than countable" numbers.
Cantor couldn't find the connection between these two infinities by counting upwards with his countable infinity.
So far all the numbers we've created with our algorithms for big numbers and
1 are countable and smaller than infinity.
Only by postulating a first uncountable number
and using it as an entity to seed a super-exponential function bigE
or a super-factorial function bigU,
we can count further into the infinite realm,
the same old way Cantor progressed with his ordinal numbers.
Though Cantor showed his famous
r=2, a special case of
the positioning of numbers in between earlier ones can be much more intricate than Cantor suggested, if we'd exploit the whole parameter range of a big number algorithm.
Deeper and down the reals, the cardinality of the continuum must be an order higher than that of the discrete infinite numbers we create by means of the uncountable entity
(and countable entities such as
The only way to make Cantor's uncountable infinity
countable is by postulating a higher type of infinity – an ordering of heavens so to speak.
Starting from the countable finity
and the countable infinity
the next-countable infinity
can be as big as the set
of ω-deep numbers created by bigE or by any bigX... (inverse) iteration of any number of finite or countably infinite entities.
These and the unavoidable next 'Cantor-uncountable'
infinities we can count with infinite number functions of general type
bigXΩ and induction.
This last expression models the first row of parameters of bigUΩ after bigU.
We can continue that model with the same algorithms for dimensions, types and cycles as for bigU, reducing the bigUΩ
parameter array to a natural number
n as usual and finally to an infinity
in an effort to write the UΩ travel-guide for the realm of meta-countable infinities.
When we wake up the next morning the U-subscript
has started franchising the business model of bigU:
As the following parameters of the U-subscripted function
pass through dimensions and cycles, a new infinite number can be postulated that lies beyond its reach. This next order of infinity has its own generator function, which means in effect that
Ω-subscript depths for higher order infinity generator functions has its appeal, but don't think bigX-franchising will end with an infinite number that lies beyond the reach of
That's just a next step "out of the box and into the ocean".
But maybe, if God created
from the void and Cantor did it for natural numbers,
we are now in a position to propose an infinite number beyond compare – unreachable by any algorithm of any number generator
and infinity generator
It lies wholly beyond the methods of induction on ones and omegas and of such induction on models, and is therefore beyond compare.
Perhaps you'd like to call that number
ψ (psi), I don't mind about names.
could be the first zero reincarnate, and
a reincarnation of reincarnations –
the "biggest number" of our time, because I halted.
But what if the finest mathematicians of all time would reincarnate in a big bang bubble with infinite resources and the sole purpose of inventing big numbers, when would they halt? Call that number St. Peter's Subway.