“On the shoulders of giant numbers”
Mathematical Origin of the Real Era
bigΨ.I. Number operations
Ψ.3. Chained arrows versus subscripting
# 3.1. Subscript mixing
§3.1.1. Mixed majority arrows?
Our formula for ^_{R;} subscripted arrows did not cover mixed subscripts ^_{m;}^_{n;} {m≠n}.
For a sequence of majority arrows with competing unequal subscripts the question is
which one to evaluate next.
We solve this issue by extending the fixity rules for arrow operations –
namely right associative
majority precedence
– to include subscripts.
Majority precedence will apply externally between two subscripted operations,
as well as internally in a single operation, where it chooses
the reduction to apply first.
The right associative order of evaluation is only decisive when subscripts are of equal size; in other cases size determines which operator is evaluated first.
We do not use brackets in subscripts.
Subscripts live in a complicated multicultural environment, and the benefit
from wielding such an expressive tool is increased notational
resolution
(more numbers can be expressed).
We won't cover all the combinations,
but explain the most essential mixed arrows in a few examples.
The smaller arrow in the operator reminds us of the postponed backslashed operation – it waits its turn.
a^_{m1;}^_{n;}b {0<n<m1} = a^_{n;}^_{m1;}b = a^_{n;}^_{m;}... {a#b}
After a certain reduction step the companion arrows ^_{n}, put on hold because of smaller size, will merge with the new, equally subscripted operators ^_{m;} {m=n}.
Because the mixing of operators resolved by majority precedence doesn't add significantly to the size of the resulting numbers (compared to the value of an operator's coefficients), it should be treated as an obscure variant. In our book of Big numbers the defining rules never allow for mixing of majority operators.
§3.1.2. Mixed minority stars
Also in the definition of *_{R;} subscripted star operations we extend their fixity rules – namely right associative minority precedence. This decides which reduction step to apply to a combination of mixed star subscripts.
Subscript inequality unfolds as a great tool in the construction of Big numbers. Given our observation that mixing of unequal arrow subscripts ^_{R} is insignificant, we'll make a bigO comparison between the two subscript systems of minority stars and majority arrows. The sign <~ means the left expression is algorithmically close(st!?) to the expression on the right, yet smaller.
a*_{1;}*b = a*_{1;}.. {a#b} <~ a^_{1;}.. {a#b} = a^_{1;}^_{1;}b
a*_{1;}**b = a*_{1;}*.. {a#b} <~ a^_{1;}^_{1;}.. {a#b} = a^_{1;}^_{1;}^_{1;}b
a*_{1;}*_{1;}b = a*_{1;}*^{..}b {*#b} <~ a^_{1;}^{..}b {^_{1;}#b1} <~ a^_{2;}(b+1)
a*_{1;}*_{1;}*b = a*_{1;}*_{1;}.. {a#b} <~ (a+1)^_{2;}.. {(a+1)#b} = (a+1)^_{2;}^_{2;}b
a*_{1;}*_{1;}**b = a*_{1;}*_{1;}*.. {a#b} <~ (a+1)^_{2;}^_{2;}^_{2;}b
a*_{1;}*_{1;}*_{1;}b = a*_{1;}*_{1;}*^{..}b {*#b} <~ (a+1)^_{2;}^{..}b {^_{2;}#b2} <~ a^_{3;}(b+2)
Star subscripts are way ahead – when they put value d=2, arrows need a 2nd parameter e. And when stars append the next iterator e, arrow subscripts come to the end of their first row.
a*_{2;}*b = a*_{2;}.. {a#b} <~ (a+1)^_{1,1;}.. {(a+1)#b} = (a+1)^_{1,1;}^_{1,1;}b
a*_{2;}*_{1;}b = a*_{2;}*^{..}b {*#b} <~ (a+1)^_{1,1;}^{..}b {^_{1,1;}#b1} <~ a^_{2,1;}(b+1)
a*_{2;}*_{2;}b = a*_{2;}*_{1;}^{..}b {*_{1;}#b} <~ a^_{b,1;}(b+b) <~ a^_{1,2;}(b+1)
a*_{3;}b = a*_{2;}^{..}b {*_{2;}#b} <~ a^_{1,b;}(b+1) <~ a^_{1,1,1;}(b+1)
a*_{1,1;}b = a*_{b;}^{..}b {*_{b;}#b} = a*_{b1;}b <~ a^_{1,..;}(b+1) {_{1}#b1}
The progression of mixed stars versus unmixed arrows
gives us a tool to study similar systems at work on different levels.
Minority stars are comparable to
majority arrows,
yet the stars as defined by the first row of parameters in
bigI
travel on a higher function level past the 4th parameter
(compared to the arrows of bigE).
The general definition of subscripted minority star operations
has a somewhat broader scope.

aS*^{..}b1 {*#c1} =
aS*^{..}(aS*^{..}b) {*#c *#c1}
^{ } == aS*^{.. }... {*#c a#b1}
where S = *_{Tj;;}^{.. }.:. {*_{Tj;;}#k_{j;} *_{Tj;;}^{..}#m}
where T_{j;} = t_{jp},t_{jpi},.. {t_{jpi}#n t_{jq}≠0} and T_{j;} > T_{jd;} {d>0} 
aS*_{1,..,1R;}b {1#+k} =
aS*_{b,..,R;}^{...}b
{b#k *_{b,..,R;}#b}
same S with T_{j;} > 1,..,1R 
aS*_{1R;}^{...}b
{*_{1R;}#c1} =
aS*_{1R;}^{...}*_{R;}^{...}b
{*_{1R;}#c *_{R;}#b}
same S with T_{j;} > 1R
The order of star subscripts *_{T;}
is quite
intuitive,
even if the conditional comparisons T > R
are difficult to define exactly. In the
reduction train
of bigI to natural number this is usually not a problem –
when we start from the evaluation of an expression
a*_{R;}^{..}b {*#c}
all subsequent mixed subscripts will satisfy these conditions.
# 3.2. Pluses in row context
Sunfaced buddhas, moonfaced buddhas.
The three sovereigns and the five emperors. How pale they look…
For 20 years I've had bitter experiences.
I have descended into the Green Dragon's Cave. For whom?
I cannot tell the depth of it.
Anybody here with a clear eye? Do not be careless about this!
– Matsu in The Blue Cliff Records 3.
§3.2.1. Number plus subscripts
The venerable plus character snatched new meaning when the left associative unary operator + of bigA was defined with a single subscript a+_{c;}^{...} {+_{c;}#b} in chapter 2.7. Left unary pluses are ruled by very simple concepts. As we saw in the 0th dimensional case + equals 1. Such pluses function as subscriptable ones.
Another way to look at this + is as an opening sign for a deeper nesting, virtually delimited by a closing semicolon, as in +_{1..;} {1#c}, where if c=0 the initial +_{;} is (reduced to) the unit 1. Whichever view of + you take – operator, number or bracket – its reduction rules and results stay the same.
 a+_{0;}Z = a+Z = a1Z (exchange)
 a+_{c1;}Z = a+_{c;}^{...}Z {+_{c;}#a}
Brackets () aren't necessary in the reduction process of left unary pluses, because the leftmost operators +_{c;} right after the number operand a are evaluated first. But brackets are allowed to help write out bigA expressions. Now try to fit a 4th coefficient d into this evaluation system, modelling it after parameter d of majority arrows.
 a+_{++;;} = a+_{+1;;} = a+_{1,1;} = a+_{a1;} = a+_{a;}^{...} {+_{a;}#a}
 a+_{++;;}^{...} {+_{++;;}#b1} = (a+_{1,1;})+_{1,1;}^{...} {+_{1,1;}#b}
 a+_{++;...;} {+_{+;}#c1} = a+_{c1,1;} = a+_{c,1;}^{...} {+_{c,1;}#a}
 a+_{++...;;} {+#d1} = a+_{1,d1;} = a+_{a1,d;} = a+_{a,d;}^{...} {+_{a,d;}#a}
And continue to a row of equally subscripted
+_{R;}
operators,
comparable to a row of
majority arrows.
By the last rule below, with k=0,
expansion from single to multiple pluses is expressed.
 a+_{1,..;} {1#k1} = a+_{a,..;}^{...} {a#k +_{a,..;}#a}
 a+_{1R;} = a+_{R;}^{...} {+_{R;}#a}
 a+_{R;}^{...} {+_{R;}#b1} = (a+_{R;})+_{R;}^{...} {+_{R;}#b} (left associativity)
 a+_{1,..,1R;} {1#+k} = a+_{a,..,R;}^{...} {a#k +_{a,..,R;}#a} (upload)
The upload axiom maximizes the number k of subsequences 1, because its metarepetition sign is greedy. In case _{R} is empty (0) we must apply the generic postcountdown axiom +_{A,;} = +_{A;} for upload to work. With exchange and upload for axioms, the only possible way to reduce expressions with pluses is left associative. In fact if you're happy to keep the resulting amount in + currency, then the exchange to 1.. type numbers isn't necessary, not even for operand a.
All in all it's remarkable that a single nongeneric rewrite axiom is enough to create some uncanny large numbers. The example below shows how a computer program that writes to a standard line can keep track of distinctive nesting levels.
11111;;; = 1111;1;1;; = 1111;1;;11;1;; = 1111;;11;;11;1;; = 111111;;11;1;; = 1111111111;1;; = 1... {1#2048}
+++_{++} = 2+_{+1;;} = 2+_{3;} = 2+_{2;}+_{2;} = 2+_{1;}+_{1;}+_{2;}
= 2+++_{1;}+_{2;} = 4+_{1;}+_{2;} = 4+++++_{2;} = 8+_{2;}
= 8+_{1;}... {+_{1;}#8} = 8*2^8 = 2048
So the constructs 1_{;} and +_{;} are isomorphic following the rules of operatorial bigA. Though subscriptable + seems to function as a left unary operator, in the left associative context of words these can better be characterized as unaries (just that) or as extendible units 1.
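These reduction rules are concrete enough to run. Below is a minimal Python sketch (the function name plus_sub and the integer encoding are ours, not part of bigA proper) of the left associative evaluation of a+_{c;}: the exchange rule cashes +_{0;} in for a unit, and the second rule replaces +_{c1;} by a copies of +_{c;}, applied leftmost first.

```python
def plus_sub(a, c):
    """Value of a+_{c;} under the rules
    a+_{0;}Z = a1Z (exchange) and
    a+_{c1;}Z = a+_{c;}^{...}Z {+_{c;}#a} (left associative)."""
    if c == 0:
        return a + 1
    n = a
    for _ in range(a):          # a copies of +_{c;}, leftmost evaluated first
        n = plus_sub(n, c - 1)  # each application works on the running value
    return n

print(plus_sub(2, 3))   # +++_{++} = 2+_{3;} = 2048, as in the derivation above
print(plus_sub(8, 2))   # 8+_{2;} = 8*2^8 = 2048
```

The trace of plus_sub(2, 3) passes through the same stations as the derivation above: 2+_{2;} = 8, then 8+_{2;} = 2048.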
§3.2.2. Can pluses be mixed?
The question arises whether we can successfully mix unequal subscripts into the general definition of these unary plus operators. Does the possibility of a+_{m}+_{n} {m≠n} work out naturally into a tremendously fast operator system, parallel to the way mixed subscripts for minority stars followed up on superpowers? Because multiple pluses are left associative, as determined above, the answer seems to be no.
And indeed, if mixed pluses were ruled by minority precedence
we could reduce such a minority pluses system by simple recursion,
mostly expanding and eventually cashing in pluses from the left.
But when we agree to apply majority precedence (with the operator sign plux ×), a larger subscripted ×_{R;} operator can extend the range from operand a to include all the smaller subscripted ×_{R';} operators in between.
Given an expression with mixed pluxes,
the largest subscripted (leftmost) operator is evaluated first
and all the mathematical terms on its left
become subject to repetition (by character duplication).
Because a plux has a single left operand there's a nasty problem –
for the greatest impact the iterator value to count these repetitions
has to be derived from the repeated sequence itself.
To achieve this the left sequence has to be reduced to natural number first.
And then the original unreduced left characters are to be iterated over.
This cannot happen in place, but requires two separate programme states.
Enumeration of states of computation is a complication we will not dive into in this article.
Our focus is on
general recursion
and not
computability,
though we'll miss out on some truly Big numbers.
With unary operators mixed sequences such as in the general formula of mixed
minority stars,
if possible, will have to occupy some secondary space for evaluation.
For instance, a plux at the right end
could still employ the empty character space on its right to perform a calculation,
but this extension of operator sequences to derive an iterator value
essentially uses temporary (nested) right operands.
These alternatives of countable programme states
and virtual extra operands are not truly what operators are about.
Such constructions are better set in the framework of functions
and their operatorial extensions in
part two.
We will forget about majority pluxes
and happily raise bigA on the foundation of
subscriptable pluses,
which look so simple but are to our surprise more powerful than the arrowheads
^..
of operatorial bigE.
This is proved in
chapter 4.2.
chapter 3.3, edit 0.X.X
# 3.3. Arrow quarters
Under 2011 Construction
§3.3.1. Repainting uparrows
The definition of Conway's chained arrows is apt to draw its own bow → skip the basic trio a→b→c = a^^{...}b {^#c} and derive superpowers directly from its axioms for the first row. That is, provided the duo is defined separately as exponentiation a→b = a^b, which slows the row expansion down by a parameter position, compared to a more rigorous application of the scheme of chained arrows.
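For reference, Conway's chained arrows themselves form a tiny rewrite system. This Python sketch (the list representation and the function name chain are ours) implements the standard rules, with the duo defined as exponentiation as noted above.

```python
def chain(c):
    """Conway chained arrows over a list of positive integers.
    Rules: a→b = a^b; X→y→1 = X→y (a trailing 1 drops);
    X→1→z = X (collapse); X→y1→z1 = X→(X→y→z1)→z."""
    if len(c) == 1:
        return c[0]
    if len(c) == 2:
        return c[0] ** c[1]
    *X, y, z = c
    if z == 1:
        return chain(X + [y])
    if y == 1:
        return chain(X)
    return chain(X + [chain(X + [y - 1, z]), z - 1])

print(chain([3, 3, 2]))   # 3→3→2 = 3^^3 = 3^27 = 7625597484987
```

Only the smallest chains can be evaluated this way, of course – the values outgrow memory almost immediately.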
We will define a right arrow function using the sign
→
as a parameter separator.
On the operator axis a new algorithm for
what we'll still call uparrows ↑
(not Knuth's, but painted anew) forms the basis.
Our notation applies the convention of majority precedence for arrow operators,
leaving out brackets when possible.
Let's walk through the different forms of uparrows,
following the trail of an expanding row of parameters to the right.

 An operator ↑^{..} {#z} = a without operands to operate upon is strange, but natural: z = 1.. {#a}
 The initial uparrows a_{;0}↑_{0;}^{..} are unary: a↑^{..} {↑#z} with z counting their series.
 Then come the still unsubscripted yet doubly iterable binary uparrows a↑^{..}y {↑#z}.
 Continue with subscripted uparrows a↑_{R;}^{..}y {↑_{R;}#z} which are unmixable, so you can either write for example a↑_{x;}↑_{x;}↑_{x;}y or a↑_{x;}↑↑y or a↑↑↑_{x;}y as {↑_{x;}#3} is what's intended.
Within a series of uparrows different subscripts will not mix – describe this property in a logical statement.
We've painted the new _{zubscripts} magenta (not red), to indicate that they shift to the right – with every final parameter z as an operator counter (instead of the fixed red c) and penultimate parameter y as the main iterator (instead of a fixed b). For instance the operation a↑_{b,x;}^{..}y {↑_{b,x;}#z} equals the expression a→b→x→y→z.
This isn't just cosmetic –
we'll position those parameters that most increase function speed
(create the Biggest numbers) to the right of function arrays.
In the chained arrows
algorithm the double iterator z
functions as operator counter
and has the greatest impact on the result, since there is no
uploading
of accumulated iteration values.
§3.3.2. Redrawing right arrows
Here we define a right arrow function with navy coloured separators → to distinguish it from chained arrows. Right arrows a_{i}→... {a_{i}#n} are roughly similar to chained arrows, but have three major differences.

 Of value importance – the a↑_{R;}^{..} operator counter z is counted down (virtually) to 0 (and dropped) instead of dropping at 1, in support of our general notion of operatorial functions.
 Of value importance – when penultimate iterator y counts down to 0 and collapses, it is revived by the pristine value of x. Only one parameter drops off, whereas chained arrows drop both last positions.
 Of parametric importance – the main axiom of the R→y→z scheme applies directly to the first two parameters a→z past the initial cases {a≠1 z≠1}.
Because a→b→c = a^^{..}b {^#c} = a^_{0;}^{..}b {^_{0;}#c} = a→0→b→c seems to make sense, in principle any parameter value zero in between could simply collapse its position. Plausibly even those at the start.
 X→0 = X
 X→0→Z = X→Z
 0→Z = Z
Helped by our collapsing insight, only two axioms – the initial and the main axiom – suffice to define the row function of right arrows, together with its zubscripted uparrow operators. If however, stubbornly, we cannot accept axioms with zero values 0→ up front as item or in between as iterator, then we must adapt our rules to count down from 1 and express the same state of affairs. We list them below.

a→1 = a↑
= a1

1→b1 = (0→b1)→b = b1→b
= 1↑^{..} {↑#b1} = b1↑^{..} {↑#b}~> 2*^{..}b3 {*#b}

a1→b1 = (a→b1)→b
= a1↑^{..} {↑#b1} = (a↑^{..})↑^{..} {↑#b1 ↑#b}~> 2*^{..}a3 {*#b}

R→1→z1 =
R→(R→0→z1)→z =
R→(R→z1)→z
{R:a}= a↑^{..}1 {↑#z1} = a↑^{..}(a↑^{..}) {↑#z ↑#z1}
{R:a→x}= a↑_{x;}^{..}1 {↑_{x;}#z1} = a↑_{x;}^{..}(a↑^{..}x) {↑_{x;}#z ↑#z1}
{R:a→Q→x}= a↑_{Q,x;}^{..}1 {↑_{Q,x;}#z1} = a↑_{Q,x;}^{..}(a↑_{Q;}^{..}x) {↑_{Q,x;}#z ↑_{Q;}#z1} 
R→y1→1 =
R→(R→y→1)
== R→(..R→1→1.) {#y#} = R→(..R→1.) {#y1#}
{R:a}= a↑y1 = a↑^{..} {↑#z z:a↑y
}
{R:a→x}= a↑_{x;}y1 = a↑^{..}x {↑#z z:a↑_{x;}y
}
{R:a→Q→x}= a↑_{Q,x;}y1 = a↑_{Q;}^{..}x {↑_{Q;}#z z:a↑_{Q,x;}y
} 
R→y1→z1 =
R→(R→y→z1)→z
== R→(..R→z1.)→z {#y1#}
{R:a}= a↑^{..}y1 {↑#z1} = a↑^{..}a↑^{..}y {↑#z ↑#z1}
{R:a→Q}= a↑_{Q;}^{..}y1 {↑_{Q;}#z1} = a↑_{Q;}^{..}a↑_{Q;}^{..}y {↑_{Q;}#z ↑_{Q;}#z1}
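The 1-counting rules above can be transcribed literally. This Python sketch (the function name arrow and the list form of a row are ours) collapses zero-valued positions first and otherwise applies the initial and main rules; values explode so fast that only the smallest inputs terminate in practice.

```python
def arrow(row):
    """Right arrows R→y→z per the 1-counting rules above.
    Zero positions collapse: X→0 = X, X→0→Z = X→Z, 0→Z = Z."""
    row = [p for p in row if p != 0]
    if not row:
        return 0
    if len(row) == 1:
        return row[0]
    if len(row) == 2:
        a, b = row
        if b == 1:
            return a + 1                            # a→1 = a↑ = a1
        if a == 1:
            return arrow([b, b - 1])                # 1→b1 = b1→b
        return arrow([arrow([a - 1, b]), b - 1])    # a1→b1 = (a→b1)→b
    *R, y, z = row
    if y == 1:
        return arrow(R + [arrow(R + [z]), z - 1])   # R→1→z1 = R→(R→z1)→z
    if z == 1:
        return arrow(R + [arrow(R + [y - 1, 1])])   # R→y1→1 = R→(R→y→1)
    return arrow(R + [arrow(R + [y - 1, z]), z - 1])  # main rule

print(arrow([4, 2]))   # a→2 = a2, so 6
print(arrow([4, 3]))   # a→3 = (0→3)(a*2), so 11
```

The two printed values agree with the closed forms derived in the expansion further down: a→2 = a2 and a→3 = 2a+3.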
The translation from these new uparrows to
right arrows doesn't look so smooth, but it's workable.
The numbers created by our right arrows →
are a single parameter Bigger than Conway's
chained arrows →
which is not terribly significant on a grand scale,
where parameter rows R
can be stretched to arbitrary length.
If you'd like to see how the above estimate a→b2 ~ 2*^{..}a2 {*#b} of two parameter superpowers was reached, follow the expansion below.

 a→2 = (a→2)→1 = (a→2)1 == (0→2)a = a2
 a→3 = (a→3)→2 = (a→3)2 == (0→3)(a*2) = (2*a2)
 a→4 = (a→4)→3 ~> (a→4)*2 == (0→4)*2^a ~ 2^a2
 a→5 = (a→5)→4 ~> 2^(a→5) == 2^^a\^(0→5) ~> 2^^a2
 a→b {a>1} = (a→b)→b ~> 2^^{..}(a→b) {^#b4} == 2*^{..}a\*^{..}(0→b) {*#b2 *#b3} ~> 2*^{..}a2 {*#b}
If we'd fill in the extreme value 1→b1 in the main rule, our approximation of the two parameter case turns out rather crude.
~> 2*^{..}b3 {*#b} ~ 2*^{..}3 {*#b} = 2*^{..}4 {*#b} => b~1
§3.3.3. Mixing down arrows
Under 2011 Construction
Initially arrow expressions are without multiple subscripted operators – the opportunity for mixing didn't occur yet – this is where uparrows and downarrows do not differ in function and result.
 a←y←z = a↓^{..}y {↓#z} = a↑^{..}y {↑#z} = a→y→z

 a←x←1←1 = a↓_{x;}1 = a←x←(a←x←0←1) = a↓^{..}x {↓#z z: a↓x} = a↑^{..}x {↑#z} = a→x→(a→x→1) = a→x→1→1 = a↑_{x;}1
 a←x←y1←1 = a↓_{x;}y1 = a←x←(a←x←y←1) = a↓^{..}x {↓#z z: a↓_{x;}y} = a↑^{..}x {↑#z} = a→x→z = a↑_{x;}y1 = a→x→y1→1
Obviously the final parameter z
stays unmixed,
because it expresses the number of possible mixes.
Less obviously the penultimate parameter y
is reserved to contain a waitor
subexpression and shouldn't be subscripted or mixed.
Follow the translation rules for revolving arrows around the clock.
 a←R←x←0←z1 = a↓_{R,x;}^{..}0 {↓_{R,x;}#z1} = a↓_{R,x;}^{..}↓_{R;}x {↓_{R,x;}#z}

 a←1←1←2 = a↓_{1;}↓_{1;}1 = a←1←(a←1←0←2)←1 = a↓_{1;}a↓_{1;}↓1 = a↑_{1;}a↑↑_{1;}1 = a→1→(a→1→1→2)→1 = a→1→2→2 = a↑↑_{1;}2
 a←x←1←2 = a↓_{x;}↓_{x;}1 = a←x←(a←x←0←2)←1 = a↓_{x;}a↓_{x;}↓x = a↑_{x;}a↑↑_{x;}x = a→x→(a→x→x→2)→1 = a→x→x1→2 = a↑↑_{x;}x1
 a←x←2←2 = a↓_{x;}↓_{x;}2 = a←x←(a←x←1←2)←1 = a↓_{x;}a↓_{x;}↓_{x;}1 = a→x→(a→x→x1→2)→1 = a→x→x2→2 = a↑↑_{x;}x2
 a←x←y1←2 = a↓_{x;}↓_{x;}y1 = a←x←(a←x←y←2)←1 = a↓_{x;}a↓_{x;}↓_{x;}y = a→x→(a→x→xy→2)→1 = a→x→xy1→2 = a↑↑_{x;}xy1
 a←x←1←3 = a↓_{x;}↓_{x;}↓_{x;}1 = a←x←(a←x←0←3)←2 = a↓_{x;}↓_{x;}a↓_{x;}↓_{x;}↓x = a→x→(a→x→xy→3)→1 = a→x→xy1→3 = a↑↑_{x;}xy1
 a←x←1←z1 = a←x←(a←x←0←z1)←z = a↓_{x;}^{..}1 {↓_{x;}#z1} = a↓_{x;}^{..}a↓_{x;}^{..}↓x {↓_{x;}#z ↓_{x;}#z} = a↑_{x;}a↑↑_{x;}x = a→x→(a→x→x→z1)→z = a→x→x1→z1 = a↑↑_{x;}x1
 a←1←y1←z1 = a↓_{1;}^{..}y1 {↓#z1} = a↓_{1;}^{..}↓^{..}y1 {↓_{1;}#z ↓#v} where v = a↓_{1;}^{..}y {↓_{1;}#z1}
 a←x1←y1←z1 = a↓_{x1;}^{..}y1 {↓#z1} = a↓_{x1;}^{..}↓_{x;}^{..}y1 {↓_{x1;}#z ↓_{x;}#v} where v = a↓_{x1;}^{..}y {↓_{x1;}#z1}
 a←R←y1←z1 = a↓_{R;}^{..}y1 {↓#z1} = a↓_{R;}^{..}↓_{R;}^{..}y1 {↓_{R;}#z ↓_{R;}#y1}
More arrow quarters round the clock.
a↓_{1;}↓^{..}b {↓#z} = a↑_{1;}^{..}b {↑_{1;}#z1}
_{ } ^{ } = a→1→b→z1 = a↑_{1;}^{..}a↑_{1;}^{..}b {↑_{1;}#z ↑_{1;}#z1}
a↓_{1;}↓_{1;}b = a←1←b←2 = a←1←(a←1←b←2)←1 = a↓_{1;}a↓_{1;}↓_{1;}b
_{ }^{ } = a←1←b←b1 = a↑_{1;}^{..}b {↑_{1;}#b1}
a←1←b←2 = a↓_{1;}↓_{1;}b = a↓_{1;}↓^{..}b {↓#b}
= a←1←b←b1 = a↑_{1;}^{..}b {↑_{1;}#b1}
# 3.4. Recursive arrow mixing
Under 2011 Construction
§3.4.1. Revolving the axis
There's a beautiful story recalled by Richard D. Gill of how John H. Conway derived a power tower of omegas ω on a small blackboard, which he rotated 90° clockwise to create a much larger infinite expression involving epsilons ε_{εε..}. Using ω→2→3 = ω^^ω = ε_{0} to express ε_{0}→2→4 is not so impressive as it seems, considering the chained row of omegas ω→.. {ω#ω} that must have been on Conway's mind when he invented his chained arrows.
We don't want to spoil Gill's story here. It gains interest when we
find a way to turn clockwise the very arrows Knuth and Conway started with!
This involves a revolution of the arrow's axis.
Of course the first turn is Conway's…
§3.4.2. The next revolution
bigΨ.II. Function sizes
part 2, Work draft
publishes: 2011
§II.0. Summary
In the chapters below you'll find a showcase of 5 different operatorials bigO* – created from their corresponding subscripted operators, which we've introduced in the first part of this paper.

 bigI from our star operations a*_{d}^{...}b {*_{d}#c}
 bigE with arrow operations a^_{d}^{...}b {^_{d}#c}
 bigA from unary pluses a+_{c}^{...} {+_{c}#b}
 bigU for the fastorial a!_{b}
 bigY's arrows a←_{d;}^{...}b {←_{d;}#c}
You've witnessed Conway's
chained arrows
in action before – this is an operatorial function in disguise –
and we'll work it over in the
first row
of bigY.
So you already have an idea of what is coming –
after the basic operations the first row of parameters will be defined
and then we start counting and iterating
over array lengths and dimensions and over
extradimensional types.
The challenge is to iterate in a smart way, to recognize and grasp each opportunity
to move the algorithm higher by feedback of iterators from down below –
thereby creating Big numbers undreamed of before.
In this 2nd part of the book the operatorials pass through 4
different levels of function classes together.
To each operatorial level we'll devote a separate chapter.

 bigO on superpower level {K.1} in the Grzegorczyk hierarchy.
 A first row of parameters on the {K.2} Ackermann-Conway level.
 A square array with the many {K.3} class functions.
 Multiple array dimensions with the {K.4} functions of bigE.
 Extradimensional parameters {K.5} covered by bigI level waitors.
Ψ.4. Superpowers of bigO
§4.0.1. Generic rules
In those days the multitude being very great, and having nothing to eat, Jesus called his disciples unto him, and saith unto them, I have compassion on the multitude, because they have now been with me three days, and have nothing to eat: And if I send them away fasting to their own houses, they will faint by the way: for divers of them came from far.
And his disciples answered him,
From whence can a man satisfy these men with bread here in the wilderness?
And he asked them, How many loaves have ye?
And they said, Seven.
And he commanded the people to sit down on the ground: and he took the seven loaves,
and gave thanks, and brake, and gave to his disciples to set before them;
and they did set them before the people.
And they had a few small fishes: and he blessed,
and commanded to set them also before them.
So they did eat, and were filled:
and they took up of the broken meat that was left seven baskets.
And they that had eaten were about four thousand: and he sent them away.
– Mark 8.
An operatorial O is a container O(T) with an array of parameters O(a,..,z), where every next term is an iteration counter, modelled after and expanding on previous terms, and based on an initial case O(a,1).
The term basic refers to the
primitive recursive
function class or the superpower level of construction.
A related subclass is
Kalmár's
elementary,
which covers all functions up to iterations of multiplication.
To evaluate an expression of an operatorial completely, it is reduced to the smallest possible number of operations on units. Without other numeric units than 1 the final reduction of an operatorial must always deliver a natural number. This is what's meant by reducibility, and every new rule should preserve it!
Creating operatorials for Big numbers is more of an art than a science.
Our bigE should capture all recursive functions under the hood
(if yours accelerates faster we'll catch up later),
by extrapolating the principles of
natural operators
to higher parameter dimensions and beyond.
We'd like simplicity to be our guide, but this is alas not trivial.
Contrary to popular belief it is not the higher
which rules the lower from above.
Operatorial higher constructs facilitate the lowest to iterate over
all other constructs below the higher.
In a fast algorithm the highest intervention –
characterized by z
countdown – will happen very very rarely.
The postreduction and postcountdown destructors
lay the foundation for all operatorials.
Both rules are marked here in the generic context for bigE.
The postreduction axiom maps a single item (such as a natural number)
to itself.
Postreduction in the context of bigI
can also be incorporated by its
addition rule.
The generic postcountdown axiom is most fundamental for bigO
as it puts a lower limit to all iterations,
but it can perhaps be overruled in a special case of z=0 such as via the
substitution definition
of
multiplication
in bigE.

 O(a) = a, so O() = 0 if a = 1.. {1#a}
 O(X,) = O(X), so O(X,0) = O(X)
§4.0.2. Basic bigE
Traditionally multiplication is defined by repeated addition – recursively in two steps. Notice that the initial step of unity is already contained via the two generic axioms E(a,0) = a, so the zeroarrow definition of bigE is not a complete fit, because it does not cover multiplication by zero, which is unreachable as the iterator drops at b=1.
 E(a,1) = E(a) = a

E(a,b1) =
E(a E(a,b)) {b>0}
== E(a... E(a,1)) {a#b} = E(a...) {a#b1}
= a+... {a#b1} = a(b+1) {0_{^}} = a*b1 {0_{*}}
You might like to leave multiplication untouched; there's something to say for that – just drop the single separator and collect the result E(a,b) = ab = a*b.
For bigE we are tempted to choose the substitution definition of multiplication, where the value b=0 is allowed to overrule the generic postcountdown axiom for once. Unusual here is that a functions as an iterator over b, while bigE's version of the lonely separator completely replaces all units 1 in a by the value b.
 E(a,b) = E(1...) {1#a 1:b} = b+... {b#a} = ba {0_{^}} = b*a
Now how is this suddenly possible? As the last single separator is dropped, the algorithm of bigE should select the highest remaining separators to grab its parameters from. At the lowest level dwell the empty separators which add the unit parameters that compose numbers – so that finger counts 1 become available now… Contrived? Naturally an iterator countdown is a zerolevel process too – dropping what seems to be a 0 number of separators , in between z1 > z.
Consider this a context where zero space 0_{,} = 0_{*} stands for the empty operator of addition.
We can take the hairsplitting yolk to its amniotic extreme and define addition in bigE by assuming separators of size 0 and then iterating over the two virtually separated values of a and b. The result is the same as without the empty separator, but that's what you always have with addition.

E(a,^{...}b1) {,#0} =
E(a1,^{...}b) {,#0}
^{ } == E(a1...,^{...}1) {1#b ,#0}
^{ } = E(a1...) {1#b1}
^{ } = E(ab1) = a+b+1 = ab1 {0_{*}}
Recursive scheme Nº0 for bigE with 1 parameter.
Basic recursive definition of bigE with 3 parameters – first an example (exponentiation), then two (temporary) axioms.

E(a,b1,1) = E(a,E(a,b,1)) == E(a,..a.) {#b#} = a... {a#b1 0_{^}} = a^(b+1) = a**b1
 E(a,1,c1) = E(a,1,c) == E(a,1) = a
 E(a,b1,c1) = E(a,E(a,b,c1),c) == E(a,..a.,c) {#b#} = a^^{.. }... {^#c a#b1} = a^^{...}(b+1) {^#c1}
Recursive scheme Nº1 for bigE with 2 to 3 parameters.
 1.1. Root recursion: multiplication [defined by repeated addition]
 1.2. Recursion example: exponentiation <= 1.4 + 0.0
 1.3. Parameter destruction: second iterator collapse
 1.4. Double iteration: superpowers
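As a check on scheme Nº1, the 3-parameter fragment of bigE runs directly in Python. The function name E is ours, and we take the substitution definition E(a,b) = a*b as the base case, as discussed above.

```python
def E(a, b, c=0):
    """Basic bigE with up to 3 parameters.
    E(a,b) = a*b (substitution definition of multiplication),
    E(a,1,c1) = a (second iterator collapse),
    E(a,b1,c1) = E(a,E(a,b,c1),c) (double iteration),
    so E(a,b,c) is the superpower a^^{..}b {^#c}."""
    if c == 0:
        return a * b
    if b == 1:
        return a
    return E(a, E(a, b - 1, c), c - 1)

print(E(2, 10, 1))   # 2^10 = 1024
print(E(2, 3, 2))    # 2^^3 = 2^2^2 = 16
```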
§4.0.3. Basic bigI
Here follows the basic abc of operatorial function bigI, called the big Iteration. Compared to the big Expansion of bigE, an advantage of bigI could be that it incorporates addition ab with its initial pair of parameters. The definition list of bigI covers superpowers with the 3rd parameter c counting the number of stars *.. of the corresponding superpower operation, multiplication represented by a single star. Here too, multiplication by zero cannot be reached by reduction of any expression of bigI.
 I(a,1) = I(a1) = a1
 I(a,b1) = I(I(a,1),b) == I(ab,1) = ab1
 I(a,1,1) = I(a) = a
 I(a,b1,1) = I(a,I(a,b,1)) == a... {a#b1 0_{*}} = a*b1
 I(a,1,c1) = I(a,1,c) {c>0} == I(a,1,1) = a
 I(a,b1,2) = I(a,I(a,b,2),1) == a*... {a#b1} = a**b1
 I(a,b1,c1) = I(a,I(a,b,c1),c) == a*^{.. }... {*#c a#b1}
Recursive scheme Nº1 for bigI with 3 parameters.
 0.1. Initial destruction: postreduction => 1.1
 1.0. Initial case: succession => 1.1

1.1. Root recursion: addition [Drop axiom: lonely separator I(a,b) = ab]
 1.2. Destruction case: identity => 1.4
 1.3. Primary recursion: multiplication <= 1.5
 1.4. Parameter destruction: double iterator collapse
 1.5b. Recursion example: exponentiation <= 1.5
 1.5. Double iteration: superpowers
We'll continue modelling the operatorial bigI after mixed star operators, expanding the above lists to a row of parameters in chapter 5.3. As it turns out bigI is the big brother of bigE, whose first row is the next construction, building upon the basic parameters defined in this section.
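The same skeleton runs for basic bigI (the function name I is ours): c=0 is the dropped lonely separator of addition, and every higher c peels off one star.

```python
def I(a, b, c=0):
    """Basic bigI: I(a,b) = ab (addition, lonely separator dropped),
    I(a,1,c1) = a (double iterator collapse),
    I(a,b1,c1) = I(a,I(a,b,c1),c) — so c counts the stars *^{..}."""
    if c == 0:
        return a + b
    if b == 1:
        return a
    return I(a, I(a, b - 1, c), c - 1)

print(I(7, 5, 1))   # 7*5 = 35 (one star)
print(I(2, 3, 3))   # 2***3 = 2^^3 = 16 (three stars)
```

Placed side by side with the bigE sketch, the only difference is the base case – which is exactly the point of the comparison between the two operatorials.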
§4.0.4. Axioms of addition
The recursive definition of addition in bigI is a threefold loop – starting off as an induction step on two parameters, it drops second parameters b=1 by succession, which in turn destructs the function by postreduction. The case of b=0 must then be resolved by postcountdown (not by inverse construction) to completely define addition as we like it, so I(a,b) = ab.
But addition I(a,b) is also the natural choice for an initial axiom of bigI. It is then defined as a rewrite rule – to drop the lonely separator comma , together with the function I brackets. As was demonstrated early in the book the operator of addition is empty and redundant. Succession is surpassed by addition of 1, the next rule in bigI's definition list. And the impractical rule of postreduction of one unreduced parameter a is derivable by inverse application of the postcountdown axiom.
so I() = I(0) = I(0,0) = 0
Therefore an addition which drops the lonely separator can function perfectly as a standalone axiom to deal with the final reduction of expressions in a superpower operatorial.
Compare the axiom of addition by
zero separators
in bigE.
We feel that countable separators
are an indispensable property of operatorials,
and that bigE applies this more rigorously than bigI.
However, with the introduction of
waitors
to accommodate mixed minority stars in parameters on the
front row,
the single value c=0
wasted on addition becomes the hallmark of bigI's
row expansion.
§4.0.5. Definition lists and function hierarchy
A function definition is an equation consisting of a new rule A=B, defining a reduction from A to B, often with an appended conclusion B=..Z derived by iteration == of the rule and/or from the preceding definition context. This context has 3 types of scope – from generic rules, from earlier lists, or from earlier definitions in the same list.
The four types of list markers (none, circle, disc, square) for definition lists rate the axiomatic strength of each new or rehearsed rule by placing it in the function's definition context (as printed in this article).
 salient = example, nonessential case dependent on a next rule in the same list.
 transient = recursive rule, part of the conclusion of a next rule in the same list.
 provisional = recursive rule, surpassed by a rule in a following definition list.
 fundamental = axiom, recursive rule necessary for all following definitions.
Within the same definition list provisional and fundamental rules
prescribe a certain order of application.
For example, the rule I(a,1,c1)
is listed before superpowers I(a,b1,c1)
,
prescribing that in case b=0 the former rule takes precedence,
even if the latter rule is declared without specifying that
b>0.
When a rule is not fundamental (an axiom),
it will be overruled by a more general definition in a following definition list.
Our job is to abstract new rules, create Bigger numbers
and keep the context expanding forever…
A definition list should cover an (intuitive notion of an) operatorial hierarchy
– in the case of bigI the
*^{...} {*#n}
operators.
The definition lists in this section present the basic context
of superpowers, so that each basic operatorial covers all subclasses
of the primitive recursive Grzegorczyk hierarchy.
Grzegorczyk, Auvergne 2008
# Grzegorczyk's subclasses
The numbertheoretical space occupied by bigE, bigI and bigA with 3 parameters is known as the Grzegorczyk hierarchy {K.1.c} of classes of functions closed under the operation of limited recursion. Limited recursion over class {K.1.1} for example, means that you cannot step from exponentiation to power towers a^a^a directly. From this Grzegorczyk proved that every primitive recursive function belongs to a separate subclass c. We take addition as level {K.0} while Grzegorczyk calls this class ε_{1}.
Grzegorczyk in 1953 devised his own primitive recursive function family to investigate his superexponential hierarchy. The operatorial function Grz is essentially his (except at *notes).

 Grz(a,b,) = a1 (*Grz reversed a <—> b with c1 as function subscript)
 Grz(a,b,0) = ab (Grz's addition axiom, here enumerated by c=0)
 Grz(a,b,1) = a1*b1 (Grz's extra multipligation axiom)

 Grz(a,1,c1) = Grz(a1,a1,c) (*Grz put 0 in parameter b on the left)
 Grz(a,b1,c) = Grz(Grz(a,b,c),b,c) (the Grzegorczyk hierarchy)
This is a 3-parameter, multiple substitution, non-nested, non-alternating,
primitive recursive function.
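The definition list above can be transcribed directly as a sanity check. In this minimal sketch (the function name and the c=-1 encoding of the empty successor subscript are our own) the rules are tried in their listed order of precedence:

```python
def grz(a, b, c):
    """Grzegorczyk's function Grz(a,b,c), rules applied in listed order."""
    if c == -1:                      # Grz(a,b,) = a1 (successor, empty subscript)
        return a + 1
    if c == 0:                       # Grz(a,b,0) = ab (addition axiom)
        return a + b
    if c == 1:                       # Grz(a,b,1) = a1*b1 (multipligation axiom)
        return (a + 1) * (b + 1)
    if b == 1:                       # Grz(a,1,c1) = Grz(a1,a1,c)
        return grz(a + 1, a + 1, c - 1)
    return grz(grz(a, b - 1, c), b - 1, c)   # Grz(a,b1,c) = Grz(Grz(a,b,c),b,c)
```

For example grz(2,2,2) reduces via Grz(Grz(2,1,2),1,2) = Grz(16,1,2) = Grz(17,17,1) = 18*18 = 324.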
Suppose we dropped the addition and the multipligation axioms,
and then rely on the rest of the definition list
Grz(X) => Gr(X)
to accelerate the cases c=0 and c=1
and thereby our new Gr
expansion.
These functions stay superpowers –
they do not escape from primitive recursion like
Ackermann
did.
 Gr(a,1,0) = Gr(a1,a1,) = a11 = a+2
 Gr(a,b1,0) = Gr(Gr(a,b,0),b,0) == a(11**b1) = a+2^b1
 Gr(a,1,1) = Gr(a1,a1,0) = a1(11**a1) = 2^(a+1)+a+1

 Gr(a,b,1) ~ 11***b\**a1 ~ 2^^(b+1) (say multipligration in Gr)
 Gr(a,b,c) ~ a*^{...}b {*#c11}
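Dropping the two axioms as described leaves only the recursive rules, and the accelerated Gr can be sketched the same way (again with c=-1 standing for the empty subscript):

```python
def gr(a, b, c):
    """Gr: Grz without the c=0 and c=1 axioms, so both cases accelerate."""
    if c == -1:                      # Gr(a,b,) = a1 (successor)
        return a + 1
    if b == 1:                       # Gr(a,1,c1) = Gr(a1,a1,c)
        return gr(a + 1, a + 1, c - 1)
    return gr(gr(a, b - 1, c), b - 1, c)   # Gr(a,b1,c) = Gr(Gr(a,b,c),b,c)
```

which reproduces the evaluations above, e.g. gr(a,b,0) = a+2^b and gr(a,1,1) = 2^(a+1)+a+1.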
# 4.1. Inverse iteration
Because operatorials build natural numbers 1..
they shun the number 0
from their definitions.
Any awkward expression with an inner parameter value of 0
should be resolved by inverse recursion,
which iterates backwards from an initial value 1
,
as is done here for the familiar 0
values of superpower iterators.
I(a,1,1) =
I(a,I(a,0,1)) =
a I(a,0,1) = a
=>
I(a,0,1) = 0
I(a,1,c1) {c>0} =
I(a,I(a,0,c1),c) =
I(a,1,c)
=>
I(a,0,c) {c>1} = 1
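These 0 values can be checked mechanically. A minimal sketch of bigI's basic rules as we read them here (I(a,b) = ab, I(a,1,c1) = a, I(a,b1,c1) = I(a,I(a,b,c1),c)), extended with the derived base values:

```python
def I(a, b, c=0):
    """bigI on three parameters, with the inverse-recursion 0 values."""
    if c == 0:                       # I(a,b) = ab (addition)
        return a + b
    if b == 0:                       # derived: I(a,0,1) = 0 and I(a,0,c) {c>1} = 1
        return 0 if c == 1 else 1
    if b == 1:                       # I(a,1,c1) = a
        return a
    return I(a, I(a, b - 1, c), c - 1)   # superpower rule
```

Reducing I(a,1,c1) through the 0 value, I(a,I(a,0,c1),c), indeed gives back a, so the awkward inner 0 resolves consistently.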
We try out inverse recursion to determine how negative operators work –
many solutions are the same and extract a successor function on b
.
However, there are second solutions – can you see them?
I(a,b1) =
I(a,I(a,b),) =
I(a,ab,) = ab1
=>
I(a,b,) = b1
I(a,b1,) =
I(a,I(a,b,),) =
I(a,b1,)
=>
I(a,b,*n) {n>0} = b1
An aesthetic solution, but Peano will have to wait for negative eternity
for the ambiguity about true
negative functions to go away
– it is only the probability
of his successor function that rises…
A few hints for future research:
I(a,b,) = b(a**) = b+1/a or = (ba1)×a+2
I(a,b,*m) {m>2} == b1 = b+1 or ad infinitum
Considering the fundamental role of the successor function
we may well be hallucinating here.
It's alright to define
I(a,b,*n) {n>0} = b1
as an axiom.
There's an overwhelming amount of evidence to show that the operation that
precedes addition need not be difficult –
compare Gr
or Fa_{}
above
and the translation of Rózsa Péter's recursion to bigI format
below.
Because the Big Lebowski calls this
zeration
and thinks highly of it, we've restated its law in double Dutch for all Dudes.
The inverse separator reminds us of the relations between inverse points,
counted by Newton's negative binomial.
E(a,,b) > E(a,b) > E(ab) > E(a,..b) {,#}^{ }= ?
The formula for primary negative iterators b
by application of inverse recursion, is given as an exercise.
# 4.2. From bigA to fractation
There is something unborn, unoriginated, uncreated and unformed.
If there was not, escape from the world of
the born, the originated, the created, the formed, would not be possible.
But since the unborn, unoriginated, uncreated, unformed is real, there exists a way out of this world of the born, the originated, the created and the formed.
Persistent in their meditation, resolute and strong, the wise reach out to conquer Nirvana, the highest happiness.
– Buddha
§4.2.1. Basic bigA and affiliates
Translate the rules for unary pluses to the basic parameters of a new operatorial function bigA.
 A(a,1) = a+ = a1
 A(a,b1) = a+^{...} {+#b1} = ab1
 A(a,1,c1) = a+_{c1;} = a+_{c;}^{...} {+_{c;}#a} = A(a,a,c)
 A(a,2,c1) = A(A(a,a,c),A(a,a,c),c)

A(a,b1,c) =
a+_{c;}^{...} {+_{c;}#b1} =
(a+_{c;})+_{c;}^{...}
{+_{c;}#b}
= A(A(a,1,c),b,c)
We've described the case Rmin(_{0;}a,b,c)
of unbracketed minority stars in
section 2.5.4.
We'll derive a slightly
faster
function Fb = a×^{...}b {×#c}
by unbracketing the function family Fa_{c}
from
section 2.0.2.
It will then become clear that the binary function Fb()
and the unary operatorial A()
are the same, at least in the basic three parameters.
First restate the rules for function family Fa_{c}(a,b)
to Fa(a,b,c)
in operatorial format.
 Fa(a,b1) = Fa(a,b)1 == Fa(a)b1 = ab1
 Fa(a,1,c1) = Fa(a,a,c)

Fa(a,b1,c1) =
a×^{...}b1 {×#c1} =
a×^{...}(a×^{...}b) {×#c ×#c1}
= Fa(a,Fa(a,b,c1),c)
Assume the operator ×
of Fa
and Fb
is ruled by
right minority
precedence.
Unbracket Fa
by removing brackets
( > (_{0}
in the last rule of the above definition.
 Fb(a,b) = ab
 Fb(a,1,c1) = Fb(a,a,c)

Fb(a,b1,c1) =
a×^{...}b1 {×#c1} =
(a×^{...}a)×^{...}b {×#c ×#c1}
= Fb(Fb(a,a,c),b,c1)
Because the two step reduction
A(a,b1,c1) = A(A(a,a,c),b,c1)
matches the main rule of Fb
and the previous two rules are the same,
operatorial A
equals function Fb
in the superpower context.
From this we conclude that for 3 parameters,
bigA increases faster than bigI
on two points – because Fa
starts squaring
sooner than bigI at b=1
and because its unbracketing to function Fb
mostly produces notably bigger numbers,
as it feeds the subexpression to the larger operator (this was explained in
section 2.5.4).
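The claimed equality of operatorial A and function Fb, and Fa's head start, can be spot-checked by transcribing all three (a sketch; the two-argument addition case is folded in as c=0):

```python
def A(a, b, c=0):
    """bigA: A(a,1,c1) = A(a,a,c) and A(a,b1,c) = A(A(a,1,c),b,c)."""
    if c == 0:
        return a + b
    if b == 1:
        return A(a, a, c - 1)
    return A(A(a, 1, c), b - 1, c)

def Fa(a, b, c=0):
    """Bracketed family: Fa(a,b1,c1) = Fa(a,Fa(a,b,c1),c)."""
    if c == 0:
        return a + b
    if b == 1:
        return Fa(a, a, c - 1)
    return Fa(a, Fa(a, b - 1, c), c - 1)

def Fb(a, b, c=0):
    """Unbracketed: Fb(a,b1,c1) = Fb(Fb(a,a,c),b,c1)."""
    if c == 0:
        return a + b
    if b == 1:
        return Fb(a, a, c - 1)
    return Fb(Fb(a, a, c - 1), b - 1, c)
```

On small inputs Fb agrees with A everywhere and A(a,b,1) = a*2^b, while Fb(2,2,2) = 2048 already dwarfs Fa(2,2,2) = 14.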
But how fast is bigA approximately compared to bigE?
The dagger sign †
is used to cut calculations short.

A(a,1,1) =
A(a,a) =
aa = a*2
A(a,b,1) == A(.a.,1,1).. {#b#} = a*2^b 
A(a,1,2) =
A(a,a,1)
= a*2^a
A(a,2,2) = A(A(a,1,2),1,2) = a*2^(a*(2^a+1)) ~> 2^2^a †
A(a,3,2) = A(A(a,1,2),2,2) ~> 2^2^(a*2^a) ~> 2^2^2^a †
A(a,b,2) == A(.a.,1,2).. {#b#} ~> 2^^b\^a † 
A(a,1,3) =
A(a,a,2)
~> 2^^a†
A(a,b,3) == A(.a.,1,3).. {#b#} ~> 2^^^b\^^a† 
A(a,1,c1) =
A(a,a,c)
~> 2^^{..}a† {^#c}
A(a,b,c1) == A(.a.,1,c1).. {#b#} ~> 2^^{..}b\^^{..}a† {^#c1 ^#c}
Unlike
unbracketed stars
versus arrowheads, when c>1
early riser bigA is always slashly ahead of bigE.
This is because:
a^^b < 2^^b\^a
in which the final exponent
a < 2^a
is dominant.
§4.2.2. Halfspeed superpowers
Getting your hands dirty and making mistakes is the practice
of a recursionist.
In
chapter 2.2
we asked if there exists a continuum of algorithms
in the gap between superpower counts of c
of which we've so far only met the integer values.
This is a problem Stephen Wolfram posed
recently.
Natural candidates for the position E(a,b,½)
halfway between multiplication a*b
and powers a^b
for example,
are the binary exponential function a*2^b
or the factorial a!
perhaps.
Today in our algorithmic laboratory
we stumbled across the following approximate evaluations for tryB.
 B(a,b,1) = B(aa,b,1) == (.a.*2).. {#b#} = a*2^b

B(a,2,2) =
B(B(a,a,1),a,1)
= a*2^(a*2)
B(a,b,2) == B(.a.,a,1).. {#b#} = a*2^(a*b) 
B(a,2,3) ==
B(B(a,a,2),a,2)
~> 2^(a^2*2^a^2)
B(a,b,3) == B(.a.,a,2).. {#b#} ~> 2^^b\^(a^2) 
B(a,2,4) ==
B(B(a,a,3),a,3)
~> 2^^a\^(2^^a\^(a^2))
B(a,b,4) == B(.a.,a,3).. {#b#} ~> 2^^(a*b)  B(a,b,5) == B(.a.,a,4).. {#b#} ~> 2^^^b\^^(a^2)
This function tryB expands pairwise and
traverses the superpower hierarchy at half the speed of bigA!
But how does one construct the algorithm?
Should we supply an extra coefficient to store left parameter a
temporarily, or can we retrieve the appropriate variable a
from within the depth of all nests?
The first option would require us to remember these a
somehow as waiting on a particular c
too.
We choose a nested notation, which keeps expressions
substituting for the first parameter unresolved,
until the outer iteration counts down to value b=1.
Nested subexpressions put on hold like this are called waitors.
We use the waitor mechanism to reduce superpower function speed
to a fraction of one half.
But tryB is not dependent on its special rules for waitors w
.
The whole mechanism can be taken care of by the nested recursion
in the conclusion of tryB's
main axiom
(in the last line).
To indicate that tryB's special waitor rules
can be represented alternatively as nested recursion these are marked as
transient
(with left a small circle).
But when you'd insist on having a construction in the simplest of steps,
the waitor rules should be considered
fundamental.
Define our algorithm for tryB.
The substitute inner expressions w
are waitors
B(v,1,c)
nested to arbitrary depth.
Variables a
and v
can be either natural numbers n
or waitors,
where v
is selected for reduction by unnesting.
 B(1..) {1#n} = n
 B(a,1) = a1
 B(a,b) = B(B(a,1),b) == B(.a.,1).. {#b#} = ab
 B(a,1,1) = B(a,a) = a*2

B(a,b,1) =
B(B(a,1,1),b,1)
== B(B(a,b,1),1,1) = B(a,b,1)*2 == a*2^b  B(v,1,c1) =: B(v,B(v),c)

B(x,B(w),c) {w:
B[v,1,cd]
c>0} = B(x,B(v),c) {d<2}

B(a,b1,c) =
B(w,b,c) {w:
B[a,1,c]
} == B(.a.,1,c) {#b#}
= B(B(a,b,c),1,c) =: B(B(a,b,c),a,c)
== B(B(B(a,b,c),a,c),B(a,b,c),c)
== B(.w.,a,c).. {#b#} = B(.a.,a,c).. {#b#}
This fractional waitor algorithm has the following features.

A waitor is a subexpression in first parameter position
that has an inner parameter value b=1
with an iterable (noninitial) value in the outside expression's parameter
c
(here c>1 is iterable).
Waitors w
cannot be resolved in place, instead they must wait for the outer expression to be iterated down to b=1. We've talked about the application of outside precedence to functions before. 
There's an
initial rule
at a fixed low value of
c
(here at c=0 and c=1) which resolves inner as well as outer expressions B(v,1,c)
directly – unconditionally and without unnesting (unlike waitors). 
A
selection rule
which cannot be applied in waitor subexpressions,
as indicated by the special equal
=:
sign. This postpones the reduction of nested waitors until the outer expression is resolved.
Officially
each waitor w
must wait for the value b=1 to occur in its outer expression, so that it is selected with the guarantee that its inner parameter c
is still intact for comparison by the next rule. 
A waitor
reduction fork
which is conditionally evaluated, dependent on the difference
d
between the inner and outer value of c
. It either selects the next inner first parameter or just unselects the waitor so it can be reduced separately as a normal expression B
to become the next iterator.
If d<f (tryB has f=2 and d<2 means d=1) this reduction rule digs up the inner first parameter from the nested waitor. For nested subexpressions w = B(.a.,1,c)..
the reduction is recursive:
B(w,1,c1) =: B(w,B(w),c) == B(w,a,c)
Else if d≥f the waiting subexpression is substituted directly in iterator position, where the exwaitor is reduced with priority, so it can subsequently be counted down by the main rule. This branch works like the more familiar superpower mechanism we met in bigA. 
A main
introduction rule
which substitutes first parameter
a
for a waitor subexpression as it counts down parameter b
. During the course of the iteration over b
waitors are nested to depth b.

An
unofficial
short cut, overruling the default evaluation direction (from right to left).
Further on in the reduction of fractional superpowers (when the outer c
is counted down so that d=f) waitors can't be reduced by unnesting any more – essentially the waiting stops. Then we can decide to resolve these subexpressions as normal fractations, even when substituted in first parameter position.
In tryB the original parameter a
nested in the waitor becomes unreachable as soon as d=2 and it does no harm to resolve those exwaitors, like we did in the last line of tryB's main rule.
It is sometimes difficult to realize we are still dealing with numbers here. Two examples of step by step evaluation of numerical expressions in tryB follow.
B(3,1,3) =:
B(3,B(3),2) =
B(3,3,2) =
B(w,2,2) {w:B[3,1,2]
}
= B(w_{1},1,2) {w_{1}:B[w,1,2]
} =:
B(w_{1},B(w_{1}),1) =
B(w_{1},B(w),1) =
B(w_{1},3,1) =
B(w_{2},2,1)
{w_{2}:B[w_{1},1,1]
} =
B(w_{3},1,1)
{w_{3}:B[w_{2},1,1]
}
= B(w_{3},w_{3}) =
w_{3}*2 =
w_{2}*4 =
w_{1}*2^3 =
w*2^(3*2) =
3*2^3^2 = 1536.
B(3,2,3) =
B(w,1,3) {w:B[3,1,3]
} =:
B(w,B(w),2) =
B(w,3,2)
= B(w_{1},2,2)
{w_{1}:B[w,1,2]
} =
B(w_{2},1,2)
{w_{2}:B[w_{1},1,2]
} =:
B(w_{2},B(w_{2}),1) =
B(w_{2},B(w_{1}),1) =
B(w_{2},B(w),1) =
B(w_{2},w,1)
= B(w_{2},1536,1) ==
B(.w_{2}.,1,1)..
{#1536#} =
w_{2}*2^1536,
w_{2} = B(w_{1},1,2) =:
B(w_{1},B(w_{1}),1) =
B(w_{1},w,1) =
w_{1}*2^1536,
w_{1} = B(w,1,2) =:
B(w,B(w),1) =
B(w,w,1) =
1536*2^1536,
B(3,2,3) =
1536*(2^1536)^3 =
1536*2^4608 =
3*2^(3^2*(2^3^2+1))
Prelude> 1536*2^4608 =
21508555048174302465876949175324489276722303317367625962613911393023535708309001
78680983067662471539315804775104840474528205150793642557204649350532414481362407
98199460785373402205046566762413511868871486946075012367733934401906436482634209
59419069734176249215449135598234647461283588582007188817294549851994894346936544
65285376410883677697566682011933691337122602607789253333664335939653210088009086
99770469576158631315246172393814114168729552291134778046293682760898673771130164
23519319173790600630511245345170358550201216597797145343678870759570666042935838
73996122460887992355390472801144634846876389929620929491842384582172483641434952
95188793678333958000373946494030710435917675778261498598834095731097965721642497
52594115967964661223700523122229635070400058206614546673319630344229139366308243
81426551272291648373590148626915038305230061956894736445177133599840169934060686
86541722487468972954615413466637605741404272180714146589918068237526297031550033
73427622995380922980483996219588881729412826516433562381873184009838669561476556
84391421837876739456716595598295903592305799885521657744249913875158756304330407
10054312458814812162316829404987960391540162046020325373999458283755441089542197
42529876945845356666207899983789154092325303066851863276617976497269531860721327
95718743734826255856889194790563371505928800555981765807151134341884599237347359
1937166512672488746198778249216.
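Both evaluations can be replayed mechanically. In this sketch (the representation is our own) a waitor B[v,1,c] is held as an unresolved pair (v, c); the selection rule digs for the iterator by unnesting while the level difference d stays below 2, and at d≥2 the ex-waitor is resolved as a normal expression:

```python
def val(x):
    """Resolve an expression (a number or a waitor pair) to a number."""
    return x if isinstance(x, int) else B(x[0], 1, x[1])

def dig(x, outer):
    """Selection: unnest the waitor chain while the level difference d < 2;
    at d >= 2 the waiting stops and the ex-waitor is resolved normally."""
    while not isinstance(x, int):
        v, inner = x
        if inner - outer < 2:        # d = 1: select the next inner first parameter
            x = v
        else:                        # d >= 2: reduction fork, resolve in place
            return val(x)
    return x

def B(a, b, c):
    """tryB restricted to three parameters; a may be a number or a nested waitor."""
    if c == 0:
        return val(a) + b
    if c == 1:                       # initial rule: B(a,b,1) = a*2^b
        return val(a) * 2 ** b
    if b > 1:                        # main rule: nest a waitor, count down b
        return B((a, c), b - 1, c)
    return B(a, dig(a, c - 1), c - 1)    # selection rule at b = 1
```

This reproduces B(3,1,3) = 1536 and B(3,2,3) = 1536*2^4608 from the examples above, as well as the closed forms B(a,b,1) = a*2^b and B(a,b,2) = a*2^(a*b).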
A more precise mathematician may prove tryB's set of rules with waitors
cannot be derived by primitive recursion,
as the existence of the
Grzegorczyk
hierarchy suggests
(else his concept of hierarchic function classes is in reality less strict).
To us the above steps in the definition of
tryB
look quite primitive
,
but to prove that they are (as we conjecture) is not crucial for our discussion.
The striking feature of tryB is that it distinguishes
between even and odd values of c
in order to subdivide Grzegorczyk's enumeration of superpower functions in two.
Follows our proof that tryB
increases at half the speed of superpowers.
We take all reduction steps from the outside in –
with no resort to the unofficial
6th rule
for waitors – but with many short cuts.
Expressions of tryB are approximately translated to arrows
by means of
backslashes \
.
We show the whole construction of our proof by example.

B(v,b,1) =
B(B(v,1,1),b,1) ==
B(.v.,1,1)..
{#b#}
= B(v,b,1)*2 == B(v,1,1)*2^b = v*2^b 
B(v,b,c) =
B(B(v,1,c),b,c) ==
B(.v.,1,c)..
{#b#}
= B(B(v,b,c),1,c) = B(B(v,b,c),B(v),c)  B(a,1,2) = B(a,a,1) = a*2^a

B(a,b,2) =:
B(B(a,b,2),a,1) =
B(a,b,2)*2^a
== B(a,1,2)*(2^a)^b = a*2^(a*b)  B(a,1,3) = B(a,a,2) = a*2^a^2

B(a,b,3) =
B(w,1,3) {w:
B[a,b,3]
} =: B(w,a,2)
= B(B(w,a,2),1,2) =: B(B(w,a,2),w,1)
= B(w,a,2)*2^w == w*2^(w*a)
~ 2^(a*B(a,b,3)) ~ 2^..B(a,1,3) {2^#b}
~ 2^^b\^(a^2*2^a^2) ~ 2^^b\^a^2  B(a,1,4) = B(a,a,3) ~ 2^^a\^a^2

B(a,b,4) =:
B(w_{4},a,3)
{w_{4}:
B[a,b,4]
}
=: B(w_{3},w_{4},2) {w_{3}:B[w_{4},a,3]
}
= w_{3}*2^(w_{3}*w_{4}) ~ 2^B(w_{4},a,3)
== 2^^a\^B(w_{4},1,3) =: 2^^a\^B(w_{4},w_{4},2)
= 2^^a\^w_{4}*2^w_{4}^2 ~ 2^^a\^B(a,b,4)^2
== 2^^a\^(..B(a,1,4).)^2 {#b#}
~ 2^^(a*b)\^(2^^a\^a^2) ~ 2^^(a*b+2) 
B(a,1,c1) =
B(a,a,c)
{c%2=0} ~ 2^^{..}(a^2+2) {^#c/2}.
{c%2=1} ~ 2^^{..}a\^^{..}a^2 {^#c1/2 ^#c/2}. 
B(a,2,c1) =
B(w_{0},a,c)
{w_{0}:
B[a,1,c1]
}
= B(w_{1},w_{0},c) {w_{1}:B[w_{0},a,c]
}
{c%2=1} ~ 2^^{..}(w_{1}*w_{0}) ~ 2^^{..}w_{1} {^#c/2}
== 2^^{..}a\^^{..}B(w_{0},1,c) {^#c1/2 ^#c/2}
~ 2^^{..}a\^^{..}(2^^{..}w_{0}) {^#c1/2 ^#c/2 ^#c/2}
~ 2^^{..}a\^^{..}w_{0} {^#c1/2 ^#c/2}
~ 2^^{..}(a*2)\^^{..}a {^#c1/2 ^#c/2}.
{c%2=0} ~ 2^^{..}w_{0}\^^{..}w_{1} {^#c/2 ^#c/2}
== 2^^{..}w_{0}\^^{..}(..B(w_{0},1,c).) {^#c/2 ^#c/2 #a#}
~ 2^^{..}(w_{0}*a)\^^{..}2^^{..}w_{0} {^#c/2 ^#c/2 ^#c/2}
~ 2^^{..}(w_{0}*a) {^#c/2} ~ 2^^{..}w_{0} {^#c/2}
~ 2^^{..}2^^{..}(a^2) {^#c/2 ^#c/2}. 
B(a,b,c1) =
B(w,a,c)
{w:
B[a,b,c1]
}
= B(B(w,a,c),w,c)
{c%2=1} ~ 2^^{..}a\^^{..}B(w,1,c) {^#c1/2 ^#c/2}
~ 2^^{..}a\^^{..}B(a,b,c1) {^#c1/2 ^#c/2}
== 2^^{..}a\^^{..}(..B(a,1,c1).) {^#c1/2 ^#c/2 #b#}
~ 2^^{..}(a*b)\^^{..}2^^{..}a\^^{..}a^2
^{ } {^#c1/2 ^#c/2 ^#c/2 ^#c/2}
~ 2^^{..}(a*b+2) {^#c1/2}.
{c%2=0} ~ 2^^{..}w\^^{..}(..B(w,1,c).) {^#c/2 ^#c/2 #a#}
~ 2^^{..}B(a,b,c1) {^#c/2}
== 2^^{..}(..B(a,1,c1).) {^#c/2 #b#}
~ 2^^{..}b\^^{..}2^^{..}(a^2+2) {^#c2/2 ^#c/2 ^#c/2}
~ 2^^{..}b\^^{..}(a^2) {^#c2/2 ^#c/2}.
The numbers expressed by bigA and tryB always lie near
superpower multiples of two. Factor a
is dwarfed by the repetition of items 2
produced by double iteration over
b
and c
as the numbers get bigger.
Therefore bigA and tryB are said to express a
binary-exponential order.
The difference is that tryB moves at half the speed of bigA,
so its resolution (the number range covered by 3 parameters in tryB)
is twice as good as superpowers.
Under 2011 Construction
# 4.3. Superfactorial bigU
§4.3.1. Extending the factorial
John Wallis
Oxford 1701
The factorial operation n! = 1×2×3×..×n
hasn't received a proper update since John Wallis published
Arithmetica Infinitorum (1656) and his Algebra (1685).
We propose a new definition for an enumerable sign !_{X}
as the operator of the upcoming operatorial bigU.
Because the general function is so fast, we've nicknamed it the fastorial
.
You can forget about the slow
superfactorials
Sloane & Plouffe and
Pickover
defined in 1995, those are both exponential hybrids.
For Big number lovers the factorial nature of the new
superfactorial a!_{b}
operation –
where b
sits on the fastorial front row –
is paramount!
Our superfactorial operators !_{b}
are always subscripted with a coefficient,
starting with !_{1}
(the good old a! factorial)
defined by i*...
{i#a} using an
incrementor
i
.
Two factorials in a row a!_{1}!_{1}
= (a!_{1})!_{1}
will be evaluated from left to right. Likewise a series of
mixable (not necessarily the same) superfactorial operators.
All fastorials !_{X}
are left associative.
You can compare this to the coefficient used in the short cut
formula
for majority arrows,
for which mixing unequally subscripted operators was deemed
insignificant.
The superfactorial !_{b}
is a unary operator.
Unary operations enjoy a higher
precedence
than binary operations, so for example
m*n!_{1}
= m*(n!_{1})
evaluates the factorial with priority.
The b
in a!_{b}
functions as a binary operand on a
,
but because we allow the subscript to contain more than a row of iterators,
we've practically done away with the notion of nary operators.
The operatorial function bigU comes instead.
Keep in mind that a number variable a
is a series 1... {1#a}
of characters one.
Then a single 1
counted down to 0
will be a series of zero ones, nothing but an empty space.
Applying the generic operatorial axioms to bigU below,
the unsubscripted !_{0} = !
would equal 0
if it was allowed.
The third general axiom will be applied to counting down
U(1,b)
and is unique for bigU.
 U(a) = a (bracket drop at a number)

U(A,) = U(A)
(right comma drop at z=0)
so U(a,0) = a!_{0} = a 
U(,Z) = U(Z)
(left comma drop at a=0)
so U(0,b) = 0!_{b} = b
Follows the definition of the basic two superfactorial parameters of
bigU.
The evaluation of numbers with
zero space {0_{*}}
in between follows the star convention,
so in a
= a1 and
a1!_{1}
= (a+1)!
an immediate form of addition takes place
(variables and units combine with precedence).
Work out a few cases with the new superfactorial !_{2}.
 U(1,1) = U(U(0,1),0) = U(U(1)) = 1

U(a,1) =
U(1...) {1#a 1:
U(a,1)
} = U(U(a,1)*a)
== 1*i... {*i#a} = a!_{1} = a! (factorial)

U(2,2) =
U(U(2,1)*2,1) = 4!
U(3,2) = U(24*3,1) = 72! ~ 6.12345*10^103
U(4,2) ~ U(6E103*4,1) ~ 2E104!
U(5,2) ~ U(2E104!*5,1) ~ 2E104!! 
U(a,2) =
U(v*a,1) {v:
U(a,2)
} == (.2.*i)!_{1}.. {#a#}
= a!_{2} ~ 2E104!^{..} {!#a3 a>3} 
U(2,3) =
U(3!_{2}*2,2) ~
1E104!_{2}
~ 2E104!^{..} {!#1E104}
U(3,3) = 3!_{3} ~ 1E104!_{2}!_{2}
U(a1,3) ~ u!_{2}^{..} {!_{2}#a} & u = ((4!_{1}*3)!_{1}*2) ~ 10^104 <~ 5!! 
U(2,4) =
U(4!_{3}*2,3) ~
4!_{3}!_{3} ~
u!_{2}!_{2}!_{2}!_{3}
U(a1,4) ~ 4!_{3}^{..} {!_{3}#a1} ~ u!_{2}!_{2}!_{2}!_{3}^{..} {!_{3}#a} 
U(2,5) ~
5!_{4}!_{4} ~
5!_{1}!_{1}!_{2}!_{2}!_{2}!_{3}!_{3}!_{3}!_{3}!_{4}
U(a1,5) ~ 5!_{1}^{..}!_{2}^{..}!_{3}^{..}!_{4}^{..} {!_{1}#2 !_{2}#3 !_{3}#4 !_{4}#a}
 U(1,b1) = U(b1,b) = b1!_{b}
 U(2,b1) = U(U(b1,b)*2,b) ~ b1!_{b}!_{b} {b≥2}

U(a1,b1) =
U(v*a1,b) {v:
U(a,b1)
}
== (.b1.*i)!_{b}.. {#a1#} = a1!_{b1} (superfactorial)
_{ } ~ b1!_{b;}^{..} {!_{b;}#a1}
~ 5!_{i}^{..}.:.!_{b;}^{..} {!_{i}#i1 !_{i}^{..}#b !_{b;}#a}
So you create a maximal predecessor subexpression v
by counting down the first entry a
in an array of U
.
Then retake the original expression U
and count down the leftmost available iterator (value not 1
unless it's the final entry) right of a
(here that's b
which can be dropped) to form the outer expression.
And substitute all units in the parameter(s) left of the outer iterator
(here each 1
of a
) for inner expressions v
.
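The basic superfactorial pair can be transcribed as follows (a minimal sketch; the a!_{1} case leans on the library factorial):

```python
from math import factorial

def U(a, b):
    """bigU on two parameters: the superfactorial a!_{b}."""
    if a == 0:                       # left comma drop: U(0,b) = b
        return b
    if b == 0:                       # right comma drop: U(a,0) = a
        return a
    if b == 1:                       # U(a,1) = a!_{1} = a! (factorial)
        return factorial(a)
    if a == 1:                       # U(1,b1) = U(b1,b)
        return U(b, b - 1)
    return U(U(a - 1, b) * a, b - 1)     # U(a1,b1) = U(v*a1,b) {v: U(a,b1)}
```

So U(2,2) = U(U(1,2)*2,1) = U(4,1) = 24 and U(3,2) = U(24*3,1) = 72!, matching the evaluations above; U(4,2) is already out of computational reach.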
§4.3.2. Speed of the superfactorial
The principle of
bigU
is to consistently replace every increasable construct
by the maximal predecessor v
.
Just like a length is increased when we substitute its units of length,
a number variable is seen not as a whole number,
but as a series of units 1
(separated as it were by 0
commas),
which isn't increased directly but through its units.
This depends on the function of the single separator ,
for when the outer iterator lies to the right of a double comma ,,
the number of parameters is increased too.
In general only the rightmost increasable construct is significant,
which when substituted for v
delivers a function
increasing approximately just as fast as bigU.
Simply replacing the whole variable a
as in
Rózsa
Péter's recursion
has comparable strength.
(though she misses the factorial nature on our planet :~)
However, the problem lies in recognizing constructs,
we only (try to think like a Vegan:~) expand units 1
and ,
in expressions of bigU.
Two reasons why bigU's basic pair of superfactorial parameters is the perfect
Hînayâna motorcycle.
The original ideas are – substitution of the smallest dimensions
(the units 1
) and multiple alternating recursions.

Instead of substituting parameters, bigU replaces the
lesser dimensions
1
of its free parameter values. Here this amounts merely to multiplication, but it immediately kickstarts the engine. 
Fundamentally bigU is powered by alternating recursive rules or wheels.
Every front wheel
U(a,b1)
depends on an existing rear wheel U(1,b1)
which in turn relies on the front wheel U(a,b)
for recursion.
Rózsa Péter calls this double recursion and proves it increases as fast as a 3-wheel
superpower function.
Because the algorithm is so intertwined,
you'll literally have no idea what you're looking at when you see a number
as a function of U()
and
this is a good thing, for all operatorials are heinously fast.
When creatures with logarithmic senses feel comfortable about Big numbers,
the golden brown Buddhas of Oblivion only if ever
frown :–
The algorithm for bigU is slightly faster than
having a!_{b1}
count factorial signs
a!_{b}..
by {!_{b}#a}
But in the end it converges to this double recursion.
And with this bigU achieves superpower speed
in two parameters instead of the usual three.
a!_{2} ~ a^^a/a^^(a1) <~ E(a,2,3)
a!_{b} ~ a^^{..}a {^#b} or E(a,2,b1)
And so the question, whether
the superfactorial !_{b}
grows with step 1
on the superpower
speed-indicator
^^{..} {^#c}
of bigE,
succumbing to the same c++
beat as our other recursive operatorials towards the end, is answered with a big
affirmative.
# 4.4. Operation Babelfish
In the Northern Ocean there is a fish called Kun which is many thousand li in size.
It changes into a bird named Peng whose back is many thousand li in breadth.
When it rises and flies, its wings are like clouds filling the sky.
– Chuang Tsu 1.
§4.4.1. Péter's recursion
Inspired by
Ackermann's proof
that primitive recursion is not the final word,
Rózsa Péter continued work on the classification of recursive functions.
Ackermann himself used a foundation of 3 parameters –
essentially an inflated superpower function
bigI –
but from her hand is a basic 2-parameter algorithm, which she built to put an
Ackermann φ
penthouse on top (it's sometimes presented as the
Ackermann recursion, though that's not historical),
and which employs the same alternating double recursion as superfactorial
bigU.
In this subchapter we'll try to generalize Péter's Ackermann formula
by putting it in bigI format.
It feels like Rózsa would have sought such a translation herself,
but it's not in
her book.
The first thing we'll do is to reverse the direction of parameters in the functions which Péter and other recursion theorists like to use. For an intuitive understanding of the strength of parameters, we'll append every higher iterator to the right end, not to the start of the array. The last parameters create the Bigger numbers.
Pé(0,b1) = Pé(1,b)
Pé(a1,b1) = Pé(Pé(a,b1),b)
Here Péter's algorithm has the same recursive substitute
Pé(a,b1)
as we saw in the basic pair of
bigU.
But in bigU every parameter unit 1
in a
is to be substituted (i.e. the parameter is multiplied a*v
),
and Péter replaces her primary parameter in one go.
Therefore Péter must at least add n=1
to the single parameter a
.
If instead she had put Pé(a,0) = a
then every reduction Pé(a,b) {b>0}
would collapse to the same result m=1.
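With the parameters reversed as above and the initial successor case Pé(a,0) = a1, the two rules give the familiar Ackermann–Péter values (a minimal sketch):

```python
def pe(a, b):
    """Péter's double recursion, parameters reversed: Pé(a,b)."""
    if b == 0:                       # initial successor case Pé(a,0) = a1
        return a + 1
    if a == 0:                       # Pé(0,b1) = Pé(1,b)
        return pe(1, b - 1)
    return pe(pe(a - 1, b), b - 1)   # Pé(a1,b1) = Pé(Pé(a,b1),b)
```

so pe(a,1) = a+2, pe(a,2) = a×2+3 and pe(a,3) = 2^(a+3)-3, climbing one superpower level per step of b.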
§4.4.2. Prefix Pe a parameter
Our operatorials do not change single parameters.
The principle is that a single variable has nothing to relate to.
We'll generalize Péter's formula so that any value n
can be added to its initial case.
This creates a recursive function in bigO style.
Because variable n
has a minor effect on the result
it is positioned left.
Pe(n,1,b1) = Pe(n,Pe(n,0,b1),b)
= Pe(n,v,b) {v:
Pe(n,1,b)
}
Pe(n,a1,b1) = Pe(n,v,b) {v:
Pe(n,a,b1)
}
In expression Pé(1,b1)
Péter's algorithm
substituted v = Pé(1,b)
where in
bigU
v = U(0,b)
reduces to the smaller value b
(the difference is not so significant – it doesn't happen often).
We could generalize the 2nd item in the above formula to contain
all cases v = Pe(n,m,b)
by prefixing another coefficient m
to Pe
's parameter array.
Still m
would remain constant, while the substitute expression in
bigU varies during the course of the reduction.
Moreover, for a clean result m
is dependent on the value of n
as shown
below.
First we rename the variables.
The n
of Pe
should get the name a
–
note that a
is not counted down.
Formula Pa
is the operatorial form of Péter's recursion.
 Pa(a,b) = Pa(ab) = ab
 Pa(1,1,c1) = Pa(1,Pa(1,1,c),c)

Pa(a,1,c1) {a>1} =
Pa(a,Pa(a,m,c),c)
{m:
Ma(a,c)=a-k
}  Pa(a,b1,c1) = Pa(a,Pa(a,b,c1),c)
The 3rd item in this definition list is newly introduced
and provisional.
In the reduction of expressions Pa
where
a>1 and b=1
the nested inner iterator m
is replaced by a-k_{c}
where k_{c}<a and so 0<m<a
simplifies the expression.
We still have to find out about m
and leave it open for the time being.
It's time to listen to the fish…
§4.4.3. Pa gone fishing
In the following 3-parameter translation Rózsa Péter's function behaves perfectly primitive recursive, as it finds expression in the superpowers of bigI.
Pa(1,b,1) == Pa(1,..1.) {#b1#} = b+2 = I(2,b3)-3
Pa(1,b,2) == Pa(1,..1.,1) {#b1#} = b×2+3 = I(2,b3,1)-3
Pa(1,b,3) == Pa(1,..1.,2) {#b1#} = 2^(b+3)-3 = I(2,b3,2)-3
Pa(1,b,c1) == Pa(1,..1.,c) {#b1#} = I(2,b3,c)-3
We
wrote
the first parameter as a=m+k because
it's not obvious what a good value for k
is.
(It says somewhere in our lost calculations that if a>2
then k=2 …and all will be well?!)
Keep things open with a provisional inner substitute
m_{c}:=Ma(a,c)
to be determined by variable c
.
Pa(a,1,1) = Pa(a,Pa(a,m_{1;})) = aam_{1;}
Pa(a,b,1) = Pa(a,Pa(a,b,1)) ==
Pa(a,..m_{1;}.)
{#b1#}
= (a*b1)m_{1;} =
I(a,bx,1)(a×(x1)m_{1;})
Pa(a,1,2) = Pa(a,Pa(a,m_{2;},1),1) = (a*(a*m_{2;}1)m_{1;}1)m_{1;}
Pa(a,b,2) = Pa(a,Pa(a,b,2),1) ==
Pa(a,..m_{2;}.,1)
{#b1#}
= (a*..m_{2;}.1)m_{1;}
{#b1#}
= (m_{2;}+1)×a^(b+1) +
(m_{1;}+1)×(a^i+...+1)-1
{a^i#b}
It looks like we're at a loss, but it's not too bad. To continue, consider case a=2 and put m_{c}=1 as before.
Pa(2,b,2) = 2^(b+3)-3 = I(2,b3,2)-3
Pa(2,1,3) = Pa(2,Pa(2,m_{3;},2),2) {m_{3;}:1} = 2^2^4-3
Pa(2,b,3) = 2^^(b+3)-3 = I(2,b3,3)-3
Pa(2,b,c1) = 2^^{..}(b+3)-3 {^#c} = I(2,b3,c1)-3
§4.4.4. The catchall
Fill in m_{c}=a-2 to translate expressions of Pa
where a>2 to superpower format.
Pa(a,b,2) {a>2} = (m_{2;}+2)×a^(b+1)-2 {m_{2;}:a-2} = (a**b2)-2
Pa(a,1,3) {a>2} = Pa(a,Pa(a,m_{3;},2),2) = a^(a^(m_{3;}+2))-2
Pa(a,b,3) {a>2} == Pa(a,..m_{3;}.,2) {#b1# m_{3;}:a-2}
= a^^(b+2)-2 = (a***b2)-2
Pa(a,b,c) {a>2 m_{c;}:a-2} = (a*^{..}b2)-2 {*#c} = I(a,b2,c)-2
This works for all parameters c
,
which means we can completely translate our extension of Péter's recursion
to a primitive recursive definition in bigI or bigE.
For all n>0 in Péter's initial successor
formula Pé(a)=a+n
the recursion keeps increasing as fast as superpowers and therefore stays below
Ackermann-Conway level.
Rewrite the basic definition of our operatorial version of Péter's recursion.
 Pa(X,0) = Pa(X)
 Pa(a) = a
 Pa(a,b) = Pa(ab) = ab
 Pa(a,1,c1) {a<3} = Pa(a,Pa(a,1,c),c)
 Pa(a11,1,c1) {a>0} = Pa(a11,Pa(a11,a,c),c)

Pa(a,b1,c1) = Pa(a,Pa(a,b,c1),c)
Pa(1,b,c1) = I(2,b3,c)-3
Pa(2,b,c) = I(2,b3,c)-3
Pa(a,b,c) {a>2} = I(a,b2,c)-2
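Our reading of this definition list can be checked numerically against the superpower translations (a sketch; m is the inner substitute, 1 when a<3 and a-2 when a>2):

```python
def pa(a, b, c):
    """Operatorial version of Péter's recursion with inner substitute m."""
    if c == 0:
        return a + b                 # Pa(a,b) = ab (addition)
    m = 1 if a < 3 else a - 2        # provisional inner iterator
    if b == 1:                       # Pa(a,1,c1) = Pa(a,Pa(a,m,c),c)
        return pa(a, pa(a, m, c - 1), c - 1)
    return pa(a, pa(a, b - 1, c), c - 1)   # Pa(a,b1,c1) = Pa(a,Pa(a,b,c1),c)
```

which reproduces pa(1,b,1) = b+2, pa(2,b,2) = 2^(b+3)-3 and pa(a,b,2) = a^(b+2)-2 for a>2.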
Under 2011 Construction
Ψ.5. First row of parameters
§5.0.1. Crafting by grafting
And the Cyclopes then gave Zeus thunder and lightning and a thunderbolt,
and on Pluto they bestowed a helmet and on Poseidon a trident.
Armed with these weapons the gods overcame the Titans, shut them up in Tartarus,
and appointed the Hundredhanders their guards.
But they themselves cast lots for the sovereignty,
and to Zeus was allotted the dominion of the sky,
to Poseidon the dominion of the sea, and to Pluto the dominion in Hades...
– Apollodorus, The Library
To expand the superpower
operatorials
defined in the previous chapter to a row of parameters
– or generally to move from one class level of functions to the next –
there are two approaches.
You can either keep the basic rules and supplement them with a different set of
axioms
for the rest of the row – this we call grafting –
or you can construct the whole row or higher level by extending
the earlier axioms, so that for example the
rule
for bigE's parameter c
is now included in bigE's
row formula
for the c,..,z
range of parameters.
In this chapter the
basic
parameter definition of operatorials will be expanded in a handful of illustrative ways,
where it's often a matter of taste which rules apply naturally.
With E(a,b,c)
= I(a,b,c1)
bigE and
bigI both covered
superpowers
in the first three parameters.
The rest of bigE's
front row
will be modelled after unmixed
majority arrows,
while bigI implements mixed
minority stars
which cover a whole row of arrow subscripts in a single parameter d
with the help of
waitors.
A function that resembles grafting is Conway's chained arrow notation, which expands superpowers to a row of iterators of arbitrary length. Chained arrows are very much the consequence of the basic 3 parameters, but Conway designed them carefully to stand alone.
When Van Novaloka first became
fascinated
with the subject of Big numbers,
totally oblivious, she used grafting to scratch at the
Ackermann numbers
(her focus was less on functions).
The invention
of the so-called
Anchorman, Barman, Caveman, Doorman, etc. numbers M
corresponded directly to the value and position of final parameters
in the following recursive function O(R,z)
.
Here's a review of the formulas for the first row in Van Novaloka's
colourful essay.
M(p,1,r) = O(p,...) {p#r}
M(p,q1,r) = O(v,...) {v#r v:
M(p,q,r)
}
O(p_{i;},...,q) {p_{i;}#r r>2} = O(v,...) {v#r v:
M(p_{i;},q,r)
}
The practice to put several functions in a set of equations is confusing, we agree,
but perhaps you've noticed the nested substitution of numbers
M
for parameters in O
.
Without recourse to a helper function, a similar set of equations
can be expressed by a single operatorial Om
and some nifty
metastatements.
In the example rules for functions with r=4 parameters you learn how
Om
is nested to depth d
before the multitude of 3^d
deepest subexpressions drop their final parameter to become superpowers.

Om(a,b,c) = I(a,b,c) [Om shifts addition back to c=0]
 Om(a,b,c,1) = Om(Om(a,a,a),Om(b,b,b),Om(c,c,c))
 Om(a,b,c,d1) = Om(Om(a,a,a,d),Om(b,b,b,d),Om(c,c,c,d))

Om(a_{i},..,z1) {a_{i}#r r>2} = Om(Om(a_{i},..,z),.:.,h) {a_{i}#r Om(a_{i},..,z)#r h=0}
The last rule is awesome,
as every free parameter is replaced by an iteration of Ackermann numbers.
Note that the
4dot illipsis
sign .:.
owns
and increments the i
for every subexpression Om(a_{i},..,z)
here.
Obviously outer iterator h
need not be 0
(and dropped).
It can be given any value as long as the reducibility of the expression is preserved
– any outer iterator 0<h<z1
could turn Om'
into a faster function.
But, because we'd like to avoid a choice-type of functional subexpression,
an original operatorial with an iterator z1
evaluates naturally to an outer iterator of either h=0
or h=z in the reduced expression.
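As a concrete sanity check, the two 4-parameter rules of Om can be sketched in Python. The superpower stock hyper (with addition shifted back to c=0, per the bracketed note above) is filled in from the surrounding definitions as an assumption, with multiplication and powers short-circuited for tractability; only trivially small arguments terminate in practice.

```python
def hyper(a, b, c):
    """Assumed superpower stock, addition shifted back to c=0:
    c=0 addition, c=1 multiplication, c=2 powers, c=3 tetration, ..."""
    if c == 0:
        return a + b
    if c == 1:
        return a * b          # shortcut for repeated addition
    if c == 2:
        return a ** b         # shortcut for repeated multiplication
    if b == 1:
        return a
    return hyper(a, hyper(a, b - 1, c), c - 1)

def Om(a, b, c, d=0):
    """First four parameters of Om: multiple substitution over a,b,c
    drives the 4th parameter d; the outer iterator h=0 is dropped."""
    if d == 0:
        return hyper(a, b, c)
    # Om(a,b,c,d1) = Om(Om(a,a,a,d),Om(b,b,b,d),Om(c,c,c,d))
    return Om(Om(a, a, a, d - 1), Om(b, b, b, d - 1), Om(c, c, c, d - 1))
```

For example Om(1,2,0,1) reduces to Om(Om(1,1,1),Om(2,2,2),Om(0,0,0)) = Om(1,4,0) = 5.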
§5.0.2. General recursive baby steps
Later at Van Novaloka's
old weblog
in another function O'
grafted on superpowers
she took predecessor h=z as the outer iterator, thereby increasing
the size of the numbers expressed by the same parameter values.
Here we'll call that particular function On
and compare it with Om'
where h=z
(which we call Omz
) to see if these functions result
(on average) in equally large numbers, given the same
superpower stock
a,b,c
.
 On(a,b,c,1) = On(On(a,b,c),On(a,b,c),On(a,b,c))
 On(a,b,c,d1) = On(On(a,b,c,d),On(a,b,c,d),On(a,b,c,d),d)
 On(a,b,c,d,e1) = On(v,v,v,v,e) {v:On(a,b,c,d,e)}
 On(a_{i},..,z1) {a_{i}#r r>2} = On(On(a_{i},.:.,z),..,h) {a_{i}#r On(a_{i},..,z)#r h=z}
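The 4-parameter case of On can be sketched in Python for comparison with Om. Again the superpower stock hyper is an assumed graft (with multiplication and powers short-circuited); the only difference from Om's rule is that the predecessor h=z is kept as outer iterator.

```python
def hyper(a, b, c):
    # assumed superpower stock: c=0 addition, then mul, powers, arrows
    if c == 0:
        return a + b
    if c == 1:
        return a * b
    if c == 2:
        return a ** b
    if b == 1:
        return a
    return hyper(a, hyper(a, b - 1, c), c - 1)

def On(a, b, c, d=0):
    """On(a,b,c,d1) = On(v,v,v,d) with v = On(a,b,c,d):
    the predecessor h=z stays on as outer iterator."""
    if d == 0:
        return hyper(a, b, c)
    v = On(a, b, c, d - 1)
    return On(v, v, v, d - 1)
```

Already On(1,2,0,1) = On(3,3,3,0) = 3^^3 = 3^27, while larger inputs leave the computable range at once.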
Adapt the main rule of Om
to create a function
Omz = Om' {h=z}
with the same maximal iterator h
as in On
.
Then to gauge Omz
versus On
we have to compare the final subexpression
Omz(a_{r;},..,z) {a_{r;}#r}
with On(a_{i},..,z) {a_{i}#r}
because these have decisive bigO strength.
In case of equal parameters a,..
both functions do not differ, but otherwise
the last substituted expression in On
has the value x=a_{r;}
in its penultimate parameter,
while the same substitute in Omz
holds a sequence of undiluted y=a_{r;}
so that the difference between Omz
and On
boils down to the relative size of the rightmost unequal two parameters
a_{k}≠a_{k1}
–
in the example below, the x
and y
differ to start with.
Therefore both these functions have an equal chance of producing
roughly equally large numbers.
Omz(a,..,z) {a#r} = On(a,..,z) {a#r}
x<y <=> On(R,x,y,z) < Omz(R,x,y,z)
& x>y <=> On(R,x,y,z) > Omz(R,x,y,z)
The same reasoning applies to the simplified function Ono = On'
{h=0}
versus function Om
.
If x<y then Ono < Om
so that numbers expressed by Om
and Ono
have roughly the same size.
Nevertheless, some recursive functions are significantly slower
than function Om
in the first row.
As an example we'll graft Ob {h=0}
on superpowers,
defining Ob
so it stops growing much further.
 Ob(a,b,c) = a*^{..}b {*#c}
 Ob(a,b,R,z1) = Ob(a,Ob(a,b,R,z),R)
 Ob(a,b,c,1) = a*^{..}a*^{..}b {*#c *#c} ~ b*^{..}3 {*#c1 a~b}
 Ob(a,b,c,d) ~ b*^{..}d2 {*#c1 a~b}
 Ob(a,b,c,d,1) ~ b*^{..}d2*^{..}d2 {*#c1 *#c1 a~b}
 Ob(a,b,c,d,e) ~ d2*^{..}e2 {*#c2 a~b~d2}
 Ob(a,b,c,t_{i;},..z) {t_{i;},#r} = Ob(a,..b.,c) {#u#} where u = t_{i;}1*..z1 {t_{i;}1*#r}
~ t_{r;}2*^{..}z2 {*#cr1 a~b~t_{i;}2 i<r}
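A minimal Python sketch shows just how slow Ob is: because only position b is ever iterated, every extra row parameter yields values that stay inside ordinary star operations. The star stock (one star multiplication, two stars powers) is assumed from the basic rules.

```python
def stars(a, b, c):
    # a *^{..} b {*#c}: one star is multiplication, two stars powers, ...
    if c == 1:
        return a * b
    if c == 2:
        return a ** b
    if b == 1:
        return a
    return stars(a, stars(a, b - 1, c), c - 1)

def Ob(a, b, *R):
    """Ob(a,b,R,z1) = Ob(a,Ob(a,b,R,z),R): substitution is limited
    to the receptor b, keeping Ob in the superpower class."""
    if R and R[-1] == 0:
        return Ob(a, b, *R[:-1])     # spent final iterator is dropped
    if len(R) == 1:
        return stars(a, b, R[0])     # Ob(a,b,c) = a*^{..}b {*#c}
    return Ob(a, Ob(a, b, *R[:-1], R[-1] - 1), *R[:-1])
```

For instance Ob(3,3,1,2) only reaches 81: row length merely feeds the count of stars, as the text observes.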
You can see that row length in Ob
merely adds
to the number of stars *..
counted by c
.
Such functions Ob_{r}
still belong to the
superpower class of functions, below the first
Ackermann jump.
Subchapter 5.5
generalizes the reason why Ob
is so slow –
higher parameters than b
are never iterated over.
# A walk on the Ackermann-Conway level
Any function rule which limits the substitution position to some
fixed receptor, even though higher than Ob
's b
,
must be structurally slower than Om
, because
Om
keeps expanding its row of receptors to the right.
We count these limit positions as our first
superpower-like steps {K.2.0.c}
past the Ackermann jump,
and each step takes us further from the
basic
primitive recursive level {K.1}
of functions.
Functions {Om,Ono,Og}
belong to the same class as
{Omz,On}
which covers the first general recursive
subclass {K.2.0}
in the first row,
waiting for the second Ackermann jump.
Note that the superpower stock Om
and On
were grafted on, does not play a major role in smoothing out the differences between them.
Pivotal is that both original definitions (before grafting)
already require a full row to fill the
Grzegorczyk hierarchy.
The next section and later Og0
confirms
this explanation.
Conway's
chained arrows
express every step {K.2.0.c}
of the row of Om
with their 4th parameter, and so they make the 2nd Ackermann jump by the
4th chained arrow
(and next parameter).
By
recursive grafting
on Og
we shall prove,
that the full row of chained arrows cover the enumeration
of Ackermann jumps and sublevels {K.2.d}
totally.
§5.0.3. Multiple substitution is futile
Rósa Politzer
Budapest ±1923
In this section we'll show the meagre consequences of multiple substitution
by means of a function Nn
, which looks fast in the beginning,
but actually requires a whole parameter row to cover only the Grzegorczyk
subclasses.
It was Rózsa Péter
(read her
Recursive Functions)
who first gave a rigorous proof that unnested multiple recursion
does not lead out of the class of primitive recursive functions.
Example function Nn
applies the
rules
of On
from the start – after Nn(a,1)
actually. Because Nn
is incapable of making the Ackermann jump
in its first row, this proves that a whole row of parameters in
On
cannot take us to a higher subclass,
and that On
must stay firmly within the class
{K.2.0}
of Ackermann functions.
This despite the fact that Om
was grafted on Ackermann numbers
and that On
increases faster than Om
(as shown
above).
Remember that the
Ackermann function
Ack_ψ
belongs to a higher class than the
superpowers
of Ack_φ
, because it jumped from primitive recursion.
By contrast, Nn
is incapable of transcending
primitive recursion
in the first row.
This example function is pretty straightforward –
students can easily check that Nn
maintains reducibility.
The weak point of Nn
is its induction step,
which is not so efficient because the final parameter z
is reduced twofold in each step – counting down z
in both the outer and the inner expression.
The generous substitution of free parameters
in the outer expression of Nn
seems to compensate for this,
but still multiple substitution remains comparatively slow.
 Nn(a,1) = Nn(1..) {1#a 1:a} = Nn(a..) {a#a} = a*a
 Nn(a,b1) = Nn(v,b) {v:Nn(a,b)} = a**11**11**b ~ 2^^3\^(b+1)
 Nn(a,b,1) = Nn(v,v) {v:Nn(a,b)} ~ 2^^6\^b
 Nn(a,b,c1) = Nn(v,v,c) {v:Nn(a,b,c)} ~ 2^^(3×(c+2))\^b
 Nn(R,y,z1) = Nn(v,..,z) {v#r1 v:Nn(R,y,z)} where R = t_{i;},.. {t_{i;}#r}
~ 2^^{..}(2^(z+1))\^^{..}(2^y)† {^#r1 ^#r} ~ 11*^{..}(11**z1)111† {*#r11}
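The rules of Nn translate almost verbatim into a Python sketch. The initialisation metastatement {1#a 1:a} is folded into the base case Nn(a,1) = a*a as an assumption; note how the final iterator is counted down twice, once for v and once in the outer expression.

```python
def Nn(*p):
    """Multiple substitution: every free parameter is replaced by
    v = Nn(R,y,z), while the final iterator z is reduced twofold."""
    if p[-1] == 0:
        return Nn(*p[:-1])                     # spent final iterator drops
    if len(p) == 2 and p[1] == 1:
        return p[0] * p[0]                     # Nn(a,1) = a*a
    v = Nn(*p[:-1], p[-1] - 1)                 # v:Nn(R,y,z)
    return Nn(*([v] * (len(p) - 1)), p[-1] - 1)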
In the formula for Nn
the
generic rules
of bigO apply and
addition
happens naturally when variables are adjoined. The variable v
that substitutes for every free parameter is determined by the
metastatements
that follow.
We use
backslashed
\
operators to enhance notation precision and a
dagger
† to cut the calculation short.
By the 5th parameter Nn
progresses practically on a par with the
superpower *****
of bigI.
According to our last
calculation,
at the end of its first row Nn
will converge close between
two superpower limits.
The point to pay attention to is that all parameters
in Nn
's first row stay well inside
superpower
limits, so that Nn
will only reach
Ackermann functionality
past the first dimension.
While Nn
is groping for emancipation at the right edge of its parameter row,
Conway's
chained arrows
as well as bigA, bigE and bigI
become Ackermann functions in their 4th parameter, and
bigU
already in its 3rd.
The main difference between Om
and On
lies, as shown
above,
in the value h
of the outer iterator.
But this doesn't even count as a decisive step in our
classification
system.
Suppose we'd remove Om
from its superpower stock
to define a function Mm
.
Let Mm(a,b) = a*b
and
Mm(a,b,c1) =
Mm(Mm(a,a,c),Mm(b,b,c))
then the first 3 parameters of Mm
remain in Kalmár's
elementary
class of functions, well below the power towers pointing to tetration.
Define the rest of Mm(R,z)
like we defined Om
before
and Mm
will take the whole row to match superpowers.
Mm(a,b,c) = a^2^c*b^2^c
Mm(a,b,c,1) ~ ab^2^c^2^c1
Mm(a,b,c,d) ~ ab^2^c^2^..c1 {#d1} ~ abc***4*d1
Mm(a,b,c,d,e) = abc***..4*d1 {#e1} ~ abcd****e2
Mm(t_{i;},..z) {t_{i;},#n} ~ t*^{..}z2 {*#n}
The notation with an algorithmic mean t
or ab..
indicates that, when parameters t_{i;}..
are common non-extreme integers, these are less relevant to the result.
The first row of Mm
increases with superpower speed,
like Nn
above (albeit one count of c
slower),
and evolves on a par with the simpler urfunction Og0
of Graham's Og
below.
In fact Mm
reduces to the same approximate expression as
Ôg0
towards the row limit –
a clear example that multiple substitution
cannot help you in the end.
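Stripped of its superpower stock, Mm fits in two lines of Python, and the closed form Mm(a,b,c) = a^2^c*b^2^c from the equation list above can be checked mechanically for small inputs.

```python
def Mm(a, b, c=0):
    """Mm removed from its superpower stock:
    Mm(a,b) = a*b and Mm(a,b,c1) = Mm(Mm(a,a,c),Mm(b,b,c))."""
    if c == 0:
        return a * b
    return Mm(Mm(a, a, c - 1), Mm(b, b, c - 1))
```

So multiple substitution here buys nothing beyond repeated squaring: Mm(2,3,2) = 2^4 * 3^4 = 1296, well inside Kalmár's elementary class.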
Generally multiple substitution over a range of parameters does not create significantly larger numbers than single substitution applied to its highest parameter. Because multiple substitution is a tedious, unnatural and ineffective procedure, we'd best choose another policy to optimize operatorials.
# 5.1. Graham's Og
§5.1.1. Graham's number
Ron Graham
Juggler of Og
In the next section we'll create an operatorial function Ôg
that enables us to give a simple and exact expression
of the famous number of Ronald Graham.
What we call Graham's Og is the simpler brother of Ono
,
which uses multiple substitution and is derived
above from On
.
All Ogtype functions only substitute the penultimate parameter,
which is simpler and not significantly less fast.
On top of that, Graham's Ôg rises up a little earlier with
the power a^b
at the count of c=1
(hence the circumflex on the Ô).
Now about that number. Weinstein's
Encyclopedia
tells us that Graham's number is the smallest dimension of a
hypercube
such that, if the edges (lines)
joining all the pairs of vertices (corners) are two-coloured (either
red or blue),
a planar
complete graph K_{4}
of one colour (on its 6 edges) will be forced.
So multidimensional cubes can be kept free from monochromatic
complete squares in every dimension below a certain maximum,
for which the estimate is Graham's number.
It is not a very good estimate and neither is it the upper bound Graham
published in his 1971 article with Rothschild.
What we know as Graham's number was first described by Martin Gardner
in an article in Scientific American in 1977.
(But Ron is pleased to have a number named after him; he says he recommends it.)
David Wells
lists Graham's number in his popular
Dictionary of curious and interesting numbers as the final entry.
The following quotation explains its
cult
status.
The World Champion largest number, listed in the Guinness Book of Records (1980 ed.), is an upper bound, derived by R.L. Graham from a problem in a part of combinatorics called Ramsey theory.
Graham's number cannot be expressed using the conventional notation of powers.
Consider... The largish number v_{1;} = 3^^{...}3 {^#v}
with v = 3^^^^3 arrows.
Next construct the incredible, ungraspable number v_{2;} = 3^^{...}3 {^#v_{1;}}
Now continue this process v_{d1;} = 3^^{...}3 {^#v_{d;}}
until step d+1 = 63.
That is Graham's number.
Postscript
Since Graham and Rothschild wrote their "Ramsey's Theorem for n-Parameter Sets" in 1971, Graham's upper bound didn't change. The lower bound for a solution – which experts think is a small number – increased from 6 to 11 (Exoo 2003) and recently to 13 (Jerome Barkley 2008).
Note that the Dutch mathematician Van der Waerden, working in this field, proved a theorem in 1927 on arithmetical progressions of colours, for which he defined bounds that grow like an Ackermann-type function. So we can't be absolutely sure if Graham came up with the all-time record bound in Ramsey theory. But because Graham's number involves a statement about squares inside hypercubes we believe it will always remain the most elegant.
§5.1.2. How to express a world record
In the same fashion as David Wells does in the
quote above,
we'll devise an operatorial function Ôg
that expresses Graham's number exactly as
Ôg(3,3,4,63)
We'll
graft
Ôg
on superpowers – this is the
dominant
rule for all functions Og
.
 Ôg(a,b,c) = I(a,b,c1) = a^^{...}b {^#c}
 Og(a,b,c,1) = Og(a,b,Og(a,b,c))
 Og(a,b,c,d1) = Og(a,b,Og(a,b,c,d))
 Og(R,y,z1) = Og(R,Og(R,y,z))
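The rules above make for a very short Python sketch of the circumflexed Ôg (written Og below, since the base case already starts at the power a^b). The arrow stock is Knuth's up-arrows; Graham's number is then exactly Og(3,3,4,63), far beyond computation, but the rules can be exercised on degenerate inputs such as a=b=2, where a↑^c b stays at 4 for every c.

```python
def arrows(a, b, c):
    # a ^^{...} b {^#c}: up-arrows, one arrow is exponentiation
    if c == 1:
        return a ** b
    if b == 1:
        return a
    return arrows(a, arrows(a, b - 1, c), c - 1)

def Og(*p):
    """Graham's Og substitutes only the penultimate parameter:
    Og(R,y,z1) = Og(R,Og(R,y,z)); Og(3,3,4,63) is Graham's number."""
    if p[-1] == 0:
        return Og(*p[:-1])               # spent final iterator drops
    if len(p) == 3:
        return arrows(*p)                # grafted on superpowers
    return Og(*p[:-2], Og(*p[:-2], p[-2], p[-1] - 1))
```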
Our circumflexed Ô
functions start with
multiplication instead of addition of 2 parameters.
This is a matter of initialisation and of minor importance.
The main axiom above applies to Ôg
too.
Ôg
travels not nearly as fast as bigE
or our champion bigI,
and it doesn't leap to the next function subclass.
The progression of Graham's Og
is comparable to our
baby Om
.
The fact that Og
applies single substitution does not reduce function speed that much.
Because multiple substitution is
futile
both Og
and Om
belong to the same
class
of recursive functions as On
,
which hardly increases any faster towards the end of the row.
The substitute expression in Og
(with addition at c=0)
is the same as that of Ono = On' {h=0}
By analogy define a new function Of
which reduces
Of(a,b,c,d1)
= Of(a,b,Of(c,c,c,d))
with a similar substitution as
Om {h=0}
and
check
that Of
and Og
roughly keep pace.
It's trivial that Ôg
has an edge over
function On
.
Substitution of the rightmost parameter is dominant and Ôg
has a head start, so On
will perpetually lag behind Ôg
,
provided the original parameter values are not too extreme.
Despite its handicaps (single substitution, dropping the outer iterator) Og
apparently has the same expressive power as any other Ackermann class
{K.2.0}
function.
I(x,x,I(x,x,y1)1) = Ôg(x,x,Ôg(x,x,y)) = Ôg(x,x,y,1)
Og(R) < On(R) < Ôg(R)
Graham's number Ôg(3,3,2,Ôg(4,3,1))
can be approximated by chained arrows or by bigE. Here operatorial
bigE
represents typed majority arrows, and by
translation
from chained arrows the above expression in Ôg
of Graham's number rolls out.
Imagine the fathomless
space
in between such relatively close Big numbers!
E(3,65,2,1) <
Ôg(3,3,4,63)
<
E(4,65,2,1) <
2→3→65→2
§5.1.3. Og hops back
With good
reason
we found that the functions
{Om_{h},On_{h},Og,..}
reside in the same Ackermann class {K.2.0}
.
Then we take a step back to
measure
the urfunctions Nn
and Mm
,
which remain primitive recursive,
although they apply multiple substitution to a whole row of parameters.
Similarly we'll define the urfunction Ôg0
,
which applies the rules of Graham's Og
from the start,
only to find that the Ackermann-Conway level {K.2}
is never reached.
Og0
's first row of parameters stays on the
primitive recursive
level – just before the
jump
to the class {K.2.0}
of Ackermann functions.
From this we may conclude that the rules of Graham's Og
are essentially slower than the main superpower rule,
even though functions Ogn
typically operate on a higher class of numbers.
It is the grafting that gets them there.
To get the approximation for Og0
just replace arrows ^^{..} {#c}
by stars *^{..} {#c}
 Ôg0(a,b) = a*b
 Og0(R,y,z1) = Og0(R,Og0(R,y,z))
Ôg0(a,b,c) = a^c1*b ~ a^c2 {a~b}
Ôg0(a,b,c,1) = a^(a^c*a*b)*a*b
Ôg0(a,b,c,d) = (a^..c.*a*b) {#d1#} ~ abc^^d2
Ôg0(a,b,c,d,1) ~ abc^^abc^^d2
Ôg0(a,b,c,d,e) ~ abc^^^e1\^^d2 ~ abcd^^^e2
Ôg0(a_{i},...z) {a_{i},#n1} ~ a^^{..}z2 {^#n}
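The ur-graft Ôg0 is the same pair of rules grafted on multiplication, which a short Python sketch makes plain; the closed forms listed above can then be verified exactly for small inputs, e.g. Ôg0(2,3,1,1) = 2^(2^1*2*3)*2*3 = 24576.

```python
def Og0(*p):
    """Ur-graft of Graham's rules on multiplication:
    Ôg0(a,b) = a*b, Og0(R,y,z1) = Og0(R,Og0(R,y,z)).
    The whole first row stays primitive recursive."""
    if p[-1] == 0:
        return Og0(*p[:-1])              # spent final iterator drops
    if len(p) == 2:
        return p[0] * p[1]               # multiplication stock
    return Og0(*p[:-2], Og0(*p[:-2], p[-2], p[-1] - 1))
```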
These functions Og0
are clearly primitive recursive, belonging to the
Grzegorczyk
hierarchy {K.1.c}
.
It takes a lengthy row of parameters to obtain a high value for c
.
This means the urgraft Og0
is covered by the 3rd parameter c
of chained arrows.
Then we draw the conclusion that Og1
(Graham's Og
), having the same rules as Og0
past its 3 parameter stock, will behave the same,
in that it won't make another Ackermann jump and with its first row
Og1
will cover the initial Ackermann-Conway class
{K.2.0}
completely.
We'll prepare Ôg0
for the job of initializing the sequence of
recursive grafts
on Qy
.
The following equations can be worked out as an exercise.
Qy(a,1,c) = Qy(a,Qy(a,a,c),c) = Ôg0(a,...Ôg0(a,...)) {a,#c a#c1}
Qy(a,b,c) = Qy(a,Qy(a,b,c),c) = Ôg0(a,...,b) {a#c1}
§5.1.4. Og is jumped over
What lies beyond {K.1}
class recursion?
And who makes the second jump and the next,
counting an arbitrary number of Ackermann jumps?
We've suggested
before
that Conway's
chained arrows
fill the bill. Now it's shown how the first row of Ôg
can be covered by the single parameter d
of a special successor function Qy
of chained arrows.
We'll first define the operatorial Qy
given Conway's chained arrows,
and then express the row of Graham's Ôg
in terms of Qy
.
All subrows R = t_{i},... {t_{i}#r r>1}
have at least two parameters.
 a→b→c = Qy(a,b,c) = a^^{...}b {^#c} [same base]
 R→y1→1 < Qy(R,y1,1) = Qy(R,Qy(R,y,1)) [drops z=0]
 Qy(R,1,z1) = Qy(R,Qy(R,0,z1),z) = Qy(R,Qy(R,Qy(R,,z1),z),z) = Qy(R,Qy(R,Qy(R),z),z) [O: nests 2 extra iterations, compare :→]
R→3→z1 = R→(R→(R→1→z1)→z)→z = R→(R→(R)→z)→z
 Qy(R,y1,z1) = Qy(R,Qy(R,y,z1),z) [same main rule]
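A Python sketch of Qy follows the rules directly. Two points are assumptions filled in from context: the 2-parameter base a^b (as for chained arrows a→b), and the reading of the y=0 case as Qy(R,0,z1) = Qy(R,Qy(R),z), which produces the two extra nested iterations noted above.

```python
def arrows(a, b, c):
    # a ^^{...} b {^#c}: up-arrows, one arrow is exponentiation
    if c == 1:
        return a ** b
    if b == 1:
        return a
    return arrows(a, arrows(a, b - 1, c), c - 1)

def Qy(*p):
    """Qy keeps the full final iterator in the nested copy:
    Qy(R,y1,z1) = Qy(R,Qy(R,y,z1),z)."""
    if len(p) == 1:
        return p[0]
    if len(p) == 2:
        return p[0] ** p[1]              # assumed base, as for a→b
    if len(p) == 3:
        return arrows(*p)                # Qy(a,b,c) = a^^{...}b {^#c}
    *R, y, z = p
    if z == 0:
        return Qy(*R, y)                 # [drops z=0]
    if y == 0:
        return Qy(*R, Qy(*R), z - 1)     # assumed: Qy(R,0,z1) = Qy(R,Qy(R),z)
    return Qy(*R, Qy(*R, y - 1, z), z - 1)
```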
This tells us chained arrows R→x→y1→z1
are slower than
Qy(R,x,y,z)
given 4 or more parameters.
And both expressions are predecessors of R→x→y→z2
which increases notably faster (its outer iterator dominates).
 a→b→c→1 = Qy(a,b,c) = Ôg(a,b,c) = a^^{...}b {^#c}
 Qy(a,b,1,1) = Qy(a,b,Qy(a,b,a*b)) = Ôg(a,b,a*b,1)
 Qy(a,b,c1,1) = Qy(a,b,Qy(a,b,c,1)) = Qy(a,b,..a*b.) {#c1#} = Ôg(a,b,a*b,c1)
 Qy(a,b,1,2) = Qy(a,b,Qy(a,b,a*b,1),1) = Ôg(a,b,a*b,Ôg(a,b,a*b,a*b)) = Ôg(a,b,a*b,a*b,1)
 Qy(a,b,2,2) = Qy(a,b,Qy(a,b,1,2),1) = Ôg(a,b,a*b,Ôg(a,b,a*b,a*b,1)) = Ôg(a,b,a*b,a*b,2)
 Qy(a,b,c1,2) = Qy(a,b,Qy(a,b,c,2),1) = Ôg(a,b,a*b,a*b,c1)
 Qy(a,b,1,d) = Qy(a,b,Qy(a,b,a*b,d),d) = Ôg(a,b,a*b,...,Ôg(a,b,a*b,...)) {a*b#d a*b#d} = Ôg(a,b,a*b,...,1) {a*b#d}
 a→b→c1→d1 < Qy(a,b,c,d) = Ôg(a,b,a*b,...,c) {a*b#d}
The first row of Graham's Ôg
creates a new superpower function
past the Ackermann jump, and Conway's chained arrows cover this whole subclass
{K.2.0}
with their 4th parameter d
.
The length of Og
's row
keeps increasing as fast as chained arrows' final parameter value,
so it's definitely the 4th chained arrow which jumps to
the initial subclass on the Ackermann-Conway row.
§5.1.5. Recursive grafting on Og
A row of Ôg0
is equal to
3 parameters of chained arrows or Qy
,
and a row of Ôg1
takes 4 parameters of Qy
.
Suppose we'd graft a similar function Ôg2
on Qy(a,b,c,d)
instead of superpowers, then
Ôg2
expresses the next recursive class
{K.2.1}
with its first row of parameters.
But how much of the value of the 5th chained arrow parameter does that take?
And ultimately – do we need a whole row of chained arrows
to express all recursively grafted Ôgn
completely or will a limited number of chained arrows be sufficient?
Find the key to enumerate the whole level of countable Ackermann jumps
with a single coefficient.
Start with the two axioms of Ôg2
and then write out the necessary calculations like we did before.
 Ôg2(a,b,c,d) = Qy(a,b,c,d)
 Og2(R,y,z1) = Og2(R,Og2(R,y,z))
 Qy(a,b,c,1,1) = Qy(a,b,c,Qy(a,b,c,Qy(a,b,c))) = Ôg2(a,b,c,Ôg2(a,b,c),1)
 Qy(a,b,c,d1,1) = Qy(a,b,c,Qy(a,b,c,d,1)) = Ôg2(a,b,c,Ôg2(a,b,c),d1)
 Qy(a,b,c,1,2) = Qy(a,b,c,Qy(a,b,c,Qy(a,b,c),1),1) = Ôg2(a,b,c,Ôg2(a,b,c),Ôg2(a,b,c),1)
 Qy(a,b,c,d1,2) = Qy(a,b,c,Qy(a,b,c,d,2),1) = Ôg2(a,b,c,Ôg2(a,b,c),Ôg2(a,b,c),d1)
 Qy(a,b,c,d,e) = Ôg2(a,b,c,v,...,d) {v#e v:Ôg2(a,b,c)}
The family of functions indexed as Ôgn
,
share the main rule of Og
and are recursively grafted
on the operatorial Qy
(increasing slightly faster than Conway's chained arrows)
with parameter row length n+2.
With each next grafting n>0 function
Ôgn
jumps to a new subclass on the Ackermann-Conway level.
Give the general formula of recursive grafting on superpowers,
which translates expressions of Ôgn
into Qy
. First the stock, then the shoot.
The first shoot is the front row of
Ôg0
superpowers.
 Ôgn(a,t_{i;},..) {t_{i;}#n1} = Qy(a,t_{i;},..) {t_{i;}#n1}
 Qy(a,t_{i;},..y,z) {t_{i;},#n} = Ôgn(a,t_{i;},..v,..y) {t_{i;},#n v,#z}
    where v = Ôgn(a,t_{i;},..) {t_{i;}#n}
So ultimately yes, the row of chained arrows covers all recursive functions
Ôgn
grafted on Ackermann jumps – both occupy the
Ackermann-Conway class {K.2}
of functions.
That primitive recursion {K.1}
should supply the initial functions here
is not evident (addition is a likely candidate at the root).
But expressing Ôg
in Qy
we cannot graft any lower than on
Ôg0(a,b) = a*b
because the early one-parameter stock
Ôg(a) = a
leads to contradiction in this system.
A row of Og = Og1
acts as a 2nd row,
appended after the superpower front row of Og0
,
which is the stock Og
is grafted on.
The following row of Og2
is like a 3rd row in the square of Og0
or a 2nd row in the square of Og
.
From this it follows that a row of chained arrow
parameters of length n3
is about as fast as an n
row square of Og
, and that a row of chained arrow
parameters of length ng2
generally increases as fast as an n
row square of
Ogg
.
To put it bluntly – a square array of Og
is equivalent to a row of chained arrows –
the superpower grafting square
.
Summarizing our proof for the superpower grafting square
We knew by
analogy
with Og0
that Og1
should cover the next superpowertype subclass
with its first row of parameters.
We defined the operatorial Qy
(an altered form of chained arrows) to
show
that 4 parameter Qy
and a specific expression over the row of
Og1
are equally fast.
In this chapter we extrapolated the method of grafting so it could be used recursively,
taking n2
parameter Qy
as a stock for the first n2
parameters of the next function Ogn
covering the
n1
th recursive subclass.
Because Qy
is exactly expressible by functions
Ogn
and because Qy
uses similar rules as Conway's chained arrows
(and does not increase significantly faster),
it follows that both methods cover the Ackermann-Conway level,
where every n2
th chained arrow →
signifies the n
th Ackermann jump.
# 5.2. Front row bigE
§5.2.1. Home of subscripted arrows
The first row of parameters of the operatorial function bigE continues the
basic definitions
for 3 parameters from chapter 4,
according to the plan introduced by the general formula for subscripted
majority arrows
in chapter 3.
Because bigE is our majority arrow function,
the first 4 parameters a,b,c,d
in E()
are equal to the operations on item a
with iterator b
,
where c
counts the arrowhead operators ^
typed by a
single subscript
d
.
The fabulous fourth parameter d
takes us beyond superpowers and is explained below.
 E(a,1,c,d) = a
 E(a,b,1,1) = E(a,b,b) = a^_{1;}b = a^^{...}b {^#b}
 E(a,b,1,d1) = E(a,b,b,d) = a^_{d1;}b = a^_{d;}^{...}b {^#b}
 E(a,b1,c1,d) = E(a,E(a,b,c1,d),c,d) = a^_{d;}^{...}b1 {^_{d;}#c1} == E(a,..a.,c,d) {#b#} = a^_{d;}^{.. }... {^_{d;}#c a#b1}
The definition of the first row of parameters of bigE follows the same pattern.
 E(a,b,1,...) {1#k k>1} = E(a,b,...) {b#k}
 E(a,b,1,...,n1,R) {1#k k>0} = E(a,b,...,n,R) {b#k1}
 E(a,1,c1,R) = E(a,1,c,R) == E(a,1) = a
 E(a,b1,c1,R) = E(a,E(a,b,c1,R),c,R) = a^_{R;}^{...}b1 {^_{R;}#c1} == E(a,..a.,c,R) {#b#} = a^_{R;}^{.. }... {^_{R;}#c a#b1}
Recursive scheme Nº2 for bigE with a single row of parameters of arbitrary length.
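Recursive scheme Nº2 translates into a compact Python sketch. One base case is an assumption filled in for termination: E(a,b) = a^b, matching a→b; the collapse of leading 1s and the main rule follow the scheme above.

```python
def E(*p):
    """Front row of bigE (scheme Nº2); E(a,b) = a^b is assumed."""
    a, b, *rest = p
    if b == 1:
        return a                          # E(a,1,c1,R) == E(a,1) = a
    if not rest or rest == [1]:
        return a ** b                     # E(a,b) and E(a,b,1) = a^b
    if all(x == 1 for x in rest):
        return E(a, *[b] * len(rest))     # E(a,b,1,..) {1#k} = E(a,b,..) {b#k}
    if rest[0] == 1:
        j = next(i for i, x in enumerate(rest) if x != 1)
        # E(a,b,1,..,n1,R) {1#k} = E(a,b,..,n,R) {b#k1}
        return E(a, *[b] * (j + 1), rest[j] - 1, *rest[j + 1:])
    c, R = rest[0], rest[1:]
    # main rule: E(a,b1,c1,R) = E(a,E(a,b,c1,R),c,R)
    return E(a, E(a, b - 1, c, *R), c - 1, *R)
```

For instance E(2,2,1,1) collapses to E(2,2,2) = 4, and E(2,3,2) = 2^^3 = 16; anything much bigger is out of computational reach, which is rather the point.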
What's so great about the expansion of bigE
is that its reduction is such a lengthy undertaking.
Before the final parameter z
is counted down
a lot of water has flowed under the bridge and the basic iterator b
is fat and juicy, spawning its value high up
to replace the penultimate parameter y=1.
This makes numbers grow very big indeed, much bigger than those defined by Conway's
chained arrow notation.
The rule that a lower positioned iterator (left parameter)
substitutes for an iterator in higher position (right parameter),
can be taken as the defining rule of the operatorial functions
of which bigE and
bigA
are prominent examples.
This so called uploading was deployed in the same way
by the ^_{R;}
arrow
operator subscripts for bigE. The
multiple
substitution of parameters in this type of uploading is less important.
§5.2.2. Race against chained arrows
We've explored the widening gap between
bigE's
subscripted arrows and Conway's chained arrows in an
example in chapter 3.
Subscripted arrows win the race, that's not the point,
but how fast are both functions exactly?
The definition for the first row of
chained arrows
is repeated here.
 a→b = a^b
 a→b→c = E(a,b,c)
 R→1 = R
 R→1→z = R
 R→y1→z1 = R→(R→y→z1)→z
Compare or approximate chained arrows with bigE past 3 parameters,
like we expressed
bigE in bigI.
Using the following equations we positioned Graham's number
between
two expressions of bigE.
 a→b→c→1 = E(a,b,c)
 a→b→2→2 = a→b→(a→b) = E(a,b,a^b) <~ E(a^b,a^b,1,1)
 a→b→3→2 = E(a,b,E(a,b,a^b)) <~ E(a^b,3,2,1)
 a→b→c1→2 = a→b→(..a→b.) {#c#} = E(a,b,..a^b.) {#c#} <~ E(a^b,c1,2,1) = E(a^b,v,v) {v:E(a^b,c,2,1)}
Because the final parameter tends to dominate the result,
we inferred that the terms left and right of the
predecessor sign
<~
are close enough (approximate) within this context.
But for small extreme values where a^b < 5
these equations do not offer proper (algorithmic) estimates.
 a→b→2→3 = a→b→(a→b)→2 <~ E(a^b,a^b,2,1)
 a→b→3→3 <~ E(a^b,E(a^b,a^b,2,1),2,1) = E(a^b,3,3,1)
 a→b→c1→3 = a→b→(..a→b.)→2 {#c#} <~ E(a^b,c1,3,1)
 a→b→c1→d1 = a→b→(..a→b.)→d {#c#} <~ E(a^b,c1,d1,1)
The question is how bigE's 4th parameter increases,
while chained arrows expand to higher parameters.
For extra examples,
click here!
 a→b→c→2→2 = a→b→c→(a→b→c) <~ E(a^b,c,E(a,b,c),1) <~ E(a^b,E(a,b,c),1,2) <~ E(E(a,b,c),2,2,2)
 a→b→c→3→2 <~ E(a^b,c,a→b→c→2→2,1) <~ E(E(a,b,c),3,2,2)
 a→b→c→d→2 <~ E(E(a,b,c),d,2,2)
 a→b→c→2→e1 <~ E(E(a,b,c),a→b→c,e,2)
 a→b→c→d→e <~ E(E(a,b,c),d,e,2)
 a→b→c→d→2→2 <~ E(E(a,b,c),d,E(a^b,c,d,1),2) <~ E(E(a,b,c),E(a^b,c,d,1),1,3) <~ E(E(a^b,c,d,1),2,2,3)
 a→b→c→d→e→f <~ E(E(a^b,c,d,1),e,f,3)
 a→b→c→d→e→f→g <~ E(E(E(a,b,c),d,e,2),f,g,4)
 a→b→c→d→e→f→g→h <~ E(E(E(a^b,c,d,1),e,f,3),g,h,5)
It has become obvious that bigE covers the first row of chained arrows
with its first 4 parameters.
Interestingly we've seen this phenomenon with minority stars versus
majority arrows in
chapter 3.1 –
providing testimony that bigI is as fast as a whole row of chained arrows,
merely by appending a unit value I(a,b,c,1)
<~ E(a_{i}^..,y,z,n) or I(a_{i}*..,yz,n,1) {a_{i}#n}
When n=0 in the above formula we have
E(0,b,c,) < b→c
and from this we conjecture that
E(a,b,c,)
= E(b,c)
and by inverse construction that a negative final parameter value -z
drops a number z
of leftmost parameters, starting with item a.
# 5.3. Front row bigI with waitor
A thousand heads has Purusha,
a thousand eyes, a thousand feet.
On every side pervading earth
he fills a space ten fingers wide.
This Purusha is all that yet has been
and all that is to be...
– Rig Veda 10:90
Under 2011 Construction
§5.3.1. Home of subscripted stars
An operatorial bigI which covers
mixed minority stars
in a single row of parameters is harder to achieve.
With one typesubscript d
stars pass by
a whole row of arrow subscripts
or parameters d,..,z
of
bigE.
It seems the function concept itself
is too slow to contain bigI's operator expansion!
Let's see what actually happens when
mixed stars reduce
and mimic that behaviour in an augmented form of bigI.
= a*_{1;}*_{1;}*^{..}(a*_{1;}*_{1;}*^{..}b) {*#b *#b}
== a*_{1;}*_{1;}*^{.. }... {*#b a#b}
I(a,b,c1,1) = I(w,b,1,1) {w:I[a,1,c,1]} = I(w,b,b) == I(w,..a.,b) {#b#}
This new functional construct w
is called a waitor
and is inserted in an expression O(w,Z)
in the format
O[X]
with special waitor brackets,
meaning that it must wait before certain rules can be invoked for its evaluation.
Normally inner expressions can (or must) be reduced to natural number first,
but not waitors, which have a lower functional precedence
depending on the current situation (reduction phase) of its outer expression.
When in an operatorial bigI I(a,b,c1,R) = I(w,b,1,R)
a waitor w = I[a,1,c,R]
is substituted
for the item parameter a
(in 1st outer position),
this item waitor is not a function,
but it is iterated over and waits until all parameters except the initial
iterator (in 2nd outer position) are resolved.
With just two parameters left the next reduction step will substitute the inner value
b=1 in w
for the outer accumulated value b'
so that I(w,b')
becomes a normal operatorial I(a,b',c,R)
and the process can repeat itself from the start.
But in all other outer positions (except the 1st)
an iterator value is necessary to drive the reduction train, and
the offspring iterator parameters are evaluated first as
I[a,1,R] = a
(as usual).
With the help of a train of item waitors w
,
the rules for the first row of operatorial bigI can thus be formulated.
This continues the basic definition for bigI up to
3 parameters.
 I(a,b,1,...,n1,R) {1#k k>0} = I(a,b,...,n,R) {b#k1}
 I(a,1,c1,R) = I(a,1,c,R) == I(a,1,1)
 I(a,b,c1,R) = I(w,b,1,R) {w:I[a,1,c,R]} = a*_{R;}^{..}b {*_{R;}#c1} = (a*_{R;}^{..})*_{R;}^{..}b {*_{R;}#c *_{R;}#b}
 I(w,b) {w:I[a,1,R]} := I(a,b,R)
Below we'll show how this construction with waitors covers superpowers too,
provided that
multiplication
is reduced to repeated addition by a separate rule –
the repetition step of multiplication –
to precede the above definition.
Consequently the higher rules for I(a,b,c) {c>1}
were justly marked as
provisional
before.
Because waitor items can't just be multiplied
and have to be substituted after a single repetition step,
both step and the result of multiple steps – namely multiplication –
are individual constructs in the evaluation of expressions of bigI.
This means that
addition
keeps its status as root axiom
and that a primary extra axiom must be inserted here.
Without the multiplication rule, expression I(a,b,1)
would explode via I(I[a,1],b,1)
into irreducibility!
 I(a,b1,1) = I(a,I(a,b,1)) == I(a,..a.) {#b#} = a... {a#b1 0_{*}} = a*b1
 I(a,b1,c1) = I(w,b1,1) {w:I[a,1,c]} = I(w,I(w,b,1)) := I(a,I(w,b,1),c) == I(a,..I(w,1,1).,c) {#b#} = I(a,..a.,c) {#b#} = a*^{..}b1 {*#c1}
Recursive scheme Nº2 for bigI with a single row of parameters of arbitrary length.
 1.1. Root axiom: addition [axiom in basic scheme]
 1.3. Primary axiom: repetition step [case in basic scheme]
 2.1. Partial upload: upward substitution of a range of parameters
 2.2. Parameter destruction: row iterator collapse
 2.3. Waitor introduction: row item substitution => 1.4
 2.4. Waitor elimination: row iterator expansion => 1.5
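The waitor mechanics for the basic 3 parameters can be simulated in Python, as a sketch under assumptions: a waitor I[a,1,c] is represented by a plain object that is not evaluated, is iterated over in first position, and only rejoins as I(a,b',c) once two parameters remain. Addition stays the root axiom and the repetition step is the single extra axiom.

```python
class W:
    """Item waitor I[a,1,c]: not a function, it waits in first position
    until the iterator train has been resolved."""
    def __init__(self, a, c):
        self.a, self.c = a, c

def I(x, *rest):
    if len(rest) == 1:                    # two parameters left
        b = rest[0]
        if isinstance(x, W):
            return I(x.a, b, x.c)         # waitor elimination: I(w,b') := I(a,b',c)
        return x + b                      # root axiom: addition
    b, c = rest
    if c == 1:
        if b == 1:
            return x.a if isinstance(x, W) else x
        return I(x, I(x, b - 1, 1))       # repetition step
    return I(W(x, c - 1), b, 1)           # waitor introduction
```

With this in place I(a,b,1) = a*b, I(a,b,2) = a^b and I(a,b,3) = a^^b fall out of addition, one repetition step and the waitor rules alone.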
Under 2011 Construction
# 5.4. Front row bigA
§5.4.1. Comparing pluses with Fa via Fb
We've seen in
chapter 4.2
that bigA's basic 3 parameters are equal to the
superpower function Fb(a,b,c)
,
but the issue still stands whether bigA equals Fb
for its expansion over the whole first row.
We'll deal with the issue here, and show how this depends on your choice of axioms.
Recapitulate the rules we've established for
unary pluses
in the definition of bigA's front row.
 A(a,1) = a+ = a1
 A(a,b1) = a+^{..} {+#b1} = a1+^{..} {+#b} = ab1
 A(a,1,1R) = a+_{1R;} = a+_{R;}^{..} {+_{R;}#a} = A(a,a,R)
 A(a,1,...) {1#k1} = A(a,1,..,a1) {1#k>0} == A(a,...) {a#k1}
 A(a,1,...,1R) {1#+k1} = A(a,1,..,a1,R) {1#k>0} == A(a,..,R) {a#k2} (see +_{R;} def. list upload rule)
 A(a,b1,R) = a+_{R;}^{..} {+_{R;}#b1} = (a+_{R;})+_{R;}^{..} {+_{R;}#b} = A(A(a,1,R),b,R)
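Restricted to three parameters, bigA's rules give a tiny Python sketch. The restriction is an assumption for illustration only; the full row rules with their list upload are not covered here. Note how A(a,b,1) works out to a·2^b, in line with the doubling produced by Fb's rule Fb(a,b1,1) = Fb(Fb(a,a,0),b,1).

```python
def A(a, b, c=0):
    """Unary pluses, three parameters only:
    A(a,b) = a+^{..} {+#b} = a+b, A(a,1,1R) = A(a,a,R),
    A(a,b1,R) = A(A(a,1,R),b,R)."""
    if c == 0:
        return a + b                     # repeated unary plus
    if b == 1:
        return A(a, a, c - 1)            # A(a,1,1R) = A(a,a,R)
    return A(A(a, 1, c), b - 1, c)       # A(a,b1,R) = A(A(a,1,R),b,R)
```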
The unbracketed unmixed right minority function Fb
is to be derived from Fa
again.
We'll begin by defining the first row of bracketed unmixed
subscripted right minority operators for function Fa
.
As with the unmixed subscripted
majority arrows for
bigE,
we'll use a notation where operator counter c
is prefixed to the row of subscripts, so that
×_{c,R;} =
×_{R;}.. {×_{R;}#c}
Presume the generic axioms, and that
addition
applies to the 2 parameter cases Fa(a,b) = Fb(a,b) = ab
The first rule in the list covers the case Fa(a,1,1)
and the main axiom covers Fa(a,b,1) = a*b1
 Fa(a,1,c1,R) = a×_{c1,R;}1 = a×_{c,R;}a = Fa(a,a,c,R)

Fa(a,b1,c1,R) =
a×_{c1,R;}b1 =
a×_{c,R;}(a×_{c1,R;}b) ==
a×_{c,R;}.. {a#b11}
= Fa(a,Fa(a,b,c1,R),c,R)  Fa(a,b,1,..,1R) {1#k k>0} = Fa(a,b,..,R) {b#k1}
In this row definition of Fa
we've copied the last rule straight from the
partial upload
axiom for bigE.
Again Fb
follows from
unbracketing
(_{0}
the main rule of Fa
in combination with
minority precedence.
Keep the other two axioms the same for Fa
and Fb
,
we won't repeat those here, only the main rule for first row Fb
is shown.

Fb(a,b1,c1,R) =
a×_{c1,R;}b1 =
(a×_{c,R;}a)×_{c1,R;}b
= Fb(Fb(a,a,c,R),b,c1,R)
Again
we can check that the main rule of A(a,b1,c1,R)
and Fb
is the same for c>1.
But case c=1 is different, because the
upload
axiom of bigA is more powerful.
Fb(a,b1,1,1R) = Fb(a,b1,b1,R) = Fb(Fb(a,a,b,R),b,b1,R)
Ψ.6. Dimensions
Under 2011 Construction
# 6.1. Bowers' extended arrays
The ancients have used pillars to prop up the teachings since time immemorial,
and Zen practitioners through the years have made them a nesting place.
Linji wants to shake the tree and loosen the nest,
so he points to a pillar and asks, "Is this ordinary or sacred?"
If you understand it the instant it is uttered you are free and unhindered.
But if you hesitate, you have fallen into the pit of brambles...
Do you understand?
When affirmation and negation have merged,
there is no way to speak of it.
The truth of ordinary and sacred, wherever it is encountered,
is, after all, in your hands alone.
– Loori's True Dharma Eye 300.
§6.1.1. Front row BEAF
Thinking of operatorial dimensions, Bowers' Exploding Array Function
comes to mind.
But what does Jonathan Bowers actually do?
And how fast does BEAF increase?
In this subchapter you'll learn how the array notation from the Hedrondude
website
works out up to multiple dimensions.
This is followed in
chapter 7.2
by a rigorous theorem of the dimensional grafting cycle
Bowers well-nigh envisions, when he rewrites a box of parameters of
n
dimensions to its abstract form X^n
of exponentiation – the very brick he started building from!
Thereupon he suggests we should call his master function
BEAF
and that its rules, "..can be continued as long as the structures
[dimension boxes]
can be defined."
We won't keep the typical curly braces {X}
that Bowers applies in his latest versions of the BEAF
(he used angled brackets <X>
in the early days).
Instead in the rules and calculations in this chapter all expressions of
BEAF are rendered in minipopformat with
`X`
database brackets.
In this format we can opt to leave out the outer brackets.
Actually the BEAF is just a function, albeit a dimensional one,
and in a function comma separated sequences of parameters are usually delimited
by round brackets (X)
, so those would make a most intelligible choice.
On the first row BEAF(a,b,c,d)
runs parallel with Bowers'
extended operators
a.{.c}..b
{#d#}
so we can just skip this single entry operator
story.
Remarkable (but impractical) is that both superpower operators
and dimension separators in BEAF
can be quantified by brackets instead of (subscripted) coefficients.
We start with the five axioms that define Bowers' front row of array entries.
The simpler rule for the reduction of a,b
is listed first,
even though the 2nd rule covers the initial case `a,1`
= a
and is arguably more fundamental. The same priority issue concerns rules 4 and 5,
but by convention the dominant main rule is listed last.
Note that in an earlier version of his array notation Bowers evaluated
`a,b`
as addition, and that he later changed this to
exponentiation as a strict application of the 2nd rule.
 BEAF {}

a,b1 =
`a,b`.. {#a 0_{*}} ==
`a,1`.. {#a**b 0_{*}}
= a.. {#a**b 0_{*}} = a^b1  R,1 = R
 a,1,R = a

a,b1,1,..,11R {1#k} =
a,..,`a,b,1,..,11R`,1R {a#k1 1#k}
== `a,..,.a.,1R`.. {a#k1 #b#} 
a,b1,c1,R {b>0 c>0} = a,`a,b,c1,R`,c,R
== `a,.a.,c,R`.. {#b#}
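These five axioms can be tested mechanically for tiny arguments. Below is a minimal Python sketch of the front row (the function name `beaf` is ours, and lists of ordinary integers stand in for unary strings); the rule order mirrors the definition list above.

```python
def beaf(arr):
    """Evaluate a linear (front row) BEAF array for small inputs."""
    arr = list(arr)
    while len(arr) > 1 and arr[-1] == 1:   # rule 2: R,1 = R
        arr.pop()
    if len(arr) == 1:
        return arr[0]
    a, b = arr[0], arr[1]
    if len(arr) == 2:                      # rule 1: a,b = a^b
        return a ** b
    if b == 1:                             # rule 3: a,1,R = a
        return a
    if arr[2] == 1:                        # rule 4: upload on 1st row
        k = 2
        while arr[k] == 1:                 # skip the run of 1 entries
            k += 1
        nest = beaf([a, b - 1] + arr[2:])  # minimally counted down copy
        return beaf([a] * (k - 1) + [nest, arr[k] - 1] + arr[k + 1:])
    # rule 5 (main rule): a,b,c,R = a,`a,b-1,c,R`,c-1,R
    return beaf([a, beaf([a, b - 1] + arr[2:]), arr[2] - 1] + arr[3:])
```

For small inputs this reproduces the superpower values, e.g. beaf([2,3,2]) = 2^^3 = 16 and beaf([3,2,2]) = 3^^2 = 27 – but one extra entry or unit already makes the result physically unprintable.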
BEAF starts off very fast, and speeds away
from Conway's chained arrows in the
4th parameter.
Characteristic is BEAF's upload rule (4th axiom above).
It transports the accumulated value in b
to the high end of the parameter row,
where the rightmost consecutive entry with value 1
is replaced by a direct predecessor
(minimally counted down copy) of the original expression.
Iterator b
is still there, contained in the inner expression,
whose evaluation will magnify b
zillionfold before it takes its position as higher iterator.
Our own upload rule for
bigE
applies a similarly elevated substitution,
but the top substitutable parameter is simply replaced by the value of b
sent straight up the row (not as a subexpression).
BEAF's upload gives it a head start with respect
to the value of its dominant rightmost array entries.
This makes the countdown of its first row very slow –
resulting in Bowers' proverbial infinity scrapers
.
And what are infinity scrapers?
They are numbers which are so big, that even though they are finite, they seem to be reaching for infinity, like a sky scraper is a building that reaches for the sky.
– J. Bowers (Hedrondude) on Big numbers
Bowers had the bright idea to
upload
the lowest array entries a,b
to the higher entries
when those are counted down fully.
This supersedes the mechanism of a,2,c1
= a,a,c
with supersquares.
But in one sweep he replaces the accumulated value of the iterator b
by the small item a
and that's not optimal.
Generally
multiple substitution
doesn't significantly increase function speed,
but when Bowers substitutes each parameter in the left outer row
for item a
, this seems to jam the motor of the iteration train.
A safer approach offers the upload method of
bigE,
where the accumulated value of b
replaces all parameters 1
in series.
To count down E(a,b1,2,..)
by
bigE's main rule
creates the same nesting as BEAF(a,b,2,..)
counted down
with the single nest of its elevated subexpression.
Consider that most b
are Big numbers,
then the effect of adding 1
is negligible.
But the rewards from multiple substitution shouldn't be overrated either.
So at this stage it's far from clear how
bigE translates to BEAF.
In the next section we'll get our hands dirty and make the necessary calculations.
§6.1.2. Company on the Bowery
Now we will approximate Bowers' extended array function
BEAF with Conway's chained arrows → (before)
and our operatorial bigE (after) on the first row.
This is tough, but it helps that we've already expressed Conway's chained arrows
in bigE back in
chapter 5.2.
 a→b = a,b = E(a,b,1) = a^b
 a→b→c = a,b,c = E(a,b,c) = a^^{..}b {^#c}

a→a→2→2 =
a→a→(a→a)
<~
a,3,1,2 = a,a,`a,2,1,2`,1 = a,a,`a,a,a` <~
E(a,3,2,1) = E(a,E(a,a,a),E(a,a,a)) 
a→a→b→2 =
a→a→(a→a→b→2) ==
a→a→(..1.)
{#b#}
<~
a,b1,1,2 = a,a,`a,b,1,2` == `a,a,..a.` {#b#} <~
E(a,b1,2,1) = E(a,v,v) {v:E(a,b,2,1)
}
== E(a,v_{i},..a.) {v_{i}:E(a,bi,2,1)
#b# i≥0} 
a→a→b→3 ==
a→a→(..1.)→2
{#b#}
<~
a,b1,2,2 == `a,..a.,1,2` {#b#} <~
E(a,b1,3,1) == E(a,..a.,2,1) {#b#} 
a→a→b→c1 ==
a→a→(..1.)→c
{#b#}
<~
a,b1,c,2 == `a,..a.,c,2` {#b#} <~
E(a,b1,c1,1) == E(a,..a.,c,1) {#b#} 
a→a→a→a <~
a,2,1,3 = a,a,a,2
<~ E(a1,2,2,2) = E(a1,a1,a1,1)
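Conway's reduction rules, as used on the left-hand side of these comparisons, fit in a short Python sketch (the function name `chain` is ours); only the very smallest chains terminate in practice.

```python
def chain(c):
    """Evaluate a Conway chained-arrow expression given as a tuple:
    a -> b = a^b; a trailing ->1 is dropped; X->1->q = X;
    X->p->q = X->( X->(p-1)->q )->(q-1)."""
    c = tuple(c)
    if len(c) == 1:
        return c[0]
    if len(c) == 2:
        return c[0] ** c[1]
    if c[-1] == 1:                  # X->1 = X
        return chain(c[:-1])
    if c[-2] == 1:                  # X->1->q = X
        return chain(c[:-2])
    head, p, q = c[:-2], c[-2], c[-1]
    return chain(head + (chain(head + (p - 1, q)), q - 1))
```

For example chain((2,3,2)) = 2^^3 = 16, while chain((2,2,2,2)) collapses all the way down to 4, because chains of 2s stay small.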
While chained arrows exhaust their 4th parameter in full,
BEAF requires just a single value d=2.
So the latter already increases much faster, a trend which
continues
for the rest of Conway's row – each appended chained arrow best approximated
by the next value of entry d
in BEAF.
On Bowers'
website
the proof that BEAF
with its 5th entry far surpasses a whole row of chained arrows
is attributed to Chris Bird.
What follows must be similar to Bird's proof –
we translate a row of chained arrows to successors in
BEAF and bigE.
Iterated reductions ==
are notated with an ordinary equals
=
sign from now on.

a→a→a→2→2 =
a→a→a→(a→a→a)
<~
a,3,1,3 = a,a,`a,a,a,2`,2 <~
E(a1,3,2,2) = E(a1,v,v,1) {v:E(a1,2,2,2)
} 
a→a→a→b→2 =
a→a→a→(..1.)
{#b#}
<~
a,b1,1,3 = `a,a,..a.,2` {#b#} <~
E(a1,b1,2,2) = E(a1,..a1.,1,2) {#b#} 
a→a→a→b→c1 =
a→a→a→(..1.)→c
{#b#}
<~
a,b1,c,3 = `a,a,..a.,c,3` {#b#} <~
E(a1,b1,c1,2) = E(a1,..a1.,c,2) {#b#} 
a→...→b→2 {a#d} =
a→...→(..1.)
{a#d #b#}
<~
a,b1,1,d = `a,a,..a.,d` {#b#} <~
E(a1,b1,2,d) = E(a1,..a1.,1,d) {#b#} 
a→...→b→c1 {a#d} =
a→...→(..1.)→c
{a#d #b#}
<~
a,b1,c,d = `a,a,..a.,c,d` {#b#} <~
E(a1,b1,c1,d) = E(a1,..a1.,c,d) {#b#} 
a→... {a#a2} <~
a,2,1,1,2 = a,a,a,a
<~ E(a,2,2,1,1) = E(a,a,a,a)
Because bigE too uses parameter d
to cover the full row of chained arrows,
we think BEAF will approximately match bigE
in velocity over the whole front row.
To prove this we need only compare expressions
where these two operatorials differ significantly,
that is for cases where the upload rule determines the result.
Below are two of those.

a,3,1,1,2 = a,a,a,`a,a,a,a`
<~
E(a,3,2,1,1) = E(a,v,v,v) {v:E(a,a,a,a)
} 
a,b1,1,1,2 =
`a,a,a,..a.`
{#b#}
<~
E(a,b1,2,1,1) = E(a,v,v,v) {v:E(a,b,2,1,1)
}
= E(a,v_{i},v_{i},..a.) {v_{i}:E(a,bi,2,1,1)
#b# i≥0} 
a,b1,c,1,2 {c>1} =
`a,..a.,c,1,2`
{#b#}
<~
E(a,b1,c1,1,1) = E(a,..a.,c,1,1) {#b#} 
a,2,1,2,2 = a,a,a,1,2
<~
E(a1,2,2,2,1) = E(a1,a1,a1,1,1) 
a,b1,1,d1,e1 {d>0} =
`a,a,..a.,d,e1`
{#b#}
<~
E(a1,b1,2,d1,e) = E(a1,v,v,d,e) {v:E(a1,b,2,d1,e)
}
= E(a1,v_{i},..a1.,d,e) {v_{i}:E(a1,bi,2,d1,e)
#b# i≥0} 
a,b1,1,..,2 {1#k} =
a,..,`a,b,1,..,2` {a#k1 1#k}
= `a,..,..a.` {a#k1 #b#} <~
E(a,b1,2,1,..) {1#k} = E(a,v,..) {v#k1 v:E(a,b,2,1,..)
1#k}
= E(a,v_{i},..,:.a.) {v_{i}:E(a,bi,2,1,..)
1#k v_{i}#k #b# i≥0}
The equations to compare BEAF and bigE
along the first row have finally reached their general and exact form.
Provided that value a
is sufficiently large,
both operatorial functions increase at roughly the same speed.
At least a>2 because accumulated values v
are supposed to be very Big, as the length of the reduction cascade indicates.
We feel the rule for single
upload in BEAF
is less pretty than multiple
upload in bigE.
Whether first rows starting with a=2 must increase explosively or extinguish
after many ages
to a mere 4
is a matter of taste.
E(2,b,c,R) {b>2} == E(2,v,v) >> 4 (much larger than
4
)
By analysing their respective upload rules we could have concluded
directly that BEAF and bigE are about equal.
Let's try and see what happens in the BEAF expression
if we add 1
to the huge accumulated value v
after
it is uploaded (as a thought experiment, the reduction rules don't cause this).
The seemingly insignificant 1
spawns a series of parameters
x_{i}
that dwarf the v1
subexpression,
because each contains nest upon nest of v
in its elevated position.
Such is the power of a high iterator.
= `a,..,x,v,z` {a#k x>v1}
== `a,x_{i},..,v,z` {x_{i}#k x_{i}>x_{i1}>v1}
~> E(a,v,..,z) {v#k1 z≥1}
So there is no significant difference between the rules of BEAF
and bigE from the perspective of the length of a row.
Neither walks away from the other, except in the details
(still there are major gaps between numbers that Big).
The general comparison for first rows of BEAF and bigE follows.
Note that cases c=1 get represented on both lines.
 `a,b,c,1,...,2` {1#k} <~ E(a,b,c1,1,...) {1#k1}
 E(a,b,c1,R) {R:≠1,..} <~ `a,b,c,R1` <~ E(a1,b,c1,R)
An equal final value z
gives bigE the edge,
secondarily the value of the 3rd entry c
favours BEAF, and if all else is comparable
Bowers' base entry a
has the upper hand.
These differences are not so significant for the whole of the first row,
and certainly have no import in higher dimensions (plane, cube, etc.)
or what lies beyond…
§6.1.3. Folding dimension boxes
Jabe Bowers
Tyler TX 1969
The way Jonathan Bowers explains the extension of his array notation past the first row is rather chaotic: supernumerary yet incomplete. We must prop up his list of rules to cover every possible expression in BEAF – see our rule 3 which drops a lonely dimension entry. Later we'll take advantage of the omission as we try to give BEAF a significant boost by reexpanding dimensions before they are counted down completely.
Bowers' principal explosive device is a standard dimension box
D[a,b,c]
with base
items a
filling all entries, sizes set equal to the accumulated value of prime
iterator b
and dimensions symmetrically nested and counted by an
inline
number c
which functions at a deeper level.
A recursive definition with two simple rules
to construct dimension boxes follows.
 D[a,b,1] := a,.. {a#b} (initial row)

D[a,b,c1] =
D[a,b,c][c]... {
D[a,b,c]
#b} (subdimension) 
example
D[5,3,2] :=
5,5,5[1]5,5,5[1]5,5,5
< D[4,2,3] := 4,4[1]4,4[2]4,4[1]4,4
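The two construction rules translate directly into string building. A minimal Python sketch (the function name `D` is ours) reproduces the examples above.

```python
def D(a, b, c):
    """Build the standard dimension box D[a,b,c] as a string:
    D[a,b,1] is a row of b entries a, and D[a,b,c+1] joins
    b copies of D[a,b,c] with the separator [c]."""
    if c == 1:
        return ",".join([str(a)] * b)
    sub = D(a, b, c - 1)
    return ("[%d]" % (c - 1)).join([sub] * b)
```

Note that D(a,b,c) contains b^c entries in total, the quantity Bowers uses to classify dimension boxes.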
On Bowers'
website
bracketed numbers (n)
are adopted as separators between n
th dimensional subarrays.
We will write [n]
instead,
which is commonly restricted with n>0 to positive integer values
(in BEAF), and adopt the convention that
[0]
or []
stands in for
,_{0;}
or ,
the single comma
(uncountable by default).
We can also notate a dimension box with a
dual repetition
a,....,
with an extra ,
to append its closing
,^{..}
{,#c1} separator series.
 D[a,b,0] = a and [0] : ,
 I[A,b,c1] := A[c].. {A#b} (subdivision)

D[a,b,c1] =
I[D[a,b,c],b,c1]
== I[.a.,b,i].. {#c1#} (dimension box) 
D[a,b,c][c] :=
a,...., {,..#c #b}
(closed dimension box)
example D[3,3,2][2] := 3,3,3,,3,3,3,,3,3,3,,,
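The dual-repetition form can be sketched the same way, assuming (as the example suggests) that the separator [k] is written as k+1 plain commas and that the closing series appends c+1 commas; the function name `closed_box` is ours.

```python
def closed_box(a, b, c):
    """Closed dimension box D[a,b,c][c] in dual-repetition style:
    separator [k] is rendered as k+1 plain commas, and a closing
    run of c+1 commas is appended."""
    def box(k):
        if k == 1:
            return ",".join([str(a)] * b)
        return ("," * k).join([box(k - 1)] * b)   # [k-1] = k commas
    return box(c) + "," * (c + 1)
```

So closed_box(3,3,2) reproduces the example D[3,3,2][2] above, comma runs and all.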
Written in the format F[X]
of a parametrized
wildcard
or word function,
dimension boxes D[a,b,n]
are introduced in BEAF by the axioms 6 & 7 shown below.
Those subsequences are not preevaluated, but substituted as such
(by : words, not as = values)
for the reduced first row a,b
in the outer expression.
Bowers'
five axioms
for the front row can simply be extended to multiple dimensions.
This foundation is supplemented by
two rules
which are meant to explode
the counted down base a,b
by substituting it for a dimension box.
At first only a series of boxes is issued when multiple
[m_{i}]..
are reduced.
The definition list of dimensional BEAF follows.
 BEAF {}
(preceding rules
take precedence,
wordZ
contains any allowed characters, where pZ := p[m]Z {p>0 m≥0}
)  a,b = a^b (initial rule)

a,1[m]Z {m≥0} = a
(collapse if 2nd entry on 1st row is
1
) 
A1[m_{i}]..1[n]Z {m_{i}≥0 n>0} =
A1[n]Z
(chop off any trailing entry
1
)
so A1,..[n]Z {1#k n>0} = A1[n]Z (drop last row entries 1
)
A[m_{i}]..1 = A (when Z is empty, given that A[n_{i}].. = A)
a[n_{i}]..Z {n_{i}>0} = a,1[n_{i}]..Z = a (reverse rule) 
a,b1,1,...,2Z {1#k} =
a,...,c,1Z {a#k1}
(upload on 1st row)
with c = `a,b,1,...,2Z` {1#k} 
a,b1,2Z = a,c,1Z
(main rule)
with c = `a,b,2Z` 
a,b1[n_{i}]..1,..,p1Z
{
[n_{i}]
#s 1#k1 p>0} =
D[a,b1,n_{i}][n_{i}]..a,..,c,pZ {#s a#k} (explosive dimensional upload)
with c = `a,b[n_{i}]..1,...,p1Z` {[n_{i}]
#s 1#k1} 
a,b[n_{i}]..p1Z
{
[n_{i}]
#s} = D[a,b,n_{i}][n_{i}]..pZ {#s} (explosion)
each D[a,b,n_{i}] as defined above
Note that the rules in a definition list apply in the order given.
For example in the last two axioms, if s=1 the condition that
n_{1}>0 need not be stated explicitly,
because the 4th and 5th axiom evaluate expressions where n=0 first.
Also a character sequence like 1Z
or pZ {p>0}
should be read as p[m]Z
which can simply refer to p,1Z
or generally expand to p[m_{j}]^{..}..1Z
(separator series are discussed
below),
or reduce to p
in case Z
is empty.
Bowers' inline notation [n_{i}]..
opens up a new level, where a maximum number n
cannot increase by evaluation, although predecessor subexpressions
are nested on the right and dimension boxes
m≤n
inserted left.
Inline separators [n1]
repeatedly generate boxes with many
[n]
in the evaluation of BEAF,
so there's no need for a rule to count down separator coefficients. In
rule 3
we assume that Bowers drops them flat when their time has come, that is
when the array on the right side of the separator crashes to
1
.
Airplane analogy
We think of p {p>1}
as the outer iterator,
because it is to be counted down in the outer expression.
Bowers calls this entry the pilot
,
when it's the first iterable value right after prime entry
b
.
He also invented the concept of a copilot
,
which is the outer receptor directly left of the pilot.
The copilot c
is most important as a substitute in the constructive
earlier rules.
But in the last rule above there's no parameter fit to be copilot,
because the pilot entry is seated in first position on the row.
All other entries in this airplane
analogy of Bowers are called passengers
.
But the heavy ones on the right of the pilot have to wait a long time –
stranded on the airport
sequence Z
– before their flight arrives!
§6.1.4. Diacritical remarks
In this section we want to make the transition to the operatorial format lighter
for readers who are more familiar with Bowers' system.
His extended arrays use a different notation
than our Big operatorials and the construction of
dimension boxes
normally isn't our foremost objective, but overall their habitat is the same.
We translate the inline bracketed separator [n]
to a subscripted comma ,_{n;}
with depth delimiter,
so that the number of separated array dimensions n
is equal,
and the function of the support characters ,
for [
to open and _{;}
for ]
to close the dimension enumerator is the same.
Because subscripts can be understood without _{;}
we'll usually omit the closing semicolon, but under the hood it is still there
(at least if we allow single ,
separators).
In BEAF a series of inline
separators is torn apart at the very first reduction step – typically
rule 7
– that addresses it,
sprinkling individual separators [n_{i}]
in between the dimensional arrays each one of them helps define.
The short-lived inline separators
[n_{i}]^{.. }.:.
{n_{i}>n_{i1}}
that Bowers mentions are at best a precision notation
for expressing various dimensional subsequences.
His mixing apparatus is not robust enough to create significantly Bigger numbers,
but it does offer a better resolution
(is more precise) than our simplified (initially additive) commas.
Bowers' idea to mix numbers of separators feels natural.
Our own translation covers all dimensions of BEAF
with countable and mixable separator commas.
These can either be notated as a sequence ,^{..}
of length c1
or enumerated ,_{c;}
with a single coefficient.
Here our separator system starts with a type of comma mixing
that is similar to counting, simply adding
the initial coefficients as defined below.
,_{ci;}^{.. }.:. {,_{ci;}#k_{i} ,_{ci;}^{..}#m 0_{*}} = ,^{..} {,#n1} = ,_{n;} := [n]
where
n1 = (c_{i}1*k_{i})... {#m 0_{*}}
example
a,b,_{2;},_{2;}p1 =
a,b,,,,,,p1 =
a,b,_{5}p1
:= a,b[5]p1 =
D[a,b,5]p
a,b[2][2]p1 =
D[a,b,2]D[a,b,2]p
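This additive comma mixing can be double-checked in a few lines of Python. The sketch below (our naming) counts each ,_{c;} as c+1 plain commas and subtracts one at the end, reproducing the example a,b,_{2;},_{2;}p1 = a,b,_{5}p1 above.

```python
def merge_commas(runs):
    """Merge a series of subscripted commas into one subscript n.
    `runs` lists pairs (c, k): k copies of the separator ,_{c;}.
    Each ,_{c;} stands for c+1 plain commas, and the merged run
    of commas is the single separator ,_{n;} with n+1 commas."""
    total = sum((c + 1) * k for c, k in runs)   # total plain commas
    return total - 1                            # merged subscript n
```

Here merge_commas([(2, 2)]) gives 5, and a single plain comma merge_commas([(0, 1)]) gives subscript 0, consistent with , = ,_{0;}.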
Take
for example
a sequence of [2]..
in Bowers' system.
When we count a single dimension higher with
,_{3;}
or ,,,,
this easily surpasses all physically expressible BEAF
made up of lower type than [3]
separator series.
Compare a few expressions (using various separator styles) as
an exercise.
= a,a,,a,a,,,q {1<q≤
{a,2[2]..p} [2]
#s}
<`a,2[2]..p1` {
[2]
#s} =
a,a,,a,a,_{2;}...p {#s p>1}
<`a,2[3]p1` = a,a,,a,a,_{2;}a,a,,a,a,_{3;}p {s≤
{a,2[3]p}
}= a,2,_{3;}p1 = a,..,..,..,p {#2 #2 #2}
= a,....,p {,..#3 #2}
The
dimension box
a,...., {,..#n #b}
substituting for
a,b,^{..}
{,#n1} is rock solid.
It contains b^n
parameter positions that first have to be counted down
from a
and next from increasingly Big b'
refilling those passenger
entries each time 1
is reached.
Bowers uses that b^n
as a name to classify dimension boxes.
If we'd write a,_{i;}..:.,_{n;}
{,_{i;}..#n #b i≥0 0_{+}}
to define such boxes by counting dimensions with a coefficient,
we fall down to the seemingly absurd context 0_{+}
where subscripted commas do not add but relate by zeration.
For the
dual repetition
to work, the rightmost of a series of commas
,_{p;},_{q;} = ,_{q;}
must be left over.
However, we expect a plain repetition as for example
a,a,^{.. }..,
{,#n #b^b}
to increase faster, because almost b^b
more pockets a
can be refilled, given that b
outgrows n
quickly.
And b^b
on such a scale means next to nothing
– adding an extra 1
to c
in an earlier expression frankly already has Bigger consequences.
So dimension boxes are beautiful and characteristic for
BEAF, but to erect the whole structure
with all the lower dimensions in place is less convincing than
a simple but oversized array of penultimate dimensions.
The higher dimensions spread out to the lower anyway…
What matters most is the
main
engine of the first row.
And only when the "main row" is eventually counted down to two parameters,
the accumulated value of prime iterator b
explodes into a new oversized
dimension box.
One by one every subdimension is then counted down from the left
until it is discarded.
Here we've met with a shortcoming – apart from the so-called prime block
where explosions to multiple dimensions of size b
take place –
dimension sizes can only be counted down in BEAF,
not expanded directly.
# Boosting BEAF
Instead of just collapsing counted down dimensions,
a better deal would give a rule that upgrades the size of existing dimensions.
Subarrays can freely be increased when they are positioned left of a higher
pilot
(outer iterator) as it is counted down in a long recursive loop.
This means that (except for the unique final row) BEAF
should keep a single entry on rows ending on 1
to be able to refill them and expand their size by Bigger b
.
First we limit our
old axiom
to a simpler
version
that leaves room for the resurrection of counted down rows.
Then the new axiom follows, which performs three actions:
On the left it replaces a,b
by a prime block
in Bowers' grand style.
In the middle it reinflates further dimensions consisting of single parameters
1
with our method of
repeating
penultimate dimensions.
On the right its "copilot" value b
may look small and simplistic,
but we've upheld that its strength is comparable to Bowers' inner expression in
section 6.1.1 before.
Thy mighty triple formula…
(Brahma the creator, Vishnu the preserver and Shiva on the outer side :3)
 A1,1,,Z = A1,,Z

a,b,_{m;}1,_{nj1;}...,_{p;}2Z
{1#k1 n_{j}>0 p>0} =
a,....,a,_{nj;}..,_{nj1;}.:.,_{nk1;}b,_{p;}1Z {,..#m #b a#b a,_{nj;}..#k}
We don't define series a,b[m_{i}]..
here like Bowers tends to do.
This obscure and evanescent construct in his notation should reduce to the same
s
subsequent dimension boxes as by
rule 6 & 7
of BEAF.
Note that at the illipsis .:.
two new comma series (typed by
n_{j}
and n_{j}1
)
are inserted at each step
j to k.
Exploding "orphan passengers" 1
to a series of a
within predecessor n_{j}
dimensions of size b
delivers a significant boost to BEAF, we hope.
It protects arrays from being eaten away slowly from the left,
as long as "crashed pilots" can be brought back to life
and restart their career.
Under 2011 Construction
§ The Biggest number
So what's the Biggest number?
That depends on the kind of system you'd like to represent your numbers in
and the resources (time, money) converted to that system.
Suppose you'd only allow a number of fingers stuck up in the air,
counted off verbally by a person walking by.
Already a lot of motivation and organisation is required
to move past the number 1000,
for which more than a hundred foolish people
would have to hold up both hands for up to an hour.
Dependent on the verbal skills of the person counting
and the perseverance of the last batch of participants,
we estimate the Guinness record of physical finger counting
at no more than ~ 10^5.
On the other hand you might approve of decimal or hexadecimal numbers
being printed out on sheets of paper.
Then let me tell you, there have been print outs of the digits of the number Pi
running past the million. And if you're prepared to fell some more trees –
with 5000 signs on an A4 sheet, double sided,
500 in a package, you'd need 1661
packages to machine-write a hexadecimal number of size ~ 10^10^10.
We'd advise against it. Why not allow digital storage instead? On your
1 TB hard disk about 2^48 bits are available
to write a number as large as ~ 10^10^14.
For the same reason we expect the Blu-ray disc
won't turn out to be a commercial success,
we estimate the digital capacity eventually used by the human race
not to exceed 2^128 bits –
with shared computing this implies a practical maximum binary number
of at most ~ 10^10^38.
Broadly speaking, the all-time random number record
will have to lie below 10^^4, for
all the information our universe will ever create is
far smaller.
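The storage estimates above are easy to double-check: a memory of m bits holds a binary number up to about 2^m, whose count of decimal digits is m*log10(2). A quick sanity check in Python (the function name is ours):

```python
import math

def double_exponent(bits):
    """Decimal 'tower' exponent of the largest number storable in
    `bits` bits: log10 of the number of decimal digits of 2^bits."""
    return math.log10(bits * math.log10(2))

# 2^48 bits (the hard disk estimate above) -> a number near 10^10^14,
# 2^128 bits (all human storage)           -> a number near 10^10^38.
```

So the rounded tower exponents 14 and 38 in the text check out.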
Yet with mathematical means it's easy to build Bigger numbers.
That's what this article is about.
Knuth already made it so much easier for you with his uparrows.
And then comes Graham's Og, Conway's chained arrows,
bigE and bigI in the first row,
dimensions with countable separator commas,
subscriptable commas in bigI style, the Big type of functions,
countable Big function types in a cycle, operable cycles, higher cycles,
essential characters, metacharacters, enumerable metalevels, etc.
Mathematical methodology seems endless, but each time a new axiom comes into play
we have to be able to cope with that.
The human mind does have its limitations…
Also the number of welldefined axioms for any operatorial number system has
its history (as in the book) and therefore its physical limits,
and can't be standardized and automated (or can it?)
A lot more water has to flow under the bridge
before we might be able to hint at a
practical mathematical limit for the largest number.
Logically there can be no limit,
every operatorial system is extendible and every format of extension
can be enumerated with a counter that is iterable in a larger system.
One thing we do know: working exclusively with integers,
or discrete units 1
and 
,
neither infinity nor the continuum of real numbers is reached.
The unit of infinity ω
is bigger than all natural numbers.
Within operatorial context the first infinity
ω_{1;}
becomes countable by the same methods we counted 1
.
Infinitely many next ω_{n}
were expressed by the Operatorial Ω(X)
.
Then we suggested the higher infinities ψ
and its subscript functionality Ψ
to lie beyond.
Greek letter counting was to be our next hobby, etc...
Infinities know no end.
The concept of 0=1 would be a candidate for the largest sort of infinity,
were it not that the system which produces it is at once in an axiomatic state of demise.
Perhaps physically the collapse of a Big or infinite system isn't that straightforward
– it may take a while before we can witness all of its constituent parts
fallen to the ground.
In the mean time 0=1
may sort of exist and even become extended, who knows?
Clever mathematicians may think of new ways to define new concepts.
The future is definitely uncertain.
But why should we make the effort to describe an extremely Big number
with mathematical rigour, when hardly anybody appreciates it?
A grandmaster chess player isn't necessarily best qualified
to explain to youngsters how to play the game.
Worse than that – when our systems become too large,
we tend to lose oversight.
One day a mathematical computer may become so proficient at building operatorials,
that nobody understands the results any more.
It's here where the metaphysical definitions
come into their own right, whose only boundary is set by intelligent imagination.
This profoundly fantastic aspect makes them conspicuous,
but their lack of scientific basis is compensated by ease of understanding.
Man has always appreciated the hugeness of What can be
like this.
# St. Peter's Subway
We'll propose a metaphysical definition of
St. Peter's Subway as the Biggest number ever conceived by man.
It speculates on the potential growth of the axiomatic power (and what have you)
of mathematics – this thing that mathematicians do.
And its description is not absolute, but recursive.
St. Peter's Subway starts off as a parable.
Adam, the first man, comes to the gate of heaven and is asked for the Biggest number.
He holds up his hand and St. Peter counts the fingers.
"Five is not much", he says, but following the rules
Adam is now given 5 new lives to come up with a higher number.
"Had he pointed at his missing rib, he'd have been walking above the clouds
instead of below", St. Peter tells his boss, who shakes her head 5 times.
After his lives are over, Adam returns at the gate and is asked again,
"What is your Biggest number?"
Adam's last life had been in India, so now he knows decimal numbers,
and he starts writing out his solution on the patch of sand below St. Peter's feet.
It is a one followed by a hundred zeros.
"Googol
is not too many", St. Peter observes, but still Adam is awarded
a googol number of lives. God just shakes her head as St. Peter tells her,
"Had he drawn a plain circle he could have stepped through."
After a long time Adam returns and he looks almost a Buddha –
his head shaven, prayer beads moving round his fingers.
St. Peter stands firmly at the gate of heaven and asks him the same question,
concerning the Biggest number.
Thereupon Adam Buddha writes down a wonderful mathematical formula in the style of
bigI, which expresses a number far beyond human imagination.
"What's that?", St. Peter asks.
"It's an extension of the operatorial bigI", Adam explains,
"I found it on the internet".
"The internet?", St. Peter sighs.
"Yes, I was busy reaching enlightenment, you see,
I'm fed up with returning time and time again."
"I see…"
And so Adam has to spend an unimaginable aeon of lives down in the worldly realm.
God gets a little itchy as St. Peter tells her the news,
"Had he pointed to the lack of his empty head,
the man from nothing should have gone free".
Before Adam returns, St. Peter passes away,
as there is a limit to each man's years, even if that man is a divine being.
In turn St. Peter has to try NOT to enter the Greek hell,
known as Hades, which is just one big solid mist without a scream.
Outside of Hades the Hell Hound is waiting, and the animal barks a question
at the approaching Saint.
"What is the Biggest number of bones?"
As St. Peter repeats the last answer he has heard from Adam in heaven,
he feels lucky the Hound is kept on chains or else he would have been torn apart
by the sharp teeth of this ferocious Beast.
"Why?", growls the multiheaded dog named Kerberos,
"This time I will answer you!"
"If a host of mathematicians as huge as the number you just told me
were to search for the Biggest number and send an envoy to me after
their civilisation had perished to hand me their solution,
I would instantly raise a new mathematical army to the last given number
to search for a way to formulate an even Bigger number.
And I would continue sending out mathematical souls and receiving the envoy
until one would come back to me whose answer was about the same as last time.
Then I would decide to repeat this process of sending and receiving
an extra number of times as Big as the final answer,
just to make sure the solution remains in essence the same.
Only then would I accept that final number to be the Biggest number
conceivable in mathematics."
"It is that many of my dog boned lives that you, Peter,
must suffer the sulphur fumes in the metro system of Hades",
spoke the mythical Cerberus.
"Two questions before I go", said Peter, "How long does your dogship usually live,
and what if these mathematicians keep returning
with entirely different functions of numbers?"
"If so my former method would become the initial event of a successive process
to which the same rules of formation and termination and eventually succession apply",
refereed the Dog of dogs.
"And regarding the first question you've asked,
I live exactly for as long as you can stay, exactly."
"Wu!"
The Saint, a Dog, who can tell what follows next?
These days nobody stands at a loss of words, nor sits totally perplexed.
Such a painful process
0—0
the bitter end...endarkenment.
I beg you
—
a smaller question please
Σ Reference
Σ.1. Dictionary of Signs
Index of mathematical symbols – with a name and a short explanation. The name of the symbol links to a passage in the article which declares its use or elaborates on it.
0  Zero – the empty natural number, reduced by removing the sign. 
1  One – the number 1 or the unit 1 that composes the natural numbers 1.. 
2  Two – the natural number 11 as a character, substitute with X2Y {2:11} 
3  Three – the natural number 111 as a character sign. 
4  Four – the natural number 1111 as a character. 
5  Five – number 11111 , etc. 6,7,8,9 – but then 10 is an oldstyle ten. 
ω  Omega – the infinite unit ω > 1.. counts the set of all natural numbers. 
Inverse unit – generates negative numbers *n and inverse functions. 
−  Minus – the oldstyle operator of subtraction 1−10=−9 is not countable. 
+  Binary plus – the oldstyle operator of addition 1+9=10 is never counted. 
+  Unary plus – the left associative subscriptable operator a+_{+..;}.. of bigA. 
*  Star operator – single * for multiplication, multiple **^{..} superpowers. 
^  Arrow operator – single ^ of exponentiation, multiple ^^{..} superpowers. 
×  Abstract operator – to illustrate locally defined operations in an example. 
/  Division operator – for division a/b , roots a//b and superroots ///^{..} 
%  Remainders – supermodulo %^{..} of the above superdivision operation. 
£  Logarithmic operator – for £^{..} superlogarithms, e.g. 2££65536 = 4 
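The superlogarithm in the £ entry inverts repeated exponentiation: 2££65536 = 4 because taking the base 2 logarithm of 65536 four times (65536, 16, 4, 2, 1) brings it down to 1. A minimal Python sketch under that reading, sticking to exact integer arithmetic; the helper names ilog and slog are ours, not part of the book's notation:

```python
def ilog(base, n):
    """Integer logarithm: the largest e with base**e <= n."""
    e = 0
    while base ** (e + 1) <= n:
        e += 1
    return e

def slog(base, n):
    """Superlogarithm: how many times a logarithm must be taken
    before n drops down to 1 (or below)."""
    count = 0
    while n > 1:
        n = ilog(base, n)
        count += 1
    return count

print(slog(2, 65536))  # -> 4, the dictionary's example
```

For values that are not exact towers this integer variant simply rounds the superlogarithm down.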
\  Backslash – insert a guest operation into a host series for more precision. 
E  Decimal power – scientific notation, example 6.8E9 ~ 6793605222 
†  Dagger – signifies that the previous calculation is cut short in bigO style. 
∞  Infinity – the utopia ↑∞ of a higher number, or infinity not composed of units. 
↑  Limit up – approach a number from below, or lim n↑ω towards infinity. 
↓  Limit down – let a variable approach a limit from above, usually towards zero. 
←  Unchained arrow – countable operator to construct the operatorial bigY. 
→  Chained arrow – parameter separator in Conway's chained arrow notation. 
!  Factorial – unary operator of the superfactorial and fastorial functions. 
?  Choice – random operator n? picks an integer in the range from 0 to n 
$  Sequality – function $(R) returns 1 if all its arguments are equal, else 0 
x  Mean – arithmetic mean value, the common average x_{i;}../n {x_{i;}#n} 
,  Separator – sign between two parameters or ,.. array dimensions. 
;  Delimiter – marks the end of a row of type subscripts a,_{0;}b,_{d,e;}c 
@  Generic char – standin for any sign possible in the expression context. 
.  Dots – select a subsequence in an expression. Use a brown . for decimals. 
R  Row – row of parameters or subscripts of arbitrary length, also S , T . 
Κ  Kappa – signal a set of functions of similar signature {K.n.m} 
=  Equality – in a=b expression a equals the b that results from its reduction. 
≡  Equivalence – when two or more different forms of reduction seem possible. 
~  Approximation – estimate two numbers a~b as about equal in some context. 
≠  Not equal – comparison a≠b where number a does not equal b 
<  Less than – comparison a<b where number a is smaller than b 
>  Greater than – comparison a>b where number a is larger than b 
≤  Less or equal – comparison a≤b where number a is not larger than b 
≥  Greater or equal – comparison a≥b where number a is not smaller than b 
&  And – the logical expression a&b means both a and b are true. 
Or – the logical expression ab means either a or b or both are true. 
:  Substitution – replaces a variable by an expression with equal := value. 
#  Metaoperator – to repeat a sequence of signs within a preceding expression. 
♣  Clubs – shorthand for a dimension box :: where value = sizes = dimensions. 
10  Radix – generic number base, counting with fingers is based on 10 ten. 
TB  Terabyte – data capacity of 2^40 byte = 2^43 bit, priced at €50. 
:.  Illipsis – or .:. marks a sequence with a changing metavariable. 
::  Dimension box – dual repetition of item a along b sizes and b dimensions. 
()  Brackets – subdivide an expression or function for ordered evaluation. 
``  Minibrackets – in many evaluation trains one bracket sign will be sufficient. 
[]  Waitor brackets – special brackets for waitors complement minibrackets. 
{}  Set brackets – denote the class level of functions or a set of unique elements. 
{}  Metabrackets – contain metastatements that denote an expression. 
(_{n}  Temporary brackets – are removed after n reduction steps. 
0_{*}  Star space – the default sign context adds adjoined units and variables. 
0_{^}  Arrow space – the classic sign context that multiplies adjoined variables. 
=:  Double equal – reduction which does not apply to nested waitor expressions. 
>  Abstract step – a locally defined single step from a left to a right expression. 
=>  Right implication – the left expression L=>R implies or creates the right. 
<=  Left implication – the right expression implies or causes the left. 
<~  Predecessor – two algorithmically close expressions, the right comes before. 
~>  Successor – two expressions are algorithmically close, the right comes next. 
>>  Abstract iteration – multiple steps from the left to the right expression. 
==  Equal by iteration – iterating onwards the two expressions will be equal. 
<=>  Double implication – both expressions imply the other, so both are equal. 
The precedence rules for most signs are illustrated in
chapter 2.5.
The purpose of the four list markers for function definition lists
is discussed in
chapter 4.0.
Variables, wildcards, subscripts and colour conventions are explained in
chapter 1.1.
Mathematical constants can have their signs π ê î
capped to avoid conflict with our standard variables,
and to celebrate that they are generated by the exponential
^
functions.
# Standard variables
The letters of the alphabet are used as variable names that come in groups.
abcde – operands or parameters at the left side of a function array. 
fgh – coefficients added to a function or specific parameter types. 
ij íì – incrementors: add n times, up from 1 or down from n 
lo – take care, lookalikes of 1 and 0 
kmnpq – natural numbers, 0 inclusive, e.g. to count indices. 
rst – denote the size of a dimensional or higher construct. 
uvw – typify substitutes in the evaluation of an operatorial. 
xyz – the parameters at the right end of a function array.
Σ.2. Pedigree of Functions
Explains the context in which a signed function is constructed.
F  Function – example function, as is G , enumerable with a coefficient F_{c} . 
Γ  Gamma – Euler's factorial function over the real and complex numbers. 
φ  Phi – lower recursive function, which is iterable by a higher function. 
ψ  Psi – higher recursive function, which iterates over a lower function. 
A(  bigA – operatorial function based on the +_{R} unary plus operator. 
E(  bigE – operatorial function based on the ^_{R} arrow operator. 
I(  bigI – operatorial function based on the *_{R} star operator. 
O(  bigO – an umbrella function to express generic properties of operatorials. 
O[  Waitor bigO – inner expression, waiting for special evaluation rules to apply. 
U(  bigU – operatorial function derived from the factorial operator. 
Y(  bigY – operatorial function based on unchained arrows. 
Fa  Special function – denoted by two or more characters, like Grz or Ack_ψ' 
int  Integer part – discrete function that returns the integer part of a number. 
log  Logarithm – inverse power log(10^m) = m where radix 10 is generic. 
BEAF  BEAF – Bowers' Exploding Array Function, a famous Big number algorithm. 
Σ.3. Glossary of Concepts
Ackermann function – iterate over the enumeration of a family of primitive recursive functions to make the first Ackermann jump on the Ackermann-Conway level {K.2} entering class {K.2.0} of Ackermann functions. 
Algorithm – a set of instructions to construct or reduce an input sequence step by step, and to eventually output a richer construction or a simpler but larger sequence such as 1.. or to never halt. 
Chained arrows – John H. Conway's recursive extension a_{i}→..→y→z of Superpowers, that covers the whole Ackermann-Conway level of functions, and jumps beyond in the form of superchained arrows →→.. 
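Conway's reduction rules for chained arrows can be sketched directly, though only the very smallest cases are feasible to compute. This naive recursive sketch is our own illustration, not the book's notation:

```python
def chain(*c):
    """Naive evaluation of a Conway chained arrow expression.

    Rules: a lone number is itself; p -> q = p^q; a trailing 1 is
    dropped; X -> 1 -> q = X; otherwise
    X -> p -> q = X -> (X -> (p-1) -> q) -> (q-1).
    """
    c = list(c)
    if len(c) == 1:
        return c[0]
    if len(c) == 2:
        return c[0] ** c[1]
    if c[-1] == 1:                      # X -> 1  =  X
        return chain(*c[:-1])
    if c[-2] == 1:                      # X -> 1 -> q  =  X
        return chain(*c[:-2])
    inner = chain(*c[:-2], c[-2] - 1, c[-1])
    return chain(*c[:-2], inner, c[-1] - 1)

print(chain(3, 3, 2))  # 3 -> 3 -> 2 = 3^^3 = 7625597484987
```

Already 3→3→3 is far beyond what this sketch can finish, which is exactly the point of the notation.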
Dimension box – a box or multidimensional array or discrete hypercube, where standard boxes are iteratively stashed inside standard boxes, until the number of dimensions and all dimension sizes are equal. 
Dominant rule – rule in the definition of an operatorial function, dominating the evaluation of natural expressions thereof, because it contributes most to the size of the resulting numbers (function speed). 
Dual repetition – two repetitions of characters AB.... {B..#m #n} that work in tandem to create a series of open repetitions, which is structurally comparable to exponentiation. 
Enumeration – externally keep count of some construct (such as a function) with a natural number coefficient. A so-called predecessor was previously enumerated, next comes the successor. 
Feedback – feeding the result of an expression back into one or more variables in the same formula. Higher recursions are usually first found by enumeration of iterated feedback. 
General recursion – any function or construct which grows by substituting a predecessor for a parameter or for an inner subordinate construct. Usually only called general past the level of primitive recursion. 
Graft on a stock – when the left part of a function array (the parameter stock) is defined with different rules than the remaining function space (the expanding row shoot), the function is said to be grafted. 
Operatorial – array functions bigO that we construct following the natural expansion of some operator. All operatorials count down final iterators to z=0 and then apply O(X,) = O(X) as a generic axiom. 
Power tower – an expression with powers a^b_{i}... {^b_{i}#n n>1} of n1 storeys or n exponents, evaluated from right to left. Small power towers can be used to express astronomical improbabilities. 
Primitive recursion – the class of functions starting from addition of 1 and closed under iterated substitution of earlier primitive recursive functions for any number of parameters. 
Structure – a structure in its ground state is a pure construction, composed of smaller patterns, with characters devoid of arithmetical meaning. It can be filled and grafted on and interpreted on different levels. 
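The right-to-left evaluation of a power tower described above is a simple right fold over the list of storeys. A short sketch; the function name power_tower is our own:

```python
from functools import reduce

def power_tower(storeys):
    """Evaluate the power tower a^b^c^.. from right to left,
    so 2^3^2 groups as 2^(3^2) = 2^9 = 512."""
    return reduce(lambda acc, base: base ** acc, reversed(storeys), 1)

print(power_tower([2, 3, 2]))     # -> 512
print(power_tower([2, 2, 2, 2]))  # 2^^4 -> 65536
```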
Superpowers – operations higher than exponentiation a^b that cover primitive recursion. Also known as superexponentiation. Specially the subclasses n of multiple stars *^{..} {*#n1} or arrows ^^{..} {^#n} 
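The multiple-arrow operations of this entry behave like Knuth's up-arrow notation, where n arrows with n>1 iterate the operation with n-1 arrows. A naive recursive sketch of our own, usable only for very small arguments:

```python
def arrow(a, n, b):
    """a ^..^ b with n arrows: n=1 is plain exponentiation,
    and n arrows with b steps unfold into (n-1)-arrow operations."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(2, 2, 4))  # 2^^4 = 2^2^2^2 = 65536
```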
Upload – substitute the counted down value of higher iterators (eventually y=1) for the accumulated value of a lower left parameter (notably b ). Rules for uploading give the biggest boost to our operatorials. 
Waitor – an inner expression within waitor brackets that waits in place after it is introduced, until its outer expression has been reduced maximally. Then the waitor is applied to restart the evaluation train. 
Φ Big Family
Φ.2. Artwork
Our house photographer is Tess Jungblut.
Photography and politics weblog
Tessonome
(leave a message after the beep).
Under 2011 Construction
A table will follow with the artist's info (copyright), the reason to insert it in the book (context), and thumbnails zoomable to large pictures.
Φ.3. Commentary
Emails, visitor comments, answers and questions, FAQ, Guestbook.
* Note on: bigO*, generic operatorial function
O
The nomenclature is confusing here – the names "bigO" (bigOh) and
O()
and "bigOmega" and "Omega" are already in use as measures for algorithmic complexity or function speed. The primitive bigOh notion of algorithmic order just accounts for the fact that the size of the numbers an operatorial function produces is determined by the value of its highest positioned parameter.We feel our operatorials are entitled to the name bigO, because of the shared property that all parameters are put in order of algorithmic weight. This allows us to compare a given operatorial on a whole range of parameters. To decide which operatorial bigO increases faster we first check the lengths of the parameter rows, and then the values of the rightmost differing parameter.
We originally opted for the generic function prefix O() because our operatorials are constructed on top of a system of operators. Also, a numerical property of our operatorials O(X,z) is that final iterators z are counted down to 0 (zero) and then dropped.
Related Big number functions such as Conway's chained arrows may be called operator functions.