FGH_w

4.2.2

The Fast Growing Hierarchy below ordinal ω

4.2.1 <<PREV

The Primitive Recursive Functions of FGH

The fast growing hierarchy begins simply enough. It begins with the simplest strictly increasing, everywhere abundant natural-to-natural function, the Successor function, commonly denoted S(n).

Firstly let's establish the natural numbers. They can be defined simply by these two statements:

1. 0 (zero) is a natural number

2. The successor of a natural number is a natural number

The implication here, is that the natural numbers are infinite and constitute the set:

{0,1,2,3,4,5,...}

However, it is not necessary for us to conclude that they are an infinite set. It is enough simply to say that this defines anything we might call a natural number and leave it at that. Any number which is natural should ultimately be reducible to a finite number of applications of these two statements.

A strictly increasing function, is one such that:

f(m) < f(n) : m < n

The slowest possible function with this property is the return function (the identity function):

R(n) = n

An everywhere abundant function, is one such that:

n < f(n) : ∀n

In other words, a function such that the output is greater than the input, for ALL inputs. The simplest function with both these properties is the successor function defined as:

S(n) = the successor of n

Basically S(n) = n+1, though since the Successor is a fundamental function used in the definition of all other recursive functions, including addition, it should not itself be defined using other functions. The simplest way to explain it to someone is that the successor function returns the next number in the sequence of natural numbers. So for example S(3) is equivalent to asking "what comes after 3", answer: 4. Thus S(3) = 4.
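Both properties are easy to check mechanically for small inputs. Here's a quick Python sketch (the helper names are my own, purely for illustration):

```python
def strictly_increasing(f, limit=50):
    # f(m) < f(n) whenever m < n, checked exhaustively below the limit
    return all(f(m) < f(n) for m in range(limit) for n in range(m + 1, limit))

def everywhere_abundant(f, limit=50):
    # n < f(n) for every n below the limit
    return all(n < f(n) for n in range(limit))

R = lambda n: n        # the return function
S = lambda n: n + 1    # the successor function

# R is strictly increasing but not everywhere abundant; S is both
print(strictly_increasing(R), everywhere_abundant(R))  # True False
print(strictly_increasing(S), everywhere_abundant(S))  # True True
```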

The standard notation for FGH for the Successor is:

f0(n) := S(n)

That is, it's the "zeroth function" of the fast growing hierarchy. The choice of the successor for the "base case" is not arbitrary. It's based on Peano arithmetic, where the Successor function is recognized as the fundamental operation.

From here on out we create new functions from old functions using two basic techniques: recursion and diagonalization. We will be using recursion in this article, but we won't start using diagonalization until the next article. Define a new function f1(n) such that:

f1(n) = f0(f0(...(f0(f0(n)))...)) w/n f0s

In other words, embed n inside "n" f0 functions. This is a form of "recursion". Using a functional power we can truncate this statement to:

f1(n) = f0^n(n)

We can quickly show that f1(n) = 2n. Since each application of f0 adds 1 to the input, we find that:

f0(n) = n+1

f0(f0(n)) = f0(n+1) = n+1+1 = n+2

f0(f0(f0(n))) = f0(n+2) = n+3

...

and therefore

...

f1(n) = f0^n(n) = n+n = 2n
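The derivation above is easy to confirm in code. A minimal Python sketch, writing the functional power as an explicit loop:

```python
def f0(n):
    return n + 1            # the successor function

def f1(n):
    # f1(n) = f0^n(n): feed n through f0, n times
    result = n
    for _ in range(n):
        result = f0(result)
    return result

# each pass adds 1 and there are n passes, so f1(n) = n + n = 2n
assert all(f1(n) == 2 * n for n in range(100))
```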

This may not seem like a very fast growing function, but it won't take many steps for us to reach some really fast growing functions. In fact we'll quickly surpass most of the stuff from Section III.

Now any time we have some function fa(n), we can create a new function as follows:

Let:

fa+1(n) = fa^n(n)

Each application of this rule we'll call a single application of "base-recursion".

From f1 we now generate f2:

f2(n) = f1^n(n)

This produces a function which exhibits exponential growth. This isn't difficult to show. Just consider that f1(n) = 2n. So every application of f1 doubles the input. Thus we get:

f1(n) = 2n

f1(f1(n)) = f1(2n) = 2*2n = 4n = (2^2)n

f1(f1(f1(n))) = f1(4n) = 2*4n = 8n = (2^3)n

...

therefore

...

f2(n) = f1^n(n) = n*2^n
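Again this is easy to confirm in code. A short Python sketch, using the result f1(n) = 2n from above:

```python
def f1(n):
    return 2 * n            # proved above: f1(n) = f0^n(n) = 2n

def f2(n):
    # f2(n) = f1^n(n): n doublings applied to n
    result = n
    for _ in range(n):
        result = f1(result)
    return result

# n doublings of n gives n * 2^n
assert all(f2(n) == n * 2**n for n in range(64))
assert f2(20) == 20_971_520
```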

This produces exponential numbers. Here is a table with some of the lower values:

n : f2(n) = n*2^n
1 : 2
2 : 8
3 : 24
4 : 64
5 : 160
6 : 384
7 : 896
8 : 2048
9 : 4608
10 : 10,240

Even by ordinary day-to-day standards these aren't all that impressively large. However it doesn't take long for these numbers to get astronomically large. For example f2(20) = 20,971,520. That's roughly twenty million. By the time we hit f2(100) we get 126,765,060,022,822,940,149,670,320,537,600. A 33-digit number. That's decently astronomical.

Still we've seen things of this magnitude before way back in Section II. But remember, even though the sequences start out slow they will quickly build up steam. Let's try the third application of recursion to f0:

f3(n) = f2^n(n)

It is at this point that the ordinary notations break down and we don't have an easy way to express this function explicitly instead of recursively. However we can come up with a good lower bound. Simply observe that:

f2(0) = 0*2^0 = 0*1 = 0

f2(n) = n*2^n ≥ 2^n : n>0

We can then say that:

f2(f2(n)) ≥ f2(2^n) ≥ 2^2^n : n>0

So in general we have:

f3(n) = f2^n(n) ≥ 2^2^...^2^n w/n 2's = E[2]n#n ≥ E[2]1#n = 2^^n : n>0

:: f3(n) ≥ 2^^n : n>0
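The bound can be spot-checked in Python for the only inputs small enough to compute; f3(3) already has over a hundred million digits:

```python
def f2(n):
    return n * 2**n

def f3(n):
    # f3(n) = f2^n(n)
    result = n
    for _ in range(n):
        result = f2(result)
    return result

def tower2(n):
    # 2^^n: a power tower of n 2's
    result = 1
    for _ in range(n):
        result = 2**result
    return result

assert f3(1) == 2 and tower2(1) == 2
assert f3(2) == 2048 and tower2(2) == 4
assert all(f3(n) >= tower2(n) for n in (1, 2))
```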

Here are some approximate values for very small inputs:

f3(0) = 0
f3(1) = 2
f3(2) = 2048
f3(3) ~ E8#2*
f3(4) ~ E21#3*
f3(5) ~ E50#4*

(*See Hyper-E article)

f3(3) is a very large number (roughly 10^10^8), and f3(4) is even more insane (roughly 10^10^10^21). Here we have that f3(n) exhibits roughly tetrational growth (in fact it's slightly faster and will eventually overtake any tetrational function, though somewhat slowly).

Applying yet another recursion, we would get that f4(n) is roughly pentational growth, f5(n) is roughly hexational growth, etc.

In fact it isn't difficult to show that:

f3(n) ≥ 2^^n

f4(n) ≥ 2^^^n

f5(n) ≥ 2^^^^n

etc.

By allowing for any number of applications of base-recursion we get a hierarchy of primitive recursive functions. Adding 1 to the index is roughly equivalent to going up one up-arrow, or adding another argument in Hyper-E, or going up one primitive recursion.

Now that's pretty impressive by ordinary standards, especially considering how easy it is to obtain this result. It only requires two simple rules:

Let fm(n) be defined when m,n are non-negative integers...

1. If m=0 ; f0(n) = n+1

2. otherwise ... fm(n) = fm-1^n(n)
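The two rules translate almost directly into code. A minimal Python sketch of the whole hierarchy so far (only usable for very small inputs, since the outputs explode):

```python
def f(m, n):
    """fm(n) for non-negative integers m and n."""
    if m == 0:
        return n + 1               # rule 1: the successor
    result = n
    for _ in range(n):             # rule 2: n-fold application of fm-1
        result = f(m - 1, result)
    return result

assert f(0, 7) == 8                # successor
assert f(1, 7) == 14               # doubling
assert f(2, 7) == 7 * 2**7         # 896
assert f(3, 2) == 2048
```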

The notation fα(n) is standard in the professional literature for members of the fast growing hierarchy. However I find the notation to be rather generic and unappealing. I will therefore adopt the following alternative throughout the rest of the articles on FGH:

(n)[α] = fα(n)

(n)^k[α] = fα^k(n)

One nice feature of this notation is that it emphasizes the fact that we can treat FGH as a binary function which takes a natural number and an ordinal as input.

Subsystem FGH_ω

Given the above definitions we can treat FGH up to this point as a binary function, (n)[m], which takes a pair of non-negative integers and returns a single non-negative integer. Alternatively it can be viewed as a series of functions: (n)[0], (n)[1], (n)[2], etc. Collectively this set of functions comprises the FGH_ω subsystem.

FGH will stand for the general system of the fast growing hierarchy. When we want to specify a subsystem of FGH we will index it with the weakest FGH function not contained in the subsystem. For example, to speak of the subsystem only containing (n)[0] we would call this subsystem FGH_1, because (n)[1] is the smallest FGH function not contained in subsystem FGH_1. To speak of the subsystem containing (n)[0] and (n)[1] we would call this subsystem FGH_2.

By this reasoning, no matter what positive integer we use, FGH_# will not contain all of the functions contained in the subsystem containing every function (n)[#] where # is a non-negative integer. For this reason we need a new index which is distinct from every positive integer. For largely incidental reasons of history we can arbitrarily choose the symbol "ω" (the lower case Greek letter omega) to represent the smallest level beyond all the positive integer indexed functions of FGH. Hence why it is called subsystem FGH_ω.

Note that subsystem FGH_ω does not comprise the entirety of FGH. It is in fact only a very tiny part of it! But what could we place in the second argument of (n)[m] other than a non-negative integer? We'll find out in the next article. Before that however let's get a little practice with handling subsystem FGH_ω.

Some worked out examples for FGH_ω

The 0th function of FGH (which we'll denote [0]) is trivial to "compute":

(0)[0] = 1

(1)[0] = 2

(2)[0] = 3

(3)[0] = 4

(4)[0] = 5

etc.

function [1] simply doubles the argument, as we proved previously. However I'll follow the expansion rule to show how it actually works:

(0)[1] = (0)^0[0] = 0

(1)[1] = (1)^1[0] = 2

(2)[1] = (2)^2[0] = (3)^1[0] = 4

(3)[1] = (3)^3[0] = (4)^2[0] = (5)^1[0] = 6

(4)[1] = (4)^4[0] = (5)^3[0] = (6)^2[0] = (7)^1[0] = 8

(5)[1] = 2*5 = 10

(6)[1] = 2*6 = 12

(7)[1] = 2*7 = 14

(8)[1] = 2*8 = 16

(9)[1] = 2*9 = 18

(10)[1] = 2*10 = 20

...

(99)[1] = 2*99 = 198

(100)[1] = 2*100 = 200

etc.

function [2] exhibits roughly exponential growth. Remember that (n)[2] = n*2^n. Again I'll work out the first few examples before simply using the explicit formula:

(0)[2] = (0)^0[1] = 0

(1)[2] = (1)^1[1] = 2*1 = 2

(2)[2] = (2)^2[1] = (2*2)^1[1] = (4)^1[1] = 2*4 = 8

(3)[2] = (3)^3[1] = (2*3)^2[1] = (6)^2[1] = (2*6)^1[1] = (12)^1[1] = 2*12 = 24

(4)[2] = (4)^4[1] = (8)^3[1] = (16)^2[1] = (32)[1] = 64

(5)[2] = (5)^5[1] = (10)^4[1] = (20)^3[1] = (40)^2[1] = (80)[1] = 160

(6)[2] = (6)^6[1] = 6*2^6 = 6*64 = 384

(7)[2] = (7)^7[1] = 7*2^7 = 7*128 = 896

(8)[2] = (8)^8[1] = 8*2^8 = 8*256 = 2048

(9)[2] = (9)^9[1] = 9*2^9 = 9*512 = 4608

(10)[2] = (10)^10[1] = 10*2^10 = 10*1024 = 10,240

(11)[2] = (11)^11[1] = 11*2^11 = 22,528

(12)[2] = (12)^12[1] = 12*2^12 = 49,152

(13)[2] = (13)^13[1] = 13*2^13 = 106,496

(14)[2] = (14)^14[1] = 14*2^14 = 229,376

(15)[2] = (15)^15[1] = 15*2^15 = 491,520

(16)[2] = (16)^16[1] = 16*2^16 = 1,048,576

(17)[2] = (17)^17[1] = 17*2^17 = 2,228,224

(18)[2] = (18)^18[1] = 18*2^18 = 4,718,592

(19)[2] = (19)^19[1] = 19*2^19 = 9,961,472

(20)[2] = (20)^20[1] = 20*2^20 = 20,971,520

...

(30)[2] = 32,212,254,720

...

(50)[2] = 56,294,995,342,131,200

...

(99)[2] = 62,748,704,711,297,355,374,086,808,666,112

(100)[2] = 126,765,060,022,822,940,149,670,320,537,600

etc.
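Expansion chains like the ones above can be generated mechanically. A small Python sketch (it assumes the closed forms n+1 and 2n for functions [0] and [1], as derived earlier):

```python
def trace(n, m):
    # expand (n)[m] = (n)^n[m-1] one application at a time
    closed = {0: lambda x: x + 1, 1: lambda x: 2 * x}
    chain = [f"({n})[{m}]"]
    value, k = n, n
    while k > 0:
        value, k = closed[m - 1](value), k - 1
        chain.append(f"({value})^{k}[{m - 1}]" if k else str(value))
    return " = ".join(chain)

print(trace(3, 1))   # (3)[1] = (4)^2[0] = (5)^1[0] = 6
print(trace(4, 2))   # (4)[2] = (8)^3[1] = (16)^2[1] = (32)^1[1] = 64
```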

function [3] grows roughly tetrationally and has no closed form. At this point we need to iterate function [2] to get the results. Some estimation will be needed here in order to get a rough measure of size:

(0)[3] = (0)^0[2] = 0

(1)[3] = (1)[2] = 1*2^1 = 2

(2)[3] = (2)^2[2] = (2*2^2)[2] = (8)[2] = 8*2^8 = 2048

(3)[3] = (3)^3[2] = (3*2^3)^2[2] = (24)^2[2] = (24*2^24)[2] = (402,653,184)[2] =

402,653,184*2^402,653,184 ~ 10^120,000,000

(4)[3] = (4)^4[2] = (4*2^4)^3[2] = (64)^3[2] = (64*2^64)^2[2] = (2^70)^2[2] = (2^70*2^2^70)[2]

~ (2^2^70)[2] = 2^2^70 * 2^2^2^70 ~ 10^10^10^21

(5)[3] = (5)^5[2] = (5*2^5)^4[2] = (160)^4[2] = (160*2^160)^3[2] ~ (2^167)^3[2] =

(2^167*2^2^167)^2[2] ~ (2^2^167)^2[2] ~ (2^2^2^167)[2] ~ 2^2^2^2^167 ~ 10^10^10^10^50 = E50#4

(6)[3] = (6)^6[2] = (384)^5[2] = (384*2^384)^4[2] ~ (2^392)^4[2] ~ (10^118)^4[2] ~

(10^10^118)^3[2] ~ (10^10^10^118)^2[2] = (E118#3)^2[2] ~ (E118#4)[2] ~ E118#5

(7)[3] = (7)^7[2] = (896)^6[2] = (896*2^896)^5[2] ~ (10^272)^5[2] ~ E272#6

(8)[3] = (8)^8[2] = (2048)^7[2] ~ (2048*2^2048)^6[2] ~ (10^619)^6[2] ~ E619#7

(9)[3] = (9)^9[2] = (4608)^8[2] ~ (10^1390)^7[2] ~ E1390#8

(10)[3] = (10)^10[2] = (10,240)^9[2] ~ (10^3086)^8[2] ~ E3086#9

...

(100)[3] = (100)^100[2] ~ (1.2676xE32)^99[2] ~ E32#100

etc.
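Estimates like (3)[3] ~ E8#2 can actually be checked: the number itself is far too large to compute, but its base-10 logarithm is not. A short Python sketch:

```python
import math

def f2(n):
    return n * 2**n

# (3)[3] = f2(f2(f2(3))). The last step is too big to evaluate directly,
# but log10(x * 2^x) = log10(x) + x*log10(2) is easy.
x = f2(f2(3))                                   # 402,653,184
log10_of_result = math.log10(x) + x * math.log10(2)

print(log10_of_result)           # ~1.2e8, so (3)[3] ~ 10^10^8 ~ E8#2
print(math.log10(log10_of_result))
```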

So it turns out that (100)[3] lies somewhere between a giggol (E1#100) and a grangol (E100#100). In fact it's even larger than a fzgiggol (that's giggol^giggol, approx. E11#100). Note that as the argument of (n)[3] increments, the 2nd argument in E# increments, but the 1st argument grows exponentially. Because of this (n)[3] is actually growing "slightly" faster than tetrational. That means that (n)[3] eventually beats out any tetrational function b^^(n+c). This is because (n)[3] very slowly gains extra height in the stack due to the leading exponent growing exponentially. This means in linear time it gains one more term on the stack, in exponential time it gains two more terms on the stack, in hyper-exponential time it gains three more terms on the stack, etc. The effect of this is that (n)[3] is always gaining ground on b^^(n+c), doing so slower and slower, but always managing to eventually overtake it. Thus (n)[3] represents a growth rate somewhere between tetrational and hyper-tetrational growth. It is still possible to upper-bound (n)[3] by En#n. Thus it can be easily proven that (100)[3] < E100#100 = grangol. This further implies that (100,000)[3] < E100,000#100,000 = grangolgong.

Now let's look at the power of (n)[4]. We don't have a closed form, so we are forced to use iterations of (n)[3], which we also can't compute directly. Therefore we will want to have an approximation of the general value of (n)[3] first.

Since n*2^n < 2^n*2^n = 4^n < 10^n it follows that:

(n)[3] = (n)^n[2] < (10^n)^(n-1)[2] < En#n
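This chain of bounds can be spot-checked for the few values small enough to compute. A Python sketch (E here is a toy Hyper-E evaluator, usable only for tiny arguments):

```python
def E(a, b=1):
    # Ea#b = 10^10^...^10^a with b 10's (only feasible for tiny a, b)
    result = a
    for _ in range(b):
        result = 10**result
    return result

def f2(n):
    return n * 2**n

def f3(n):
    result = n
    for _ in range(n):
        result = f2(result)
    return result

# the inequality n*2^n < 10^n, and the bound f3(n) < En#n where computable
assert all(n * 2**n < 10**n for n in range(1, 100))
assert f3(2) == 2048 and f3(2) < E(2, 2)     # 2048 < 10^100
```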

Now let's estimate some values for (n)[4]:

(0)[4] = (0)^0[3] = 0

(1)[4] = (1)[3] = 2

(2)[4] = (2)^2[3] = (2048)[3] < E2048#2048 < E1#2050 = 10^^2050

(3)[4] = (3)^3[3] ~ (E8#2)^2[3] < (E(E8#2)#(E8#2))[3] ~ (E8#2#2)[3] ~ E8#2#3

(4)[4] = (4)^4[3] ~ (E21#3)^3[3] ~ (E21#3#2)^2[3] ~ (E21#3#3)[3] ~ E21#3#4

(5)[4] = (5)^5[3] ~ (E50#4)^4[3] ~ (E50#4#2)^3[3] ~ E50#4#5

(6)[4] = (6)^6[3] ~ (E118#5)^5[3] ~ E118#5#6

(7)[4] = (7)^7[3] ~ (E272#6)^6[3] ~ E272#6#7

(8)[4] = (8)^8[3] ~ (E619#7)^7[3] ~ E619#7#8

(9)[4] = (9)^9[3] ~ (E1390#8)^8[3] ~ E1390#8#9

(10)[4] = (10)^10[3] ~ (E3086#9)^9[3] ~ E3086#9#10

...

(100)[4] = (100)^100[3] ~ (E32#100)^99[3] ~ E32#100#100

etc.

We can see that (100)[4] is just below a greagol, though greater than a gaggol. We can also see that (n)[4] roughly outputs 10^^^n, just as (n)[3] roughly outputs 10^^n.

Let's make some estimates for (n)[5]:

(0)[5] = (0)^0[4] = 0

(1)[5] = (1)[4] = 2

(2)[5] = (2)^2[4] = (10^^2050)[4] ~ E1#1#(10^^2050) < E1#1#(10^^^3)

= E1#1#(E1#1#3) = E1#1#3#2

(3)[5] = (3)^3[4] ~ (E8#2#3)^2[4] ~ (E8#2#3#2)[4] ~ E8#2#3#3

(4)[5] = (4)^4[4] ~ (E21#3#4)^3[4] ~ E21#3#4#4

(5)[5] = (5)^5[4] ~ (E50#4#5)^4[4] ~ E50#4#5#5

(6)[5] ~ E118#5#6#6

(7)[5] ~ E272#6#7#7

(8)[5] ~ E619#7#8#8

(9)[5] ~ E1390#8#9#9

(10)[5] ~ E3086#9#10#10

...

(100)[5] ~ E32#100#100#100

etc.

So (100)[5] is less than a gigangol but more than a geegol. Hopefully you're beginning to notice a pattern already. It can be seen that roughly speaking:

(n)[2] ~ En

(n)[3] ~ En#n

(n)[4] ~ En#n#n

(n)[5] ~ En#n#n#n

...

(n)[m] ~ En##(m-1) : m>1

We can say that FGH_ω is a system of equivalent strength to the primitive recursive functions. In more precise terms this means that for every primitive recursive function, there is a member of FGH_ω which grows just as fast or faster. Likewise for every member of FGH_ω there is a primitive recursive function which grows just as fast or faster. Thus neither can claim to be a more powerful system than the other and they are therefore "equivalent" in some sense. Other systems equivalent in strength to FGH_ω are Hyper-E Notation, Knuth up-arrow notation, 3-argument BEAF arrays, Steinhaus-Moser polygons, etc.

Conclusion

So we now have a series of unary functions (or a single binary function) with the power of up-arrows. However we already went well past up-arrows in previous articles. Is this all FGH has to offer? Not by a long shot! Let's now look at the first diagonalization...

NEXT>> 4.2.3