
4.2.4

The Fast Growing Hierarchy below Epsilon-naught


FGH up to Epsilon-naught

Next we are going to extend FGH to an important milestone: epsilon-naught. It's important for a few reasons. First, this is the most commonly described interval of FGH. You can do a lot of mathematics even just up to epsilon-naught, so it's sufficient for most purposes. FGH has of course been extended far, far beyond this first major milestone, but epsilon-naught makes a good introduction to the main methodology of FGH. It is also a common benchmark for several fast-growing functions and notations: tetrational arrays are at this level, as is my own E^ (Cascading-E Notation), and Goodstein sequences also grow at about this level. Another reason it's important is that it marks the end of the natural development of ordinals using ordinal addition, multiplication, and exponentiation. It's the first of Cantor's epsilon numbers, marking Cantor's first impasse in his development of ordinal notation. Interestingly, this breakdown also occurs in BEAF: there is no official notation for BEAF past tetrational arrays (other than using the "array of" operator). Lastly, getting up to epsilon-naught, while really powerful, is a relatively easy and instructive milestone. The difficulty begins to grow more rapidly after this point.

In later articles we'll take FGH much further than epsilon-naught (eventually I'm going to develop a whole series of articles leading up to advanced FGH. This will be important for the comparisons with BEAF and other alternative advanced notations).

For now, let's delve right into developing the epsilon-naught set. We begin where we left off, with ω^ω. We can of course continue with ω^ω+1, ω^ω+2, etc. In fact, we can attach ω^ω as a new leading term to all the previously constructed ordinals. So we can take any member of ω^ω, call it "a", and form ω^ω+a. Including every ω^ω+a produces a new set, with supremum 2ω^ω. We can then create indexes 2ω^ω+a, which have supremum 3ω^ω, and so on. So what's the supremum of all bω^ω+a, where "b" is a positive integer and "a" is an element of ω^ω? We can think of this as "ω*ω^ω", which we can write as "ω^(ω+1)". This opens up a whole avenue of possibilities, for we can now let the exponent itself be any ordinal we've already constructed. So we'll have ω^(ω+2), ω^(ω+3), ... and eventually we would reach ω^(2ω), then ω^(2ω+1), ω^(2ω+2), ... eventually we reach ω^(3ω), ω^(4ω), ω^(5ω), etc. ... then ω^ω^2, ω^ω^3, ... ω^ω^ω, ... ω^ω^ω^ω, ω^ω^ω^ω^ω ... and ... that's it! We've reached the limit of this principle. This forms a massive set of ordinals we can call the epsilon-naught set, usually denoted ε0. ε0 is also an ordinal: the smallest ordinal greater than any ordinal formed from "ω" using addition, multiplication, and exponentiation. Cantor was the first to discuss this ordinal, when he introduced ordinal arithmetic. The use of epsilon is probably due to the fact that "epsilon" is analogous to "e" for "exponent".

Although there is some refinement to do, we more or less have an idea of what this set contains. The trickier part is defining the fundamental sequences.

This can, however, be achieved quite easily using a few basic principles. First off, if "a" and "b" are ordinals and their ordinal sum is "a+b", the fundamental sequence of the sum can be defined as:

(a+b)[n] = a+b[n]

This means that only the last "term" in any ordinal expression is actually affected by the fundamental sequence. We saw this earlier. For example:

(ω^2+ω)[n] = ω^2+ω[n] = ω^2+n

We can define the epsilon-naught set so that each ordinal is expressed in the basic form:

a_0*ω^(x_0) + a_1*ω^(x_1) + ... + a_n*ω^(x_n)

where a_0~a_n are positive integers (if they were zero the term would just drop out), and x_0~x_n are ordinals of the ε0 set. This definition is easy to explain and doesn't require many special rules. The downside is that we get many rogue-types. What is a rogue-type? A rogue-type is a non-standard expression for an ordinal. There are in general multiple ways to express an ordinal. For example, ω^2 is also ω*ω. This is fine as long as the various expressions can be mapped to a single canonical form. The other forms are then simply treated as synonyms for the canonical form, and they can be converted into the canonical form for the purposes of mechanical evaluation (the carrying out of the algorithm). Such mappings are not guaranteed, however, and must be properly defined. The other option is simply to disallow synonyms and have only a canonical form for every ordinal in a given set. When professional mathematicians define FGH this is the approach they typically take. The problem with this approach is that it requires additional rules, not included in the evaluation rules, that specify what kinds of forms are legal. This is information I usually include in the header of a function definition.
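
To make this concrete, here is one possible encoding of the basic form, as a minimal Python sketch (my own illustration; the list-of-pairs representation is an assumption, not part of any official definition). Each ordinal is a list of (coefficient, exponent) pairs, with each exponent itself such a list, so canonical forms and rogue-types alike can be written down:

ZERO = []                          # the empty sum encodes 0
ONE  = [(1, ZERO)]                 # 1 = 1*ω^0
W    = [(1, ONE)]                  # ω = 1*ω^1
W2   = [(1, [(2, ZERO)])]          # ω^2 = 1*ω^2 (the canonical form of ω*ω)
ONE_PLUS_W = [(1, ZERO), (1, ONE)] # the rogue-type 1+ω = 1*ω^0 + 1*ω^1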

Traditionally rogue-types are resolved by defining ordinal arithmetic whereby each non-canonical form simply gets converted into a canonical form. For example, 1+ω = ω according to the traditional rules of ordinal arithmetic, whereas ω+1 ≠ ω. In terms of FGH this means you'll never use 1+ω, and there is no f_1+ω.

Here I'll offer an alternative approach which has some interesting theoretical implications. It is not necessary to treat every ordinal expression as simply a synonym for an ordinal along the main sequence. Rather, we can treat 1+ω as distinct from both ω and ω+1. The result is a proliferation of possible indexes, most of which are rogue-types. The advantage is that we can define fundamental sequences whose main sequence functions exactly as if we had excluded all rogue-types, while no longer needing to be as strict in defining a canonical set of indexes. This means a reduction in the amount of information needed to describe the system (an advantage really, as we want to express large numbers efficiently. Remember the second law of googology: elegance over effusion. All salad numbers violate this principle, especially when their descriptions become quite lengthy).

Technically this will mean an expansion of the epsilon-naught set to include many things it doesn't normally contain. These will define unique functions not found in conventional FGH. However, these functions will not exhibit any new growth rates. Although they won't be identical to any conventional FGH functions, they will have equivalent growth rates.

How does this work? Consider the function f_1+ω. According to our definition this is a legal form in our ε0 set, since it can be expressed as 1*ω^0 + 1*ω^1. According to our addition rule for fundamental sequences, we have:

(n)[1+ω] = (n)[(1+ω)[n]] = (n)[1+(ω[n])] = (n)[1+n]

Some interesting things to note here. "1+n" will simply be a number 1 greater than n, and so the function f_1+ω won't grow much faster than f_ω. In fact it has an equivalent growth rate, since f_ω can overtake f_1+ω with a simple advantage of +1. Recall that 1+ω = ω according to the usual rules of ordinal arithmetic. Rogue-types follow this principle implicitly, but rather than f_1+ω = f_ω we get f_1+ω [=] f_ω (here [=] means growth rate equivalence. This is a subject which will be explored in detail in 3.3 in future articles).
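
To spell out that "+1 advantage" (a small added derivation using only the rules above): (n)[1+ω] = (n)[1+n] = f_(n+1)(n), whereas (n)[ω] = (n)[n] = f_n(n). Give f_ω a head start of just one and we get (n+1)[ω] = (n+1)[n+1] = f_(n+1)(n+1) > f_(n+1)(n) = (n)[1+ω]. So f_ω overtakes f_1+ω with a shift of a single unit, which is exactly what growth rate equivalence requires.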

Note how easily the rules for canonical forms carry over to rogue-types. This also makes the fundamental sequences easier to define.

With these concepts in mind I now present my formal definition of FGH_ε0:

def. FGH_ε0

Let (n)[α] be defined whenever n={0,1,2,3,...} and α is a member of the ε0 Set as follows:

1. If α is in the Zero Set; (n)[α] = n+1

2. If α is in the Successor Set; (n)[α] = (n)^n[α-1]

3. Otherwise; (n)[α] = (n)[α[n]]

def. ε0 Set

1. Every non-negative integer is a member of ε0

2. If α is a member of ε0 and n={0,1,2,3,...} then " nω^α " is a member of ε0.

3. If α and β are members of ε0 then " α+β " is in ε0.

def. Zero Set

1. Zero is in the Zero Set.

2. " 0ωα " where α is a member of ε0 is in the Zero Subset.

3. If α and β are members of the Zero Set then " α+β " is in the Zero Set.

def. Successor Set

A member of ε0 is also a member of the Successor Set if and only if it can be expressed in one of the forms:

1. n

2. α+n

3. α+nω^0

where α is a member of ε0 and n={1,2,3,...}.

def. Successor Ordinal -1

If α is a successor ordinal (a member of the Successor Set) then α-1 is defined as follows:

1. If α={1,2,3,4,...} then α-1={0,1,2,3,...}.

2. If α={β+1,β+2,β+3,...} where β is a member of ε0 then α-1={β,β+1,β+2,...}.

3. If α={β+1ω^0,β+2ω^0,β+3ω^0,...} where β is a member of ε0 then α-1={β,β+1,β+2,...}.

Synonyms

ω^0 is a synonym for 1, and ω^1 is a synonym for ω

Def. Fundamental Sequences

Let α+kω^β be a member of ε0, where α and β are members of ε0 and "k" is a positive integer.

1. If β=0 then kω^0 = k, the ordinal is a successor ordinal, rule 2 of FGH applies, and no fundamental sequence is defined.

2. If β is a successor ordinal then:

(α+kω^β)[n] = α+(k-1)ω^β+nω^(β-1)

3. If β is a limit ordinal then:

(α+kω^β)[n] = α+kω^(β[n])
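
Because the definition is completely mechanical, it translates into a short program almost line for line. Here is a minimal Python sketch (my own illustration, using the list-of-pairs encoding from earlier; it drops zero-coefficient terms, so the 0ω^α rogue-types of the Zero Set get silently normalized away, but everything else behaves as defined):

ZERO = []                         # the empty sum encodes 0

def nat(n):                       # the finite ordinal n, i.e. n*ω^0
    return [(n, ZERO)] if n > 0 else ZERO

def is_successor(a):              # trailing term is k*ω^0 with k >= 1
    return bool(a) and a[-1][1] == ZERO

def pred(a):                      # α-1 for a successor ordinal α
    k, e = a[-1]
    return a[:-1] + ([(k - 1, e)] if k > 1 else [])

def fund(a, n):                   # the fundamental sequence α[n]
    head, (k, b) = a[:-1], a[-1]  # only the last term is affected
    if is_successor(b):           # rule 2: (α+kω^β)[n] = α+(k-1)ω^β+nω^(β-1)
        out = head + ([(k - 1, b)] if k > 1 else [])
        return out + ([(n, pred(b))] if n > 0 else [])
    return head + [(k, fund(b, n))]   # rule 3: (α+kω^β)[n] = α+kω^(β[n])

def fgh(n, a):                    # (n)[α]
    if not a:                     # rule 1: (n)[0] = n+1
        return n + 1
    if is_successor(a):           # rule 2: (n)[α] = (n)^n[α-1]
        for _ in range(n):
            n = fgh(n, pred(a))
        return n
    return fgh(n, fund(a, n))     # rule 3: (n)[α] = (n)[α[n]]

Only the very tiniest inputs will ever return, of course, but the symbolic parts (pred and fund) work on any member of the set, which is what makes them useful for the comparisons below.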

That's it! It is now possible to define a whole range of functions within FGH, including many rogue-types. Interestingly, not all rogue-types act all that strangely. For example, f_ω+ω is identical to f_2ω, since according to the rules:

(n)[2ω] = (n)[(2ω)[n]] = (n)[1ω+nω^0] = (n)[ω+n]

yet we also find that:

(n)[ω+ω] = (n)[ω+ω[n]] = (n)[ω+0ω+nω^0] = (n)[ω+n]
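
In the sketch from the definition section (just my illustration), this identity is directly visible: the canonical 2ω and the rogue-type ω+ω expand to the same thing.

ZERO, ONE = [], [(1, [])]         # 0 and 1, as before
TWO_W    = [(2, ONE)]             # the canonical 2ω = 2*ω^1
W_PLUS_W = [(1, ONE), (1, ONE)]   # the rogue-type ω+ω = 1*ω^1 + 1*ω^1

fund(TWO_W, 5)                    # -> [(1, ONE), (5, ZERO)], i.e. ω+5
fund(W_PLUS_W, 5)                 # -> [(1, ONE), (5, ZERO)], i.e. ω+5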

In general we can still use ordinal arithmetic to make comparisons of rogue-type functions to canonical functions. For example we can say:

(n)[ω+ω^2] ~ (n)[ω^2]

As you can see, it is not in the least necessary to consider "ω" an infinite number. Rather, it's an empty container which can be filled with any finite number. It's better understood as a potential infinity than an actual infinity. This is why ω+ω^2 is approximately ω^2 and not exactly ω^2: the result depends on the actual numbers being substituted for "ω".

Some Examples and Comparisons

Already this notation is more than sufficient for most numerical purposes, even in mathematics, excepting only the most esoteric (like approximating TREE(3), for example). Numbers like Graham's Number are already far, far behind, and easily described even within the subsystem FGH_ω^ω.

Let's just consider a few examples of how this notation works. Consider f_ω^ω. How powerful is this function? Let's see:

(0)[ω^ω] = (0)[ω^0] = (0)[1] = 0

(1)[ω^ω] = (1)[ω^ω[1]] = (1)[ω^1] = (1)[ω] = (1)[ω[1]] = (1)[1] = 2

(2)[ω^ω] = (2)[ω^ω[2]] = (2)[ω^2] = (2)[2ω] = (2)[ω+2] = (2)^2[ω+1] = ((2)^2[ω])[ω+1] = (((2)[2])[ω])[ω+1] = ((8)[ω])[ω+1] = ((8)[8])[ω+1] ~ (<2,8,7>)[ω+1] ~ <2,8,<2,8,7>> (larger than G2)

(3)[ω^ω] = (3)[ω^3] = (3)[3ω^2] = (3)[2ω^2+3ω] = (3)[2ω^2+2ω+3] (much larger than anything expressible in Chain arrows)

(4)[ω^ω] = (4)[ω^4] = (4)[4ω^3] (comparable to 6 entry BEAF arrays)

(5)[ω^ω] = (5)[ω^5] = (5)[5ω^4] (comparable to 7 entry BEAF arrays)

(6)[ω^ω] = (6)[ω^6] = (6)[6ω^5] (comparable to 8 entry BEAF arrays)

etc.
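
As a sanity check, the Python sketch from the definition section reproduces the tiny cases above (anything past (1)[ω^ω] is hopeless to actually run):

W   = [(1, [(1, [])])]            # ω
W_W = [(1, W)]                    # ω^ω

assert fgh(0, W_W) == 0           # matches (0)[ω^ω] = 0 above
assert fgh(1, W_W) == 2           # matches (1)[ω^ω] = 2 above
assert fgh(2, [(2, [])]) == 8     # matches the (2)[2] = 8 step above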

These numbers are already tremendous, but we have just begun with FGH_ε0. Even the power of the successor index is vastly beyond this...

(0)[ω^ω+1] = 0

(1)[ω^ω+1] = (1)[ω^ω] = 2

(2)[ω^ω+1] = (2)^2[ω^ω] = ((2)[ω^ω])[ω^ω] = ((2)[ω^2])[ω^ω] >> (G2)[ω^ω] = (G2)[ω^G2]

o_0 ... That's G2, the number <3,3,<3,3,4>>, as the exponent n in ω^n! Imagine an array with G2 entries! That's what we're talking about!

(3)[ω^ω+1] = (3)^3[ω^ω] = ((3)[3ω^2])^2[ω^ω] > ((3)[ω^((3)[3ω^2])])[ω^ω] > (3)[ω^X] where X = (3)[ω^((3)[3ω^2])]

Try to wrap your brain around this. (3)[3ω^2] is a number already inexpressible using Conway's Chain arrow notation. Now imagine ω to the power of (3)[3ω^2]. That defines a massive number, way beyond description. Now raise ω to THAT power! And this is just the power of ω^ω+1! This function is not just a little faster than f_ω^ω. It's feeding the fundamental sequence ω^ω[n] = ω^n back into itself recursively. In a very interesting and circular way, ordinals are being used to define larger numbers, which are then being used to define larger ordinals, to define larger numbers, etc. Remember, every time we take the successor ordinal we are applying primitive recursion to the previous construction. That is already a powerful concept, yet it's just the incremental step in FGH. This is why, if you're given the function (n)[ω^ω], and you respond with (googolplex)[ω^ω], or even (Graham's Number)[ω^ω], you'll still get clobbered by a mere (3)[ω^ω+1]. FGH makes completely obsolete any attempt to use large arguments ... even large arguments which are themselves generated by FGH. For example, ((3)[ω^ω+1])[ω^ω+1] is still smaller than (3)[ω^ω+2]. FGH automates these obvious extensions. In googology, a number formed by taking pre-existing functions and mashing them together with pre-existing numbers is known as a salad number. Examples of this abound on the internet. However, all salad numbers are simply naive extensions. They do not add in the least to the complexity level of the numbers being extended! The key then is not loading the argument of FGH. Even a timid argument like 2 or 3 is enough to transcend some previous class of numbers. The key is rather to devise larger and larger subsets of FGH. Even (n)[ω^ω+ω] is already applying arbitrary primitive recursions to ω^ω. By the time we get to (n)[ω^ω+ω^ω] we have left things like linear arrays far, far behind ... and yet this is still no bigger than around <b,p (1) 3>! And by the 1st law of googology: the madness only gets worse. (n)[ω^(ω+1)] gives us ω^ω recursion stacked onto itself repeatedly. Example:

(3)[ω^(ω+1)] = (3)[ω^ω+ω^ω+ω^ω]
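
The sketch agrees (illustration only): expanding ω^(ω+1) at 3 peels the successor exponent into three copies of ω^ω.

ZERO, ONE = [], [(1, [])]                     # 0 and 1, as before
W_TO_W_PLUS_1 = [(1, [(1, ONE), (1, ZERO)])]  # ω^(ω+1): the exponent is ω+1
fund(W_TO_W_PLUS_1, 3)                        # -> [(3, [(1, ONE)])], i.e. 3ω^ω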

So ω^(ω+1) marks a new kind of recursive level. This itself can be stacked onto itself in the same way, so that we get ... ω^(ω+2) ... then again to get ... ω^(ω+3) ... ω^(ω+4) ... ω^(ω+5) ... Beyond all this lies ω^(2ω). This gets us up to ... just two rows in BEAF! Likewise ω^(3ω) gets us up to 3 rows, ω^(4ω) to 4 rows, etc. Beyond all this is ω^ω^2, where we reach planar arrays. Cubic arrays are ω^ω^3, quartic arrays are ω^ω^4, quintic arrays are ω^ω^5 ... and beyond all this are dimensional arrays of the order ω^ω^ω. We can go on to get superdimensional arrays, trimensional arrays, etc., and this is equivalent to growing the exponential stack of ω's. Finally, beyond everything in the ε0 set, we reach ε0 itself, which is of the same order as tetrational arrays.

For ExE the comparisons run in a similarly well-ordered fashion. Each time I introduce a new kind of delimiter, the power of ExE increases by a single power of ω. For example, E# which only uses "#" has order type ω. xE# restricted to the # and ## delimiters has order type ω^2, the same as Conway Chain arrows. xE# with the full delimiter set {#,##,###,####,...} has order type ω^ω, the same as linear BEAF arrays. Let ExE_$, where $ is a delimiter, be the subsystem of ExE containing only delimiters below delimiter $. Thus we can say ExE_#^# has order type ω^ω, ExE_#^#^# has order type ω^ω^ω, ExE_#^#^#^# has order type ω^ω^ω^ω, etc., and ExE_#^^# has order type ε0. Basically we find that, despite very different architectures, all googological functions essentially develop the same fundamental hierarchy. Different algorithms may be used, and different numbers may result, but they do not significantly outclass each other. The reason for this is that they all essentially diagonalise over some base function, combined with some variant of primitive recursion.

For BEAF and ExE, the fundamental recursion allows for specifying a fixed base value which is then iterated a specified number of times. In BEAF this fundamental recursion is accomplished by <b,p,...>, whereas in ExE it's accomplished by @m$n. For FGH the fundamental recursion is carried out by "n" alone, which is substituted as both the base value and the iteration value. At first it seems this would lead to a "faster rate of growth". In a fixed sense it does, but this notion doesn't prove that useful in googology. It's better to think of functions as having the ability to slide along their sequence. One sequence is only truly faster than another if, no matter how much you slide the slower function ahead of the faster one, the faster one eventually catches up and then way surpasses the slower one, usually very quickly and in spectacular fashion. It turns out that the recursion FGH is built on gives it a very slight advantage over a function built on fixed-base recursions at the same level of recursion, but if the fixed-base recursions are even one order higher, they beat FGH. So it turns out it makes little difference. The only thing that really weighs in on the size of the number is the amount of recursion you've packed into it. That is where the real power lies ... but also where the real challenge lies ... just how much recursion can you describe? For the googologist there's only one proper way to go about answering that question ... to try...

Conclusion

FGH is already on par with E^. Is this the end of FGH? No. Although this is the most commonly described subsystem of FGH, FGH can be, and has been, extended much further. The next major milestone is a mysterious ordinal known as gamma-naught. This ordinal, like epsilon-naught, is an important waystation. It also serves as one of the weak lower bounds on the TREE(k) function. To get to it, however, we'll have to go into Veblen functions. These are special ordinal functions which can be used to systematically continue where Cantor left off. Before we get to that, however, it might be instructive to first develop a smaller subsystem which uses Cantor's theory of epsilon numbers ...
