conway

3.2.6

John Horton Conway's Chained Arrow Notation

Introduction

John Horton Conway is a world-renowned mathematician whose fields of interest include group theory, knot theory, number theory, and game theory, among others. He is also known for dabbling in recreational mathematics, having created the Game of Life, the surreal numbers, the doomsday algorithm, and Philosopher's Football. He has written, in collaboration with a number of other notable mathematicians, many popular books on amateur mathematics. Born in 1937, Conway still lives and works at Princeton University.

I had the honor of meeting him at a small event at my college during the spring 2012 semester. He gave a casual lecture on the surreal numbers and their connection to game theory. Having met him in person and even spoken with him briefly, the thing that struck me about Conway is that he straddles the border between professional mathematics and the more recreational kind that often attracts the laymen. During the lecture Conway admitted that he considers the surreal numbers to be his greatest discovery, and also his greatest disappointment, because no use had been found for them in professional mathematics. That suggests that Conway is interested in more than just the subjects deemed "interesting" in professional circles. In fact, Conway had said that anything that exhibited interesting behavior was worthy of mathematical investigation.

Perhaps it should come as no surprise then that John Conway is a kind of unintended hero in the Large Number Community, and the inventor of the popular Chained Arrow Notation.

Although not considered any official part of mathematics, it seems reasonable to classify the study of large numbers, first dubbed googology by Andre Joyce and since then generally adopted by a small community of internet enthusiasts, as a branch of recreational mathematics. Recreational mathematics is itself a nebulous classification, usually reserved as an umbrella term for various mathematical games and puzzles. Although the study of large numbers is neither strictly a mathematical game nor a puzzle, it has the recreational spirit of studying mathematical objects for entertainment rather than for some serious secondary purpose.

Conway's Book of Numbers could be thought of as a whole book of this kind of mathematics. It's not so much filled with games and puzzles as with numbers for their own sake. The book was published in 1996, and it is here that Conway first introduced his extended Latin -illion series (discussed here in Chapter 2.4), and his Chained Arrow Notation. The Chained Arrow Notation is introduced early in the book within a mere side topic aptly titled "Some Very Large Numbers", found on pages 59 to 62. In the space of only 3 pages Conway covers Knuth arrows, Ackermann Numbers, Chain arrows, and Graham's Number.

The space devoted to large numbers in a book devoted to largely nothing but numbers gives, I think, a pretty good indication of how much attention the subject gets in professional circles. It really is nothing more than a diversion to most. Yet Conway's little excursion is actually generous in comparison. You're not likely to find a more earnest tour of large numbers for their own sake by a professional mathematician. I once described Knuth Up-arrow notation to my college instructor, and his first question was "So why are you interested in that?". I said that I was interested in finding the most compact way of expressing numbers of this size, to which he responded "ah" as if he now understood. But the truth is that I find the sheer size of the numbers fascinating in itself, aside from any theoretical questions arising from them, and aside from all the beauty of the mathematical machinery required to make it possible, although all that has appeal for me as well. For some ardent mathematical minds, however, it would seem "so what" is the only rational response to largeness for its own sake. For these mathematicians, "real mathematics" is about depth of analysis, not breadth of imagination. But I don't think it's necessary to purge mathematics of our humanness and make it "hyper-rational". I'm quite sure that the things that first sparked many a would-be mathematician and scientist were precisely those awe-inspiring moments of clarity, and this is achieved all the more poignantly through our human imagination, which adds a sense of grandeur to mere rational inquiry.

Conway's small text on Large Numbers is not filled with the wry humor and ecstatic descriptions of Jonathan Bowers, nor does it have the scholarly tone of Robert Munafo, but there is a sense of fun and lightness to it. The passage discussing "Chained Arrow" notation takes up less than a page. There are not even any worked out examples, just a definition. Nonetheless, Conway's notation quickly caught on and is now part of the mythos of large numbers along with Edward Kasner's googol and googolplex, among many others.

In addition to introducing Chained Arrows, Conway also introduces a sequence based on them where the nth member of the sequence is a chain of n n's, the first few members of the sequence being:

1

2-->2

3-->3-->3

4-->4-->4-->4

etc.

Conway goes on to say that the first 3 numbers of this sequence correspond to the so-called "Ackermann Numbers", but the 4th is much much larger than the 4th Ackermann number, and is in fact much much larger than Graham's Number. We will consider this sequence and how rapidly it grows as we work through examples.

Throughout the remainder of this article we will examine the original text, come up with a suitable formulation of Chained Arrows for our purposes, and then examine just how quickly these numbers grow. Finally I provide an assessment of their significance in googology and how potent the notation is in the greater scheme of things.

Conway's Chained Arrows : A look at the original Text

When first introducing something it seems sensible to me to go back to the source. However, the source is not always the most concise or the clearest. Since The Book of Numbers is a popular math book, and not a textbook, the definition provided for Chain arrows is, not surprisingly, informal. But for the sake of authenticity let's go back to the original source material.

The sub-section "Some Very Large Numbers" begins with the somewhat misleading justification:

" mathematicians sometimes need even larger numbers and want some way of writing them."

-- J.H. Conway, The Book of Numbers

I say misleading because most of the time mathematicians use numbers no bigger than those used by the sciences and on the rare occasion when they do use such fantastically large numbers it is usually highly theoretical.

After this opening, Conway goes over the basics of Knuth Up-arrow notation. We needn't reiterate it here (you can check out my Up Arrows article for more information). He goes on to say that while Knuth Up-arrows were first devised in 1976, Wilhelm Ackermann had essentially discovered the same series of operators as far back as 1928. He then calls the following sequence the "Ackermann Numbers":

1^1

2^^2

3^^^3

4^^^^4

etc.

The first Ackermann number is 1, the second is 4, and the third is a power tower of 3's 7,625,597,484,987 terms high. Conway states that the fourth Ackermann number is so large that it is virtually impossible to comprehend its immensity. He then goes through a lengthy description breaking the fourth Ackermann number down into tetra-towers expressing tetra-towers, just to make a point.
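Though the fourth Ackermann number is hopelessly out of reach, the smaller values can be checked by machine. Here is a quick sketch in Python (the function name arrow and its argument order are my own choices, not Conway's or Knuth's):

```python
def arrow(a, n, b):
    """Knuth's a^^...^^b with n up-arrows."""
    if n == 1:                  # a single arrow is ordinary exponentiation
        return a ** b
    if b == 1:                  # a^^...^^1 = a for any number of arrows
        return a
    # peel off one arrow: a^(n)b = a^(n-1)( a^(n)(b-1) )
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(1, 1, 1))   # 1st Ackermann number: 1^1 = 1
print(arrow(2, 2, 2))   # 2nd Ackermann number: 2^^2 = 4
print(arrow(3, 2, 3))   # 3^^3 = 7625597484987, the height of the tower 3^^^3
```

Anything past these inputs blows up immediately, which is rather the point.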

It is here that he introduces "Chained Arrow" notation:

"Our own 'chained arrow' notation names some even larger numbers. In this, a^^..^^b (with c arrows) is called a-->b-->c."

-- J.H. Conway, The Book of Numbers

One thing worth noting is that he refers to it as "our notation", not "my notation". The Book of Numbers was written in collaboration with Richard K. Guy. Whether or not Guy played any part in developing Chained arrow notation is unclear. It could merely be a literary device since the book was written in collaboration, or it may actually refer to them devising the notation together. It doesn't really provide much insight into when it was actually created. Is it something Conway and Guy came up with in their spare time, or did they devise it specifically for the book? Who knows. Unfortunately I didn't get to ask the man who could easily have answered these questions, Conway himself.

The main motivation for the introduction of Chain Arrow notation seems to be shameless one-upmanship, a hallmark of any seasoned googologist, as if to say "if you thought Knuth's notation was impressive, we've got one better".

Chained arrow notation begins simply as an alternative shorthand for the Knuth up-arrow notation, essentially turning it into a 3-argument function. Note that the order of the arguments is "base", "polyponent", "operator". Other notations, such as the popular 3-argument hyper-operator, have eschewed this order, but I believe the order seen here is actually the most natural, since it's in ascending order of strength (It's also the order Jonathan Bower's chose for his array notation, which we'll learn about in Section IV).

In descriptions of Chained arrow notation it's common to mention that if there are more than 3 arguments, a set of special rules applies. The original text however is not nearly so descriptive. It merely assumes that the reader understands that Chained arrows is an arbitrary-argument function. The very next line states:

" a-->b--> ... --> x --> y --> 1 is another name for a-->b--> ... --> x --> y " -- J.H. Conway

The implication being simultaneously that you can have any number of arguments in the chain, and that if the last argument is a 1 then you can simply drop it. Simple enough. The next instruction however is not quite so simple:

" and a ... x-->y-->(z+1) is defined to be

a ... x if y =1,

a... x -->(a...x)-->z if y =2,

a ... x-->(a...x-->(a...x)-->z)-->z if y = 3,

and so on. The parentheses here may be rubbed out after the numbers inside them have been completely evaluated. " -- J.H. Conway

As a rule I try to avoid these "by-example" definitions, but being that this is a book for laymen it's not surprising. In any case to devise a concise and formal definition of Conway's Chained Arrow notation requires some interpretation. We aren't told for example what to make of something like a-->b. The accepted notion is that a-->b = a^b, assuming backwards compatibility for the rules so that a-->b = a-->b-->1 = a^b with a single up-arrow.

Most standard explanations of Chained arrows begin by explicitly defining chains of 1,2 or 3 arguments:

Let a = a

Let a-->b = a^b

Let a-->b-->c = a^^...^^b (w/c ^s)

As it so happens however, 3-argument chains can be described as an extension of 2-argument chains by following the special rules. This is not a coincidence, as I'll explain later. The greatest difficulty lies in interpreting the last rule given. We can however define this recursively.

Firstly consider that when y=1, this simply means that the second to last argument = 1. We are instructed that:

a-->b--> ... -->x-->1-->(z+1) = a-->b--> ... -->x

In other words, if the 2nd to last argument = 1, drop the last two arguments regardless of the value of z.

If y = 2 we get the seemingly complicated result that:

a-->b--> ... -->x-->2-->(z+1)

= a-->b--> ... -->x-->(a-->b--> ... --> x)-->z

Notice that the chain within the parentheses is really just a-->b--> ... -->x-->1-->(z+1), which is exactly like the original expression except that y has been decremented by 1. The last argument has also been decremented by 1.

Moving on, if y = 3 we get:

a-->b--> ... -->x-->3-->(z+1)

= a-->b--> ... -->x-->(a-->b--> ... --> x-->(a-->b--> ... -->x)-->z)-->z

Now notice that the chain inside the outermost parentheses is identical to the expanded form of the y=2 case. So we can see that what's happening is that the 2nd to last argument is being replaced with the entire chain with its 2nd to last argument decremented by one, and the last argument is simply being decremented by 1. We can now generalize all the examples into two simple rules:

if y=1 then a-->b--> ... -->x-->1-->z = a-->b--> ... -->x

if y>1 then a-->b--> ... -->x-->y-->z = a-->b--> ... -->x-->(a-->b--> ... -->x-->(y-1)-->z)-->(z-1)

Notice my use of -1s instead of +1s in the initial expression. This is merely a personal preference on my part, but it does not affect the definition.

These two rules on their own are not enough to properly define chain arrows. We must also include rules for the shortest chains:

a = a

a-->b = a^b

Lastly we must take into account Conway's rule of dropping the last 1. Together this amounts to 5 rules necessary to define Conway's Chained Arrows. There is a certain degree of variation in how Chained Arrows have been defined, although the definitions are all equivalent. Let's write out the rules according to our 5 rule definition:

Def. Conway's Chained Arrow Notation

Let C = a1-->a2--> ... --> a(n-2) --> a(n-1) --> an where a1 through an are positive integers and n is a positive integer:

1. if n=1; C = a1

2. if n=2; C = a1^a2

3. if an=1; a1-->a2--> ... --> a(n-2) --> a(n-1) --> 1 = a1-->a2--> ... --> a(n-2) --> a(n-1)

4. if a(n-1)=1; a1-->a2--> ... --> a(n-2) --> 1 --> an = a1-->a2--> ... --> a(n-2)

5. a1-->a2--> ... --> a(n-2) --> a(n-1) --> an =

a1-->a2--> ... --> a(n-2) -->(a1-->a2--> ... --> a(n-2) --> (a(n-1) -1) --> an) --> (an -1)

These 5 rules formalize Conway's system. Notice that I make no use of Knuth up-arrow notation in the definition. Generally speaking, the base case of a recursive function should be as simple as possible, and preferably something that is a common operation (such as exponentiation or factorials). This way the reader does not have to consult another notation just to understand this one.
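The 5 rules translate directly into a short recursive program. Below is a sketch in Python (my own, not from the book); note that only the tiniest chains can actually be evaluated, since anything larger overwhelms any computer:

```python
def chain(c):
    """Evaluate a Conway chain, given as a list of positive integers."""
    if len(c) == 1:             # Rule 1
        return c[0]
    if len(c) == 2:             # Rule 2: a-->b = a^b
        return c[0] ** c[1]
    if c[-1] == 1:              # Rule 3: drop a trailing 1
        return chain(c[:-1])
    if c[-2] == 1:              # Rule 4: drop the last two arguments
        return chain(c[:-2])
    # Rule 5: replace the 2nd to last argument by the whole chain with
    # that argument decremented, and decrement the last argument
    inner = chain(c[:-2] + [c[-2] - 1, c[-1]])
    return chain(c[:-2] + [inner, c[-1] - 1])

print(chain([3, 3, 2]))     # 7625597484987, i.e. 3^^3
```

The worked example that follows traces these same steps by hand.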

We needn't consider cases of 1 or 2 arguments as these will produce the by now familiar exponential numbers, but the first thing we should do is check if these rules do indeed produce the hyper-operators with 3-argument chains. We'll begin with an example first, and then we'll prove the general case.

For starters let's try 3-->3-->2. First we check Rule 1. Since n>1 it cannot apply. Next we check Rule 2, but since n>2 it also cannot apply. Rule 3: is the last argument = 1? No. So we move on to Rule 4: is the 2nd to last argument equal to 1? No. So Rule 5 must apply. Although it might be somewhat tricky to understand what's happening in Rule 5, all it's basically doing is decrementing the last argument, and replacing the 2nd to last with the entire chain with its own 2nd to last argument decremented. These decrements are necessary so that the function eventually terminates.

So applying Rule 5 we have:

3-->3-->2 = 3-->(3-->2-->2)-->1

Notice that only the last two arguments are being modified. Everything else in the chain remains the same. We next evaluate the inner chain 3-->2-->2. Again Rules 1 through 4 cannot apply because there are more than 2 arguments and none of them = 1. So applying Rule 5 a second time we obtain:

3-->(3-->2-->2)-->1 = 3-->(3-->(3-->1-->2)-->1)-->1

Next we evaluate the innermost chain, 3-->1-->2. Again Rules 1,2 and 3 don't apply, but 4 does because the 2nd to last argument = 1. We are told to simply drop the 1 and every argument after it:

3-->(3-->(3-->1-->2)-->1)-->1 = 3-->(3-->(3)-->1)-->1

With the innermost chain resolved we now move up one level to the expression 3-->3-->1. Checking the rules we find that the first that applies is Rule 3 which tells us to drop the 1:

3-->(3-->3-->1)-->1 = 3-->(3-->3)-->1

We aren't done with the chain however. We now have a 2-argument chain, so rule 2 applies:

3-->(3-->3)-->1 = 3-->(3^3)-->1 = 3-->(27)-->1

Finally we return to the outermost level with 3-->27-->1. First Rule 3 applies and we drop the 1: 3-->27. Then Rule 2 applies: 3^27, and then we can finally evaluate: 3^27 = 7,625,597,484,987.

Now let's compare that to the expected value:

3-->3-->2 = 3^^3 = 3^3^3 = 3^27 = 7,625,597,484,987

So the value came out as expected. While this is encouraging, this in itself doesn't prove that our definition is equivalent. To do that we need to use mathematical induction. Firstly observe that according to our rules:

a-->b-->1 = a-->b = a^b

This agrees with Conway's definition. Now say for argument's sake that:

there exists a k such that a-->b-->k = a^^...^^b (w/k ^s),

does this imply that a-->b-->(k+1) = a^^...^^^b (w/k+1 ^s)?

Begin with a-->1-->(k+1). According to our rules we would say a-->1-->(k+1) = a. Note that a^^..^^1 = a regardless of the number of up-arrows, so this would also correspond to Conway's definition.

Now again for argument's sake assume there exists a j such that a-->j-->(k+1) = a^^...^^j (w/k+1 ^s). We now prove that this implies a-->(j+1)-->(k+1) = a^^...^^(j+1) (w/k+1 ^s):

a-->(j+1)-->(k+1) = a-->(a-->j-->(k+1) )-->k [via Rule 5]

a-->(a-->j-->(k+1) )-->k = a-->( a^^...^^j (w/k+1 ^s) )-->k [Inner inductive hypothesis]

a-->( a^^...^^j (w/k+1 ^s) )-->k = a^^..(k up-arrows)..^^( a^^..(k+1 up-arrows)..^^j) [Outer inductive hypothesis]

a^^..(k up-arrows)..^^( a^^..(k+1 up-arrows)..^^j) = a^^..(k+1 up-arrows)..^^(j+1) [Definition of Up-arrows]

Thus by Double induction we've shown that our rules are equivalent to Conway's definition for 3-argument chains. This also proves that our definition is equivalent in general since all larger chains are built up from smaller chains using the same rules.
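The equivalence just proven can also be spot-checked numerically for the few cases small enough to compute. This sketch (helper names mine) compares the 5-rule evaluator against a direct up-arrow implementation:

```python
def arrow(a, n, b):
    """Knuth's a^^...^^b with n up-arrows."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return arrow(a, n - 1, arrow(a, n, b - 1))

def chain(c):
    """Conway chain evaluator implementing the 5 rules."""
    if len(c) == 1:
        return c[0]
    if len(c) == 2:
        return c[0] ** c[1]
    if c[-1] == 1:
        return chain(c[:-1])
    if c[-2] == 1:
        return chain(c[:-2])
    inner = chain(c[:-2] + [c[-2] - 1, c[-1]])
    return chain(c[:-2] + [inner, c[-1] - 1])

# a-->b-->c should equal a^^...^^b with c arrows
for a, b, c in [(2, 2, 2), (2, 3, 2), (2, 2, 3), (3, 3, 2), (2, 4, 1)]:
    assert chain([a, b, c]) == arrow(a, c, b)
print("3-argument chains agree with Knuth's up-arrows")
```

Of course a handful of cases is no substitute for the induction, but it is reassuring that the two definitions mesh where we can see them.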

There is one more issue that should be addressed before we go on with the example chains. When I originally learned about chain arrows, I was told that a chain can always be truncated after a 1. Note however that Conway never explicitly states this in his definition. Can we prove that this must be so?

To do this begin by imagining a Chain with a 1 somewhere in the middle which is neither the last nor 2nd to last argument:

a-->b--> ... -->j-->1-->k--> ... -->x-->y-->z

Note that this expression would take a very long time to terminate. The last argument "z" would first have to be reduced to 1 before it could be removed. By this time the 2nd to last argument "y" would have swelled to enormous size. This would then become the last argument and would begin to decrement towards 1 while "x" would swell to an even more staggering size. This would continue until at last the "1" between j and k was itself the 2nd to last argument. At this point the rest of the chain would get truncated and a through j would remain unaffected. So alternatively we could replace rules 3 and 4 with a single rule that says to drop all arguments past a 1.

Lastly note that if a1=1 then the entire chain must equal 1, since ultimately it must result in 1^N, and no matter how large N is, 1^N=1.
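Both truncation facts can be confirmed with a chain evaluator built on the 5 rules (a sketch of my own, reproduced here so the snippet stands alone), at least for chains tiny enough to terminate quickly:

```python
def chain(c):
    """Conway chain evaluator implementing the 5 rules."""
    if len(c) == 1:
        return c[0]
    if len(c) == 2:
        return c[0] ** c[1]
    if c[-1] == 1:
        return chain(c[:-1])
    if c[-2] == 1:
        return chain(c[:-2])
    inner = chain(c[:-2] + [c[-2] - 1, c[-1]])
    return chain(c[:-2] + [inner, c[-1] - 1])

# A 1 in the middle eventually truncates everything after it:
assert chain([2, 3, 1, 2, 2]) == chain([2, 3]) == 8
# A leading 1 collapses the whole chain to 1:
assert chain([1, 5, 5]) == 1
print("truncation rules confirmed")
```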

4-argument Chain Arrow Expressions

Now that we have a working definition for Conway's chained arrows, let's begin exploring the kinds of numbers they can generate. One of the main reasons the notation is popular is because it goes further than other commonly known large number notations like Knuth Up-arrows and Steinhaus polygons. It even makes it easy to define numbers larger than Graham's Number!

We needn't go through examples involving 3-argument chains. These simply produce the hyper-operator class numbers (referred to as trientricals by Jonathan Bowers). We already saw numbers of that caliber in the Up-Arrow article. Where Chain arrows begin to show their mettle is with 4 arguments. So we begin with the smallest non-trivial 4-argument chain. This chain can't contain any 1's since this would result in a shorter chain, or a result of 1. So what about:

2-->2-->2-->2

Evaluating this using the rules established we get:

2-->2-->2-->2

= 2-->2-->(2-->2-->1-->2)-->1

= 2-->2-->(2-->2) = 2-->2-->(2^2) = 2-->2-->4 = 2^^^^2 = 2^^^2 = 2^^2 = 2^2 = 4
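This collapse to 4 is easy to verify for small arrow counts with a little up-arrow helper (a sketch; the function is mine, not from the book):

```python
def arrow(a, n, b):
    """Knuth's a^^...^^b with n up-arrows."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return arrow(a, n - 1, arrow(a, n, b - 1))

# 2^^...^^2 = 4 no matter how many arrows there are:
# each extra arrow just peels away, since 2^(n)2 = 2^(n-1)(2^(n)1) = 2^(n-1)2
for c in range(1, 10):
    assert arrow(2, c, 2) == 4
print("2^(c)2 = 4 for c = 1..9")
```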

You may recall that 2^^..(c up-arrows)..^^2 = 4 regardless of what the value of c is. Thus in order to get a non-trivial value, at least one of the first two arguments must be greater than 2. So let's try:

2-->3-->2-->2

= 2-->3-->(2-->3-->1-->2)-->1

= 2-->3-->(2-->3) = 2-->3-->(2^3) = 2-->3-->8 = 2^^^^^^^^3 = 2^^^^^^^2^^^^^^^2 =

2^^^^^^^4 = 2^^^^^^2^^^^^^2^^^^^^2 = 2^^^^^^2^^^^^^4 = ... =

2^^^^^^2^^^^^2^^^^2^^^2^^65,536

This number is already ridiculously large if you'll recall the up-arrow article. It's already far too large to describe as a power-tower, tetra-tower, penta-tower etc. Let's try another example:

3-->2-->2-->2

= 3-->2-->(3-->2-->1-->2)-->1 = 3-->2-->(3-->2) = 3-->2-->9 = 3^^^^^^^^^2 = 3^^^^^^^^3 =

3-->3-->8

This example ends up being just "a little" bigger than our previous one since 2-->3-->8 < 3-->3-->8.

Examples commonly set the first two arguments to 3 (to match Graham's Number perhaps). Here is a typical example:

3-->3-->2-->2 = 3-->3-->(3-->3) = 3-->3-->27 =

3^^^^^^^^^^^^^^^^^^^^^^^^^^^3

This number already goes beyond the progression of operations tetration, pentation, hexation, heptation, octation, ... we'd have to go all the way up to icosi-ennation. This is also a very tame example. Let's look at some further examples where the first two arguments are larger:

4-->4-->2-->2 = 4-->4-->(4-->4) = 4-->4-->(4^4) = 4-->4-->256 (yikes!)

5-->5-->2-->2 = 5-->5-->(5-->5) = 5-->5-->(5^5) = 5-->5-->3125 (yikes!!)

6-->6-->2-->2 = 6-->6-->(6-->6) = 6-->6-->(6^6) = 6-->6-->46,656 (Super Yikes!!!)

Now just try to remember how much increasing by one operator level increased the size of the numbers. An entire article was spent just going up to octation. Now imagine going up hundreds, thousands, or even tens of thousands of operation levels! It was a struggle just to imagine something like 3^^^^3, but now we're talking about numbers like 6^^^^^^^^^^^^^^^... ... ... ...^^^^^^^^^^^^6 (w/46,656 ^s). It's impossible for us to appreciate what that means in terms of size. And guess what ... I've only been toying around. Just wait and see what happens when I change the 2nd to last argument to 3 ...

3-->3-->3-->2 =

3-->3-->(3-->3-->2-->2)-->1 =

3-->3-->(3-->3-->(3-->3)) =

3-->3-->(3-->3-->27) =

3^^^...^^^3 (w/3^^^...^^^3 ^s, where the inner expression itself has 27 ^s)

?!?! There's now 3-->3-->27 up-arrows. If we can't even comprehend a number like 3-->3-->27 we might as well give up understanding something like 3-->3-->3-->2. But chain arrows make it easy to take these ideas even further. So clearly an even bigger number would be to have 3-->3-->3-->2 up-arrows. We can achieve repeated substitution for the number of up-arrows by simply increasing the 2nd to last argument in the chain:

3-->3-->4-->2 = 3-->3-->(3-->3-->3-->2)

3-->3-->5-->2 = 3-->3-->(3-->3-->4-->2)

3-->3-->6-->2 = 3-->3-->(3-->3-->5-->2)

etc.
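This substitution pattern can be made mechanical. Here is a small sketch (names mine) that performs one application of the general rule symbolically, using nested lists for parenthesized sub-chains:

```python
def show(c):
    """Render a chain (ints and nested sub-chains) in arrow notation."""
    return "-->".join(
        str(x) if isinstance(x, int) else "(" + show(x) + ")" for x in c
    )

def step(c):
    """One symbolic application of the general rule: decrement the last
    argument and nest the whole chain, with its 2nd to last argument
    decremented, in place of the 2nd to last argument."""
    return c[:-2] + [c[:-2] + [c[-2] - 1, c[-1]], c[-1] - 1]

print(show(step([3, 3, 4, 2])))   # 3-->3-->(3-->3-->3-->2)-->1
```

The trailing -->1 can then be dropped, giving 3-->3-->(3-->3-->3-->2) just as in the list above.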

If we let f(n) = 3-->3-->n-->2, then f(n) has asymptotic growth rate equivalent to G(n), Graham's function. We can also bound Graham's Number using f(n). Note that:

3-->3-->1-->2 = 3-->3 = 3^3 < 3^^^^3 = G(1) = 3-->3-->4 < 3-->3-->27 = 3-->3-->2-->2

So...

3-->3-->1-->2 < G(1) < 3-->3-->2-->2

Furthermore...

If for some k : 3-->3-->k-->2 < G(k) < 3-->3-->(k+1)-->2

Then...

G(k+1) = 3-->3-->G(k) < 3-->3-->(3-->3-->(k+1)-->2) = 3-->3-->(k+2)-->2

and

3-->3-->(k+1)-->2 = 3-->3-->(3-->3-->k-->2) < 3-->3-->G(k) = G(k+1)

Therefore:

3-->3-->n-->2 < G(n) < 3-->3-->(n+1)-->2

So we conclude that G(64), Graham's Number, is:

3-->3-->64-->2 < Graham's Number < 3-->3-->65-->2

So chain arrows already allow us to very easily express numbers in the range of Graham's Number. You might be tempted to jump to something like 3-->3-->100-->2 or 3-->3-->10^100-->2. This would bring you beyond Graham's Number, but this would barely scratch the surface of what a 4-argument chain ending in a 3 could achieve. Take the following examples:

3-->3-->2-->3 = 3-->3-->(3-->3)-->2 = 3-->3-->27-->2

3-->3-->3-->3 = 3-->3-->(3-->3-->2-->3)-->2 = 3-->3-->(3-->3-->27-->2)-->2 = ?!??!?!?!?!?!?!?!

3-->3-->3-->3 is a number Jonathan Bowers once referred to as "Conway's three-three-three-three". For brevity we can call it "Conway's tetratri". It is often given as an example of a number even greater than Graham's Number. To get a rough idea of its size relative to Graham's Number, let's return to its definition.

Recall:

3-->3-->n-->2 << G(n) << 3-->3-->(n+1)-->2

Thus we have:

G(26) << 3-->3-->27-->2 = 3-->3-->2-->3

and

G(G(26)) << 3-->3-->(G(26)+1)-->2

To simplify this expression we observe that:

G(1)+1 < G(1)+G(1) = 2*G(1) < 3*(3^^^^3) < 3^(1+3^3^...^3) < 3^3^(1+3^3^...^3) < ... <

3^3^...^3^4 < 3^3^...^3^27 = 3^^(1+3^^3^^...^^3) < ... < 3^^3^^...^^3^^4 <

3^^3^^...^^3^^7,625,597,484,987 = 3^^^(1+3^^^3) < 3^^^3^^^4 < 3^^^3^^^3^^^3 = 3^^^^4

< 3^^^^3^^^^3 = 3^^^^^3 = 3-->3-->5 < 3-->3-->27 = 3-->3-->2-->2

This technique is what I refer to as the "ascending one". Basically we can use it to show that:

1+a-->b-->c < a-->(b+1)-->c : a>1

I won't attempt to prove this here, but the ascending one technique will be examined more fully in Chapter 3.3. In any case we can go on to say that:

If for some k:

G(k)+1 < 3-->3-->(k+1)-->2

then

G(k+1)+1 = 1+3-->3-->G(k)

< 3-->4-->G(k) [Ascending One]

< 3-->(3-->3-->G(k))-->G(k) [4 < 3-->3-->n for any n]

= 3-->3-->(G(k)+1) [Definition of Chain Arrows]

< 3-->3-->(3-->3-->(k+1)-->2) [Inductive hypothesis]

= 3-->3-->(k+2)-->2 [Definition of Chain Arrows]

Therefore:

G(n)+1 < 3-->3-->(n+1)-->2

for all n

Returning to our initial problem we have:

G(G(26)) << 3-->3-->(G(26)+1)-->2 << 3-->3-->(3-->3-->27-->2)-->2 = 3-->3-->3-->3

Thus we conclude that 3-->3-->3-->3 is larger than G(G(26)). This is far larger than Graham's Number. Note that even G(3^^3) is larger than Graham's Number and this is still much smaller than G(G(1)) which is way way smaller than G(G(26)). We can also provide an upper bound for Conway's tetratri. Simply observe:

3-->3-->27-->2 << G(27)

3-->3-->3-->3 = 3-->3-->(3-->3-->27-->2)-->2 << G(3-->3-->27-->2) << G(G(27)).

So...

G(G(26)) < 3-->3-->3-->3 < G(G(27))

Going even further we can say:

3-->3-->4-->3 = 3-->3-->(3-->3-->3-->3)-->2 >> G(G(G(26))) = G^3(26)

3-->3-->5-->3 = 3-->3-->(3-->3-->4-->3)-->2 >> G(G(G(G(26)))) = G^4(26)

3-->3-->6-->3 = 3-->3-->(3-->3-->5-->3)-->2 >> G(G(G(G(G(26))))) = G^5(26)

So we can say that 3-->3-->100-->3 is approximately G^99(26), where G^k denotes the G function iterated k times.

Put simply, when the last argument in a 4-argument chain = 3, we end up feeding Graham's function back into itself as the 2nd to last argument is increased. For convenience let's establish the following generalized notation:

Let

"G(n)a,b" be defined as follows:

if n=1 then G(1)a,b = a^^^...^^^a w/b ^s

else G(n)a,b = a^^^...^^^a w/G(n-1)a,b ^s

In this notation "a,b" is used to specify the base case, so we can say that:

Graham's Number = G(64)3,4
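Since the actual values of G(n)a,b are far beyond computation for n > 1, a program can only manipulate them symbolically. This sketch (mine, not Conway's or Graham's) unwinds the definition into a string, writing a^(k)a for "a with k up-arrows":

```python
def G(n, a, b):
    """Symbolic expansion of G(n)a,b as a string, where 'x^(k)x'
    stands for x^^...^^x with k up-arrows."""
    if n == 1:
        return f"{a}^({b}){a}"            # base case: b arrows
    return f"{a}^({G(n - 1, a, b)}){a}"   # arrow count is G(n-1)a,b

print(G(1, 3, 4))   # 3^(4)3, i.e. 3^^^^3, the first term of Graham's sequence
print(G(2, 3, 4))   # 3^(3^(4)3)3, i.e. 3 with 3^^^^3 up-arrows
```

Printing G(64, 3, 4) would nest this pattern 64 deep, which is exactly the structure of Graham's Number.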

The notation is more versatile than this however and will allow us to express exactly the numbers that are being produced by the chain arrows. For example:

3-->3-->1-->2 = 3-->3 = 3^3 = G(1)3,1

3-->3-->2-->2 = 3-->3-->(3-->3) = 3-->3-->27 = G(2)3,1

3-->3-->3-->2 = 3-->3-->(3-->3-->2-->2) = 3-->3-->(G(2)3,1) = G(3)3,1

...

3-->3-->n-->2 = G(n)3,1

We can go further and now look at what happens when the last argument = 3:

3-->3-->1-->3 = 3-->3 = G(1)3,1

3-->3-->2-->3 = 3-->3-->27-->2 = 3-->3-->(G(1)3,1)-->2 = G(G(1)3,1)3,1 = G^2(1)3,1

3-->3-->3-->3 = 3-->3-->(3-->3-->2-->3)-->2 = 3-->3-->(G^2(1)3,1)-->2 = G^3(1)3,1

...

3-->3-->n-->3 = G^n(1)3,1

By the time we reach 3-->3-->100-->3 Graham's Number has shrunk to a microscopic dot, and we haven't even exhausted 4-argument chains yet. Let the last argument = 4:

3-->3-->1-->4 = 3-->3 = G(1)3,1 = 27

3-->3-->2-->4 = 3-->3-->27-->3 = G^(G(1)3,1)(1)3,1

3-->3-->3-->4 = 3-->3-->(3-->3-->27-->3)-->3 = G^(3-->3-->27-->3)(1)3,1

3-->3-->4-->4 = 3-->3-->(3-->3-->3-->4)-->3 = G^(3-->3-->3-->4)(1)3,1

3-->3-->5-->4 = 3-->3-->(3-->3-->4-->4)-->3 = G^(3-->3-->4-->4)(1)3,1

etc.

At this point it can be a little difficult to get a handle on what is happening. Here is an easier way to think about it. Note that:

3-->3-->2-->3 = 3-->3-->27-->2 >> G(26) >> G(4) = G(G(0)) : G(0) = 4

3-->3-->3-->3 >> G(G(26)) >> G(G(G(0)))

So now in general we can say:

3-->3-->n-->3 >> GGG...GGGG0

where there are n "G"s in a row (parentheses have been removed for simplicity). So we have:

3-->3-->1-->4 = 27

3-->3-->2-->4 = 3-->3-->27-->3 >> GGG...GGG0 w/27 Gs

3-->3-->3-->4 >> GGGG...GGGG0 w/3-->3-->2-->4 Gs

3-->3-->4-->4 >> GGGGG...GGGGG0 w/3-->3-->3-->4 Gs

etc.

So when the last argument = 4 in a 4-argument chain, we get "G-towers" describing the height of G-towers. But we aren't done. When the last argument = 5, we then recursively plug back into this process. So then we would have as many G-towers describing G-towers as some previous result of G-towers describing G-towers. We are now reaching the limit of what is easily expressed in terms of the G-function. Here are some examples:

3-->3-->1-->5 = 27

3-->3-->2-->5 = 3-->3-->27-->4

3-->3-->3-->5 = 3-->3-->(3-->3-->27-->4)-->4

3-->3-->4-->5 = 3-->3-->(3-->3-->3-->5)-->4

etc.

The numbers grow very rapidly as the 4th argument is incremented. One way we can understand this is to define a hierarchy of G-functions. Let G_1(n) = G(n), G_k(1) = 26, and G_k(n) = G_(k-1)(G_k(n-1)). We already established that:

3-->3-->n-->3 >> G^(n-1)(26)

By the given definitions:

G^(n-1)(26) = G_2(n)

Next we look at:

3-->3-->1-->4 = 3-->3 > 26 = G_3(1)

3-->3-->2-->4 >> 3-->3-->G_3(1)-->3 >> G_2(G_3(1)) = G_3(2)