
Weak, Mixed & Other Alternative Operators


Introduction

In the previous article we were introduced to the hyper-operators and Knuth's up-arrow notation. When extending the operators beyond exponentiation a tacit assumption was made: that the natural continuation of the operators must follow from right-to-left resolution. Although this choice has the advantage of making the results larger, it is not a given that it is the only logical continuation. What would happen if, instead of continuing with right-to-left resolution, we used left-to-right resolution instead? What kinds of operators would result? I personally refer to the resulting operators as the weak operators, because of their strength relative to the ordinary hyper-operators. The ordinary hyper-operators can be re-designated the strong operators. Just as we have tetration, pentation, hexation, etc. we can now also have weak tetration, weak pentation, weak hexation, etc. How do these weak operators stack up against the strong operators? In this article we'll explore the values that result from this alternative set of hyper-operators, how to compute them, and how they compare to the strong operators. Later we'll explore an even greater generalization of the operators I call the mixed strength operators.

Robert Munafo's Circle Operators

The weak-operators arise quite naturally, just as the hyper-operators do. A case in point is a notation that Robert Munafo invented as a kid of 12. Much like myself, Munafo seems to have taken to large numbers from a very early age, and he quickly worked his way up from knowing the -illions to understanding the fundamentals of recursive functions.

As Munafo recounts on his own site, he learned about 100 at age 5, knew about 1,000,000 at age 6, and learned about a billion (1E9) and a trillion (1E12) at age 7. By age 8 he had learned about the vigintillion (1E63) from the school library. It instantly became a favorite large number of his. He also managed to count up to 35,000 in his spare time before giving up on the whole endeavor.

From the ages of 10-12 Munafo says he "re-invented" the dyadic operators. What Munafo had actually done was devise his own notation for extending exponentiation, by following the pattern established by the lower operations and resolving expressions from left to right. According to Munafo himself, he was aware of the right-associative (strong) operators at the time, but chose to work with the left-associative (weak) operators because he felt the numbers were more manageable.

So what exactly was this notation? In an email correspondence he explained its development to me.

Recall that we can define multiplication as:

b*p = b+b+b+ ... +b+b w/p b's

It is customary to resolve additions from left to right, not the other way around. Thus:

3*4 = 3+3+3+3 = 6+3+3 = 9+3 = 12

Exponentiation can be defined as:

b^p = b*b*b* ... *b*b*b w/p b's

Again order of operations normally dictates that we work from left to right. Thus:

3^4 = 3*3*3*3 = 9*3*3 = 27*3 = 81

This brings us to Munafo's first hyper-operator, "weak tetration". He decided to use an operator symbol formed by overlapping a multiplication cross with a plus sign, forming an 8-pointed star:

It's tempting to interpret this perhaps as a kind of double-multiplication, since this would be the 4th operator in the sequence. This "double-multiplication" sign would be placed between two arguments, just as with addition and multiplication, and it would expand out into a left-associative series of exponentiations:

This operator is more powerful than exponentiation, but less powerful than tetration. Munafo says at age 13 he went on to the next operator, weak pentation. For this operator he used a circle with a plus inscribed inside:

As a side note, this is also a planetary symbol, often used to represent "earth". As a symbol for the 5th operator, we might interpret the circle as representing the first 4 operation levels, with the plus representing the 5th. Just as before, this symbol is placed between two arguments, and the expression expands into a left-associative field of 8-pointed star operators:

Munafo says that sometime later, during the same year, he decided to generalize his notation so that it could be extended indefinitely. All of Munafo's operators would be replaced by a circle with a number inscribed inside representing the operator's order. Addition would be 1 inside a circle, multiplication 2 inside a circle, exponentiation 3 inside a circle, the 8-pointed star operator became 4 in a circle, and the plus in a circle became 5 in a circle. Better yet, Munafo could now extend at will with 6 in a circle, 7, 8, 9, etc. Thus Munafo had discovered his own operator hierarchy just as I had done with my polygons back in grade school.

Here are Munafo's operators:

Since these symbols are not ASCII friendly I will be using (n) to represent n inside a circle. Thus we have:

b(1)p = b+p

b(2)p = b*p

b(3)p = b^p

b(4)p = b(3)b(3) ... (3)b(3)b w/p b's

b(5)p = b(4)b(4) ... (4)b(4)b w/p b's

etc.
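This recursion is simple enough to run directly. The following Python sketch mimics Munafo's circled-number operators (the function name `circle` and argument order are mine, not Munafo's; the expansion follows the left-to-right rule described above):

```python
def circle(n, b, p):
    """A sketch of Munafo's circled-number operator b (n) p,
    expanded left to right (function name is mine)."""
    if n == 1:
        return b + p
    if n == 2:
        return b * p
    if n == 3:
        return b ** p
    acc = b
    for _ in range(p - 1):
        acc = circle(n - 1, acc, b)  # ((b (n-1) b) (n-1) b) ... w/p b's
    return acc

print(circle(4, 3, 3))  # 3(4)3 = (3^3)^3 = 19,683
```

Note that the loop folds from the left: the running result `acc` becomes the left argument of each new (n-1)-level operation, which is exactly what distinguishes these operators from the strong ones.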

Shortly after this Munafo decided to use a raised circle to represent the strong-operators, and a lowered circle to represent the weak-operators. Thus the strong operators can be represented as:

b(4)p = b^^p

b(5)p = b^^^p

b(6)p = b^^^^p

etc.

and the weak operators can be represented as:

b(4)p = "weak tetration"

b(5)p = "weak pentation"

b(6)p = "weak hexation"

etc.

Munafo is not the only one who has devised the weak operators independently. You may recall that Alistair Cockburn's fuga(n) was essentially the same thing as n(4)n. That is, fuga(n) = n weakly tetrated to the n:

fuga(n) = ((...((n^n)^n)...)^n)^n w/n n's

I have also had some interest in the weak-operations, especially the ones past weak-tetration. I devised my own notation, which can be generalized to create an entire tree of functions, as we will see later in the article. Clearly, just as the hyper-operators have been re-invented time and time again, so have the weak-operators. So it would seem that they are both logical extensions of exponentiation.

Down-Arrows

Now let me present my own notation for the weak-operators. Sometime after 2004, when I had re-entered the subject of large numbers, I came up with an idea for how to express the weak-operators that seemed quite natural. Since the strong operators are represented with up-arrows, I figured the most sensible thing was to represent the weak-operators using "down arrows". Thus weak-tetration would be:

b↓↓p

Because this is sometimes tricky to write on computers we can use the following convenient ASCII form when needed:

bvvp

Here I am using lower case "v"s to represent down-turned carets. Just as expressions using up-arrows are always evaluated from right to left, regardless of the ranks of the operations, an expression using down-arrows is always evaluated from left to right, regardless of rank.

Thus we can say:

b↓↓p = b↓b↓b↓ ... ↓b↓b w/p b's

Interestingly it turns out that:

b↑p = b↓p

However when a series of exponents is written using up-arrows we will resolve from right to left, and when a series of exponents is written using down-arrows we will resolve from left to right. So as a simple example we have:

3↓↓4 = 3↓3↓3↓3 = 27↓3↓3 = 19,683↓3 = 7,625,597,484,987

&

3↑↑4 = 3↑3↑3↑3 = 3↑3↑27 = 3↑7,625,597,484,987 ~ 10^(3.63833*10^12)
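The difference between the two resolution orders can be seen in a few lines of Python, folding a list of p copies of b from the left versus from the right (the function names are mine; note that 3↑↑4 itself is far too large to print):

```python
from functools import reduce

def weak_tower(b, p):
    # left-to-right: ((b^b)^b)^... — the weak (down-arrow) reading
    return reduce(lambda acc, x: acc ** x, [b] * p)

def strong_tower(b, p):
    # right-to-left: b^(b^(b^...)) — the strong (up-arrow) reading
    acc = b
    for _ in range(p - 1):
        acc = b ** acc
    return acc

print(weak_tower(3, 4))    # 3vv4 = 7,625,597,484,987
print(strong_tower(2, 4))  # 2^^4 = 65,536 (compare 2vv4 = 256)
```

The only difference between the two functions is which side of the exponentiation the running result lands on, yet that one choice separates double-exponential growth from tetrational growth.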

We can also generalize this to any number of down-arrows, just as Knuth did for up-arrows:

b↓↓p = b↓b↓b↓ ... ↓b↓b w/p b's

b↓↓↓p = b↓↓b↓↓b↓↓ ... ↓↓b↓↓b w/p b's

b↓↓↓↓p = b↓↓↓b↓↓↓b↓↓↓ ... ↓↓↓b↓↓↓b w/p b's

etc.

where all of these expressions are resolved from left to right. A more formal definition can also be given if we let:

b↓^cp = b↓↓↓ ... ↓↓↓p w/c ↓s

We can then use the following formal definition, where the down-arrow is treated as a 3 parameter function:

Formal Definition of Weak Operators

Let b↓^cp be defined when b, c, and p are positive integers as follows:

1. c=1 ; b↓^1p = b^p

2. p=1 ; b↓^c1 = b

3. b↓^cp = (b↓^c(p-1))↓^(c-1)b

This formal definition will define the same functions as the informal ones. Now that we've established the down-arrow notation, let's find out what kinds of numbers the weak-operators produce.
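The three rules translate directly into a short recursive Python function (a sketch; the name `weak` and the argument order are my own choices):

```python
def weak(b, c, p):
    """b followed by c down-arrows, applied to p, per rules 1-3 above."""
    if c == 1:
        return b ** p                          # rule 1: one arrow is exponentiation
    if p == 1:
        return b                               # rule 2: a single copy of b
    return weak(weak(b, c, p - 1), c - 1, b)   # rule 3: peel off one copy of b

print(weak(2, 2, 4))  # 2vv4 = 256
print(weak(2, 3, 3))  # 2vvv3 = (2vv2)vv2 = 4vv2 = 256
```

Rule 3 puts the recursive result on the *left* of the lower-order operation, which is precisely the left-to-right resolution that makes these the weak operators.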

Computing Weak-Tetration

We'll begin with the smallest of the weak-operators: weak-tetration. Before getting into devising a general formula it is usually a good idea to work out some examples first to get a feel for what's going on, and then generalize it.

Let's begin by getting the trivial cases out of the way first.

By definition:

b↓↓1 = b

since we would only have 1 copy of b with no operators at all. What if b=1:

1↓↓p = 1↓1↓ ... ↓1↓1 w/p 1s = ((...((1^1)^1)...)^1)^1 = 1

So far this is identical to the behavior of strong-tetration. The smallest non-trivial example would then be 2↓↓2. Let's see what happens:

2↓↓2 = 2↓2 = 2^2 = 2*2 = 2+2 = 4

This should be a familiar pattern. We can see that just like the hyper-operators, when both arguments are 2 the weak-operators will eventually reduce to 4. So what happens if we increase the "weak-tetrate", p, to 3:

2↓↓3 = 2↓2↓2 = (2^2)^2 = 4^2 = 16

Interestingly 2^(2^2) = (2^2)^2, so 2↑↑3 = 2↓↓3. Does this trend continue?

2↓↓4 = 2↓2↓2↓2 = 4↓2↓2 = 16↓2 = 256

Recall that 2↑↑4 = 2↑2↑2↑2 = 2^2^4 = 2^16 = 65,536. So 2↓↓4 < 2↑↑4. This trend will continue, with weak-tetration falling further and further behind strong tetration:

2↓↓5 = (2↓↓4)^2 = 256^2 = 65,536 < 2↑↑5 = 2^65,536

2↓↓6 = (2↓↓5)^2 = 65,536^2 = 4,294,967,296 < 2↑↑6 = 2^(2^65,536)

etc.

Compared to strong-tetration, weak-tetration seems to be plodding along. By ordinary standards however weak-tetration base-2 already grows really fast. You'll notice that each time the weak-tetrate increases by 1, we are squaring the previous result. Converting the results into ordinary powers of two we find that:

2(4)1 = 2^1

2(4)2 = 2^2

2(4)3 = 2^4

2(4)4 = 2^8

2(4)5 = 2^16

2(4)6 = 2^32

etc.

Interestingly we find that the exponent is itself growing exponentially. This is what we can refer to as exponential exponential growth, or simply double-exponential growth. A simple formula governs the results. We find that:

2↓↓n = 2^2^(n-1)
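Both the squaring pattern and this closed form can be checked directly with a quick sketch:

```python
x = 2
for n in range(1, 11):
    # the nth base-2 weak-tetrate equals 2^(2^(n-1))
    assert x == 2 ** (2 ** (n - 1))
    x = x * x  # raising the weak-tetrate by 1 squares the result
```

Since Python integers have arbitrary precision, the check is exact rather than approximate.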

The repeated squaring means that the number of digits in the results will roughly double every time the weak-tetrate increases by 1. It is a simple matter to prove that the square of any n digit number has at most 2n digits. To prove this observe that any n digit positive integer, k, must be bounded by:

999 ... 999 w/n-1 9s < k < 1000 ... 000 w/n 0s

It follows that:

k < 10^n

implies that...

k^2 < 10^2n

10^2n contains 2n+1 digits, but any integer less than 10^2n must contain 2n digits or less. Since k^2 is less than 10^2n and must also be an integer it follows that k^2 must have 2n digits or less.

If you need additional proof, just consider the largest example of an n digit number and see if the statement holds:

9^2 = 81 (square has twice the digits)

99^2 = 9801 (square has twice the digits)

999^2 = 998,001 (square has twice the digits)

etc.

So even in the absolute best case scenario, an n digit number just barely fails to make it past 2n digits!

We can also prove that k^2 must have at least 2n-1 digits. Simply observe that:

10^(n-1) <= k

which implies that...

10^(2n-2) <= k^2

Again 10^(2n-2) must have 2n-1 digits. Thus no matter what number we're squaring or how many digits it has, we can rest assured that it will either have exactly twice the number of digits as the original number, or one less.

It is even possible to determine when this should happen. To find the smallest n digit number whose square has 2n digits we simply take the square root of the smallest 2n digit number, which is 10^(2n-1). Thus we have:

sqrt(10^(2n-1)) = 10^(n-1/2) = (10^n)/sqrt(10) ~ 3.162*10^(n-1)

This means that if the leading digit is greater than 3, then the number of digits will necessarily double. If it is less than 3 it will be one less than double. If the leading digit is a 3, we can determine whether or not the number of digits will exactly double by inspecting the decimal expansion of 1/sqrt(10).

So we can say that the number of digits is doubling with base-2 weak-tetration. Here is a table of values:

All the predictions pan out for the first ten members of this sequence. 2 has one digit, and its square has one digit because 2 < 3.162. 4 has one digit, but its square has two because 4 > 3.162. 16 has two digits, but its square has 3 since 16 < 31.62. 256 has three digits but its square has 5 because 256 < 316.2. One can continue down the list to confirm that in every case the predictions prove to be true.
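Since the original table does not reproduce well here, a short script can regenerate the first ten members along with the digit predictions (a sketch; "doubles" corresponds to a leading digit above the 3.162... threshold derived above):

```python
x = 2
for n in range(1, 11):
    d = len(str(x))          # digits of the nth base-2 weak-tetrate
    sq_d = len(str(x * x))   # digits of its square (the next weak-tetrate)
    status = "doubles" if sq_d == 2 * d else "falls one short of doubling"
    print("2vv%d has %d digits; its square %s" % (n, d, status))
    x = x * x
```

Running this reproduces the checks in the paragraph above: the digit count doubles exactly when the leading digit exceeds 3.162..., and falls one short otherwise.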

It won't take long for 2↓↓n to get too large to compute. Since the number of digits is doubling with each term, the number of digits will quickly run beyond what we could store on a computer. By 2↓↓20 the number of digits will be 157,827, already pretty large though certainly not beyond the ability of computers to compute and store. However by 2↓↓30 the number of digits will reach 161,614,249, which is beyond what I have computed. This would already take about 100 MiB. Still it could be computed and stored. 2↓↓40 would have 165,492,990,272 digits! It would take 100 GiB to store, taking up a sizable chunk of a hard drive. By 2↓↓50, at least 100 TiB would be needed to store the number! Before we reached 2↓↓100, we would exceed all the computer memory in the world!

And yet, this is A LOT slower than 2↑↑n which exceeds all the memory in the world already by the 6th term! In fact 2↑↑6 would require more memory than could be stored in the observable universe, and is already bigger than a googolplex.

What if we use base-3? Will weak-tetration grow significantly faster?

Let's see:

3↓↓1 = 3

3↓↓2 = 3↓3 = 27

3↓↓3 = 3↓3↓3 = 27↓3 = 19,683

3↓↓4 = 3↓3↓3↓3 = 27↓3↓3 = 19,683↓3 = 7,625,597,484,987

3↓↓5 = (3↓3↓3↓3)↓3 = 7,625,597,484,987↓3 =

443,426,488,243,037,769,948,249,630,619,149,892,803

3↓↓6 = (3↓3↓3↓3↓3)↓3 =

443,426,488,243,037,769,948,249,630,619,149,892,803↓3 =

87,189,642,485,960,958,202,911,070,585,860,771,696,964,072,404,731,750,085,525,219,

437,990,967,093,723,439,943,475,549,906,831,683,116,791,055,225,665,627

It can definitely be seen that this is growing faster. 2↓↓6 was only 4,294,967,296. In fact, every new term in the sequence is the cube of the previous term! This also means that the number of digits is roughly tripling every time. This can again be proven using the same argument we used for 2↓↓n. There is one slight difference however, because the cube of an n digit number may have less than 3n-1 digits. To see this again observe that, if k is an n digit number then:

10^(n-1) <= k < 10^n

implies

10^(3n-3) <= k^3 < 10^(3n)

So k^3 must have 3n-2, 3n-1, or 3n digits. A more general pattern can now be observed. The mth power of k will always have at least m(n-1)+1 digits, and at most mn digits:

10^(n-1) <= k < 10^n

implies

10^(mn-m) <= k^m < 10^(mn)

implies

k^m has at least mn-m+1 digits and at most mn digits

This pattern can be seen in the examples of base-3 weak-tetration:

3↓↓1 = 3 has 1 digit

3↓↓2 = 27 has 2 digits, which is 3x1-1.

3↓↓3 = 19,683 has 5 digits, which is 3x2-1.

3↓↓4 = 7,625,597,484,987 has 13 digits, which is 3x5-2.

3↓↓5 = 443,426,488,243,037,769,948,249,630,619,149,892,803 has 39 digits, which is 3x13.

etc.
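These digit bounds for mth powers can be spot-checked with a small sketch (the helper name `digits` is mine):

```python
def digits(k):
    return len(str(k))

# an n digit number k, raised to the mth power,
# has between m*(n-1)+1 and m*n digits
for k in (7, 27, 19683, 7625597484987):
    n = digits(k)
    for m in (2, 3, 4, 5):
        d = digits(k ** m)
        assert m * (n - 1) + 1 <= d <= m * n
```

For instance 19,683 has 5 digits, and its cube 7,625,597,484,987 has 13 = 3x5-2 digits, at the bottom of the allowed range, just as listed above.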

Let's see what 4↓↓n looks like:

4↓↓1 = 4 (1 digit)

4↓↓2 = 4↓4 = 256 (4x1-1 digits)

4↓↓3 = 4↓4↓4 = 256↓4 = 4,294,967,296 (4x3-2 digits)

4↓↓4 = 4↓4↓4↓4 = 256↓4↓4 = 4,294,967,296↓4 =

340,282,366,920,938,463,463,374,607,431,768,211,456 (4x10-1 digits)

By now the pattern should be clear. Each new term will be the 4th power of the previous term. For 5↓↓n, each term is the 5th power of the previous term, and so on. Now that we have a basic understanding of how weak-tetration works, can we come up with a general formula for computing any value? In fact this turns out to be quite easy. Simply recall the exponential law:

(b^n)^m = b^(nm)

Now recall that:

b↓↓p = ((...((b^b)^b)...)^b)^b w/p b's

Combining these definitions we find that:

b↓↓p = ((...((b^b)^b)...)^b)^b w/p b's = b^(b*b*b* ... *b*b w/p-1 b's) =

b^(b^(p-1))
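This general formula is easy to confirm against a direct left-to-right evaluation (a sketch; `weak_tet` is my name for the helper):

```python
def weak_tet(b, p):
    acc = b
    for _ in range(p - 1):
        acc = acc ** b  # each step raises the running result to the bth power
    return acc

# b vv p = b^(b^(p-1)) for a range of small bases and weak-tetrates
for b in range(2, 6):
    for p in range(1, 7):
        assert weak_tet(b, p) == b ** (b ** (p - 1))
```

The check is exact, since Python's integers are arbitrary precision.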

This simple formula shows that weak-tetration grows hyper-exponentially. This means it grows faster than ordinary exponential functions, but slower than tetration. To show that it grows faster than any exponential function, imagine an exponential function with large base M, and an advantage, C:

exp(p) = M^(p+C)

Now imagine a weak-tetrational function with base-b where b>1:

w(p) = b^b^(p-1)

We can convert M to base-b using logarithms:

M^(p+C) = (b^(log_b(M)))^(p+C) = b^[log_b(M)*(p+C)]

Since both exp(p) and w(p) are now expressed in terms of base-b, we can safely ignore the base and consider only the exponents. Does there exist a value, v, such that:

log_b(M)*(p+C) < b^(p-1)

for all p > v? There must be, because log_b(M)*(p+C) is a linear function of p, whereas b^(p-1) is exponential. So the exponential will eventually overtake it no matter how large M and C might be. Thus we can say:

w(p) is in a higher growth class than exp(p).
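For concrete (and entirely hypothetical) choices of M and C, we can even locate the crossover point v numerically:

```python
import math

b, M, C = 2, 10 ** 6, 10 ** 6  # hypothetical large base and advantage
slope = math.log(M, b)         # log_b(M), the slope of the linear exponent

p = 1
while slope * (p + C) >= b ** (p - 1):
    p += 1
print(p)  # smallest p where the weak-tetrational exponent wins
```

Even with a base and advantage of a million, the exponential side of the comparison takes over within a few dozen steps, illustrating just how little a linear head start is worth.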

w(p) however must be slower than tetrational growth. Let:

tet(p) = c^^p

where "c" is a small base such that c > e^e^(-1) ~ 1.4446678. The reason "c" has to be greater than this value is that when the base of iterated exponentiation is less than this value, the power tower converges to some value less than or equal to e. Because such a function has a limit or ceiling value, its growth rate would be equivalent to a horizontal line. In other words, its growth would be zero. Any tetration function with base greater than e^e^(-1) however will grow tetrationally, although the values won't really take off until they pass e. (We'll explore more about power towers in Chapter 4 of this section.) Nonetheless, even if the function takes a long time to reach e, it will reach it in a finite number of steps and will then quickly overtake w(p) no matter how large the base b is.
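The convergence claim for small bases is easy to see numerically: iterating x -> c^x with a base below e^e^(-1) settles to a fixed point no greater than e (a quick sketch):

```python
import math

c = 1.4  # below e^e^(-1) ~ 1.4446678, so the tower c^c^c^... converges
x = c
for _ in range(200):
    x = c ** x

print(x)  # the fixed point t satisfying t = c^t, which is less than e
```

With c = 1.4 the iteration settles near 1.8866; pushing c past e^e^(-1) instead sends the iterates off to infinity, which is exactly why that base is the threshold for tetrational growth.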

To prove this assume the base equals some large number M, and w(p) is given an advantage, A. Then we have:

w(p+A) = M^M^(p-1+A)

Even though M is large, we can still change the base to c:

M^M^(p-1+A) = (c^(log_c(M)))^M^(p-1+A) = c^(log_c(M)*M^(p-1+A)) =

c^(c^(log_c(log_c(M)))*c^[log_c(M)*(p-1+A)]) =

c^c^[log_c(M)*(p-1+A) + log_c(log_c(M))]

Note that when p > 2, tet(p) = c^c^tet(p-2). Thus we can eliminate the first two bases, and try to see whether tet(p-2) ever surpasses log_c(M)*(p-1+A) + log_c(log_c(M)). Although this looks complicated, everything here is a constant except p, so this is again just a linear function. Its slope is log_c(M).

Since it's linear, it must grow slower than any exponential function. Let:

exp(p) = c^p

To prove that tet(p) grows faster than w(p) we now simply have to show that tet(p-2) grows faster than c^c^exp(p) = c^c^c^p. Again when p>5, tet(p-2) = c^c^c^tet(p-5), so we can drop the first three bases and we just have to show that tet(p-5) grows faster than p. To prove this we simply need to show that tet(p-5) grows faster than a linear equation, even though "c" may be very close to e^e^(-1). We can do that by showing that the differences between terms