SSC2 [Split from "I live!"]

Discussion of tapertopes, uniform polytopes, and other shapes with flat hypercells.

Re: SSC2 [Split from "I live!"]

Postby Deedlit » Wed Jan 04, 2012 5:54 pm

Sorry for the late reply, it's been a busy holiday season!

quickfur wrote:
However, the notion of transfinite ordinal, in a sense, is "cheating" - we are jumping up to the transfinite and then coming back down to the finite.


Hmmm, I disagree that ordinals are "cheating". The only reason ordinals are considered "transfinite" is because their ordering is more complicated than that of the integers. But the same is true of many sets that we consider "finite"! Take, for example, ordered pairs of natural numbers; under their natural ordering, they have the same order type as omega^2. Similarly finite tuples of natural numbers have order type omega^omega. So taking a function f(omega^n a_n + omega^(n-1) a_(n-1) + ... + a_0) is no different from taking a function f(a_n, a_(n-1), ..., a_0), and indeed, they can be matched up 1-1. And this ordering is in no way incidental: to show that Bowers linear arrays terminate, you must use the fact that finite tuples are well-ordered under their natural ordering.
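To make the termination point concrete, here is a minimal Python sketch of the three-entry case. It uses the base cases stated later in this post ({a,1,c} = a, {a,b,1} = a+b) together with the recursion {a,b,c} = {a,{a,b-1,c},c-1}; the function name `eval3` and the explicit `bound` check are mine. Every recursive call strictly decreases the pair (c, b) in the lexicographic order on tuples, and that order is a well-order, which is exactly why the evaluation must terminate.

```python
# Why Bowers' linear arrays terminate: each recursive step strictly
# decreases the tuple (c, b) in lexicographic order, a well-order,
# so no infinite descending chain of calls is possible.
# Base cases follow the convention used later in this post:
# {a,1,c} = a, {a,b,1} = a+b; rule: {a,b,c} = {a, {a,b-1,c}, c-1}.

def eval3(a, b, c, bound=None):
    """Evaluate the 3-entry array {a,b,c}, asserting that the measure
    (c, b) strictly decreases at every recursive call."""
    if bound is not None:
        assert (c, b) < bound, "well-order measure failed to decrease"
    if b == 1:                  # {a,1,c} = a
        return a
    if c == 1:                  # {a,b,1} = a+b
        return a + b
    inner = eval3(a, b - 1, c, bound=(c, b))     # same c, smaller b
    return eval3(a, inner, c - 1, bound=(c, b))  # smaller c

print(eval3(3, 3, 2), eval3(3, 3, 3))  # 9 27
```

Python's tuple comparison is exactly the lexicographic order used in the termination argument, so the assertion doubles as a machine check of the well-ordering claim on every evaluation path.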

quickfur wrote:
[...] As for Bowers' notation, his original array notation is equivalent to the Wainer hierarchy below the ordinal omega^omega. His extended array notation in n dimensions is equivalent to the Wainer hierarchy below the ordinal omega^omega^n.

Actually, this is what I had thought too, when I first examined his notation. In fact, I developed what I thought was a superior notation with my "exploding tree function", which spans functions indexed below the Feferman-Schütte ordinal Gamma_0. However, after some correspondence with him and a more careful analysis of his notation, I discovered that my function attains only the level of 5-element arrays.


I took a look at your exploding tree function. It is quite interesting, but unfortunately, it does not come anywhere near spanning the functions indexed below Gamma_0. It sits at level omega+1 of the Wainer hierarchy.

We agree that, starting from a tree that goes one node to the left and then m nodes to the right, the final tree will be a right-branching tree of length about A(m), where A is the Ackermann function. So what about a tree that goes two nodes to the left and then m nodes to the right? Well, your transformation rule only applies to subtrees that are double right-branching chains, so the same thing happens to the left subtree as happened in our previous example: you get a right-branching chain of length A(m). So the full tree will be a double right-branching chain with A(m) left nodes, which, by the previous argument, transforms into a right-branching chain of length about A(A(m)) nodes. Similarly, a tree that goes three nodes to the left and then m nodes to the right will transform into a right-branching tree of length about A(A(A(m))) nodes, and a tree that goes m nodes to the left and then m nodes to the right will transform into a right-branching tree of length about A^m(m), i.e. A iterated m times. This is the omega+1st level of the Wainer hierarchy.
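For reference, the one-argument A(m) used here is the diagonal A(m) = ack(m, m) of the two-argument Ackermann-Peter function. A quick sketch (the iterates A(A(...A(m)...)) explode far too fast to compute, but the ingredients are easy to state and the first few diagonal values are checkable):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ack(m, n):
    """Two-argument Ackermann-Peter function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

def A(m):
    """One-argument diagonal A(m) = ack(m, m), the version used above."""
    return ack(m, m)

# The tree with m left nodes and m right nodes collapses to a chain of
# length about A iterated m times on m -- the omega+1 level of the
# Wainer hierarchy. Only the first few values of A itself are feasible:
print(A(1), A(2), A(3))  # 3 7 61
```

Already A(4) = ack(4, 4) = 2^2^2^65536 - 3 is hopeless to write down, which is the point: a single extra left node in the tree wraps one more layer of A around the whole computation.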

So, while Bowers' linear arrays surpass your ETF (in fact, {a,b,1,2} is enough), that in no way implies that Bowers' linear arrays surpass Gamma_0.


quickfur wrote:
Looks are deceiving, because it's tempting to collapse his array notation into "one element = one diagonalization", which is actually far from the truth. Although Bowers' linear arrays are written in a linear form, they do not "scale" in a linear way at all. Every element added to a linear array represents a gigantic leap in the degree of diagonalization.


Actually, Bowers' linear arrays match up very nicely with the Wainer hierarchy below omega^omega.

{a,b,1} = a+b

{a,1,2} = a
{a,2,2} = {a,{a,1,2},1} = {a,a,1} = 2a
{a,b,2} = ab

{a,1,3} = a
{a,2,3} = {a,{a,1,3},2} = {a,a,2} = a^2
{a,b,3} = a^b

{a,1,4} = a
{a,2,4} = {a,{a,1,4},3} = {a,a,3} = a^a
{a,b,4} = a^^b

{a,b,c} = a {c-2} b, where {c-2} stands for c-2 Knuth up-arrows
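This offset convention is easy to check by machine at the few points small enough to compute. A sketch, with `arrow(a, n, b)` standing for Knuth's a {n} b (the function names are mine):

```python
def arrow(a, n, b):
    """Knuth's a {n} b: n up-arrows. arrow(a, 1, b) = a**b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # a {n} 0 = 1 for n >= 2
    return arrow(a, n - 1, arrow(a, n, b - 1))

def bowers3(a, b, c):
    """Three-entry array under the offset convention used in this post:
    {a,b,1} = a+b, {a,b,2} = a*b, and {a,b,c} = a {c-2} b for c >= 3."""
    if c == 1:
        return a + b
    if c == 2:
        return a * b
    return arrow(a, c - 2, b)

print(bowers3(3, 3, 3))  # 3 {1} 3 = 27
print(bowers3(3, 3, 4))  # 3 {2} 3 = 3^^3 = 7625597484987
```

Anything beyond this, e.g. {3,3,5} = 3 {3} 3 = 3^^^3, is already a power tower of 7625597484987 threes and cannot be evaluated, which is why the checks stop here.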

F_0(x) = x+2 (I'll use x+2 to match up better with the linear arrays with base 3)
F_1(x) = F_0^x(x) = 3x
F_2(x) = F_1^x(x) = 3^x * x > 3^x
F_3(x) = F_2^x(x) > 3^3^...^3^x > 3^^(x+1)
F_4(x) = F_3^x(x) > 3^^3^^...^^3^^x > 3^^^(x+1)
F_n(x) > 3 {n-1} (x+1)

So F_n(x) > {3,x+1,n+1}
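The finite levels of this hierarchy are directly computable for tiny inputs. A sketch of the definition F_0(x) = x+2, F_(n+1)(x) = F_n^x(x) (the function name `F` is mine; only levels up to F_2 at small x are feasible, since F_3 already tetrates):

```python
def F(n, x):
    """Wainer / fast-growing hierarchy at finite index n, with the base
    F_0(x) = x + 2 chosen above and F_{n+1}(x) = F_n applied x times to x."""
    if n == 0:
        return x + 2
    y = x
    for _ in range(x):  # iteration count is the ORIGINAL x
        y = F(n - 1, y)
    return y

print(F(1, 3), F(2, 3), F(2, 4))  # 9 81 324
```

These match the closed forms above: F_1(3) = 3*3 = 9, F_2(3) = 3^3 * 3 = 81, F_2(4) = 3^4 * 4 = 324.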

{a,1,1,2} = a
{a,2,1,2} = {a,a,{a,1,1,2},1} = {a,a,a} = a {a-2} a
{a,3,1,2} = {a,a,{a,2,1,2},1} = {a,a,{a,a,a}} = {a,a,a {a-2} a} = a {a {a-2} a - 2} a
{a,n,1,2} = a_n, where a_1 = a, a_(n+1) = a {a_n - 2} a.

F_w(x) = F_x(x) > 3 {x-1} (x+1) (I will use w as shorthand for omega)
F_(w+1)(x) = F_w^x(x) > b_x, where b_0 = x and b_(n+1) = 3 {b_n - 1} (b_n + 1)

In particular, for x >= 3, F_(w+1)(x) > {3,x+1,1,2}

More generally, for x >= 3, a >= 0, and b >= 1, F_(w*a+b)(x) > {3,x+1,b,a+1}

Proof by induction:

a=0, b=1: F_1(x) = 3x > x+4 = {3,x+1,1,1}

a=0, b>1:

F_(b-1)(x) > {3,x+1,b-1} > {3,3,b-1} = {3,{3,1,b},b-1} = {3,2,b}
F_(b-1)^2(x) > {3,{3,2,b},b-1} = {3,3,b}
...
F_(b-1)^x(x) > {3,{3,x,b},b-1} = {3,x+1,b}

So F_b(x) > {3,x+1,b}

a>0, b=1:

F_(w*a)(x) = F_(w*(a-1)+x)(x) > {3,x+1,x,a} > {3,3,3,a} = {3,3,{3,1,1,a+1},a} = {3,2,1,a+1}
F_(w*a)^2(x) = F_(w*a)(F_(w*(a-1)+x)(x)) > F_(w*a)({3,2,1,a+1}) > {3,{3,2,1,a+1}+1,{3,2,1,a+1},a} > {3,3,{3,2,1,a+1},a} = {3,3,1,a+1}
...
F_(w*a)^x(x) > {3,{3,x,1,a+1}+1,{3,x,1,a+1},a} > {3,3,{3,x,1,a+1},a} = {3,x+1,1,a+1}

So F_(w*a+1)(x) > {3,x+1,1,a+1}

a>0, b>1:

F_(w*a+b-1)(x) > {3,x+1,b-1,a+1} > {3,3,b-1,a+1} = {3,{3,1,b,a+1},b-1,a+1} = {3,2,b,a+1}
F_(w*a+b-1)^2(x) > {3,{3,2,b,a+1},b-1,a+1} = {3,3,b,a+1}
...
F_(w*a+b-1)^x(x) > {3,{3,x,b,a+1},b-1,a+1} = {3,x+1,b,a+1}

So F_(w*a+b)(x) > {3,x+1,b,a+1}

QED
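The a=0 base case of this induction can even be spot-checked numerically at the few points where both sides still fit in a computer (the proposition needs x >= 3, and anything past b = 2 is hopeless to evaluate). The helpers are re-defined here so the snippet stands alone:

```python
def F(n, x):
    # Wainer hierarchy with F_0(x) = x+2, F_{n+1}(x) = F_n^x(x)
    if n == 0:
        return x + 2
    y = x
    for _ in range(x):
        y = F(n - 1, y)
    return y

def arrow(a, n, b):
    # Knuth's a {n} b
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

def bowers3(a, b, c):
    # This post's convention: {a,b,1} = a+b, {a,b,2} = ab,
    # {a,b,c} = a {c-2} b for c >= 3
    if c == 1:
        return a + b
    if c == 2:
        return a * b
    return arrow(a, c - 2, b)

# Spot-check F_b(x) > {3, x+1, b} where both sides are still computable.
for b in (1, 2):
    for x in (3, 4, 5):
        assert F(b, x) > bowers3(3, x + 1, b), (b, x)
print("base case holds at all checked points")
```

For instance F_2(3) = 81 > {3,4,2} = 12 and F_2(5) = 1215 > {3,6,2} = 18, in line with the first lines of the induction.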

I'll do one more proof, for five-element arrays, and you'll see how similar it is; it can easily be extended to arrays of length 6 or more.

For x >= 3, a >= 1, and b, c >= 0, F_(w^2*c + w*b + a)(x) > {3,x+1,a,b+1,c+1}

Proof by induction again.

c=0: proof shown above.

a > 1:

F_(w^2*c + w*b + a-1)(x) > {3,x+1,a-1,b+1,c+1} > {3,{3,1,a,b+1,c+1},a-1,b+1,c+1} = {3,2,a,b+1,c+1}
F_(w^2*c + w*b + a-1)^2(x) > {3,{3,2,a,b+1,c+1},a-1,b+1,c+1} = {3,3,a,b+1,c+1}
...
F_(w^2*c + w*b + a-1)^x(x) > {3,{3,x,a,b+1,c+1},a-1,b+1,c+1} = {3,x+1,a,b+1,c+1}

So F_(w^2*c + w*b + a)(x) > {3,x+1,a,b+1,c+1}

a = 1, b > 0:

F_(w^2*c + w*b)(x) = F_(w^2*c + w*(b-1) + x)(x) > {3,x+1,x,b,c+1} > {3,3,3,b,c+1} = {3,3,{3,1,1,b+1,c+1},b,c+1} = {3,2,1,b+1,c+1}
F_(w^2*c + w*b)^2(x) > F_(w^2*c + w*b)({3,2,1,b+1,c+1}) > {3,{3,2,1,b+1,c+1},{3,2,1,b+1,c+1},b,c+1} > {3,3,{3,2,1,b+1,c+1},b,c+1} = {3,3,1,b+1,c+1}
...
F_(w^2*c + w*b)^x(x) > F_(w^2*c + w*b)({3,x,1,b+1,c+1}) > {3,{3,x,1,b+1,c+1},{3,x,1,b+1,c+1},b,c+1} > {3,3,{3,x,1,b+1,c+1},b,c+1} = {3,x+1,1,b+1,c+1}

So F_(w^2*c + w*b + 1)(x) > {3,x+1,1,b+1,c+1}

a=1, b=0, c>0:

F_(w^2*c)(x) = F_(w^2*(c-1) + w*x)(x) = F_(w^2*(c-1) + w*(x-1) + x)(x) > {3,x+1,x,x,c} > {3,3,3,3,c} = {3,3,3,{3,1,1,1,c+1},c} = {3,2,1,1,c+1}
F_(w^2*c)^2(x) > F_(w^2*c)({3,2,1,1,c+1}) > {3,{3,2,1,1,c+1}+1,{3,2,1,1,c+1},{3,2,1,1,c+1},c} > {3,3,3,{3,2,1,1,c+1},c} = {3,3,1,1,c+1}
...
F_(w^2*c)^x(x) > F_(w^2*c)({3,x,1,1,c+1}) > {3,{3,x,1,1,c+1}+1,{3,x,1,1,c+1},{3,x,1,1,c+1},c} > {3,3,3,{3,x,1,1,c+1},c} = {3,x+1,1,1,c+1}

So F_(w^2*c + 1)(x) > {3,x+1,1,1,c+1}

QED

Whew! So five-element arrays are bounded above by quadratic polynomials in omega in the Wainer hierarchy. Similarly, six-element arrays are bounded above by cubic polynomials in omega, and n-element arrays are bounded above by polynomials of degree n-3 in omega.

quickfur wrote:And this is just linear arrays. Even a 2D array with a single element in the second row represents the effect of linear arrays multiplied N-fold, where N is the value of an arbitrary linear array. In other words, the second row does not represent a simple extension of the first row; it takes the value of a linear array (which in itself is a gigantic number) and uses that value to derive the length of the linear array produced by decrementing the second row element by 1. Every decrement of the second row element applies a linear array function to the length (not the value!) of the linear array in the result. And when the second row has multiple elements, this kind of feedback diagonalization is simply unimaginable.

Actually, it is more than that; when the first number in the second row is a 2, all the other numbers are 1's, and the first row is {b, p, 2}, then every decrement of the prime element, not the second-row element, applies the linear-array function to the length of the linear array. Increasing the second-row element does much more than applying the linear-array function to the length of the linear array.

But the Wainer hierarchy works the same way! The second row in Bowers' array notation corresponds to the coefficients of omega^(omega + n) as n ranges over the nonnegative integers. Take, for example, the index omega^omega + 1. We have already shown that F_(omega^n)(x) corresponds to a Bowers array of length n+3, so F_(omega^omega)(x) takes x to a Bowers array of length x+3. F_(omega^omega + 1)(x), then, applies the function (x -> Bowers array of length x+3) x times! So F_(w^w + a)(x) > {3, x+1, a+1 [1] 2}. More generally, having a term w^(w*a + b) * i_ab in the index of the Wainer function corresponds to having a coefficient of i_ab + 1 in the b+1st element of the a+1st row (except when a = 0, in which case it is the b+3rd element of the first row). Taking it to the third dimension, a term w^(w^2*a + w*b + c) * i_abc in the index of the Wainer function corresponds to having a coefficient of i_abc + 1 in the c+1st element of the b+1st row of the a+1st plane. Similarly, w^(w^n * a) will correspond to Bowers' array notation extended to n+1 dimensions, with an a+1 in the first element of the second n-hyperplane.
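The row-and-element bookkeeping in this correspondence is easy to mechanize. A minimal sketch (the function name `position` is mine): given the exponent w*a + b of a term in the Wainer index, return the 1-indexed (row, element) slot it controls, with the a = 0 exception noted above.

```python
def position(a, b):
    """Array slot (row, element), 1-indexed, controlled by the term
    w^(w*a + b) of the Wainer index: the b+1st element of the a+1st row,
    except that for a = 0 the term lands in the b+3rd element of row 1."""
    if a == 0:
        return (1, b + 3)
    return (a + 1, b + 1)

print(position(0, 0))  # (1, 3): finite indices F_n live in the 3rd element
print(position(0, 2))  # (1, 5): an omega^2 term -> 5th element, first row
print(position(1, 0))  # (2, 1): omega^omega -> first element of row 2
```

The a = 0 branch encodes the "length n+3" shift: F_(omega^n)(x) corresponds to an array of length n+3, so the finite-row slots are offset by two relative to the higher rows.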

quickfur wrote:
And since 5-element arrays already reach Gamma_0, it's conceivable that somewhere in the n-dimensional array structure the small Veblen ordinal must have been passed, so Bowers' notation is well on the way to the large Veblen ordinal.


Given the above proof that 5-element arrays go up to only omega^3, does that change your opinion on how far Bowers' notation goes?

quickfur wrote:First, I would say that his notation definitely surpasses Gamma_0 by far. The thing to note about his system is that it doesn't map very well to traditional ordinal notations because what he's diagonalizing is not the function (or arithmetic operator, if you like to think of it that way), but rather the degree of diagonalization. Consider a generic binary operator #. Given a#b, where a and b are two numbers (for our purposes let them be natural numbers), there are three ways to create a new operator @ by diagonalization:

1) Left-diagonalize: a@b = (((a#a)#a)#a ... )#a (b occurrences of a). This is not very effective; when # is exponentiation, for example, @ is merely multiplying the exponent by b.

2) Right-diagonalize: a@b = a#(a#(a#...#a))) (b occurrences of a). This is more powerful; it takes exponentiation to tetration, and tetration to pentation, etc..

3) "Middle"-diagonalize: let n be the degree of #, defined as the index into the sequence exponentiation, tetration, pentation, ... etc.. Then we construct the new operator @ by evaluating a#a and plugging the result into the degree of the resulting operator. In other words, we're not merely going from exponentiation to tetration; we're making a quantum leap from exponentiation to n-ation where n = a#a. But where does b come into the picture? It represents the level of nesting in the evaluation of the degree of the resulting operation. That is, if b=3, what we do is evaluate n1=a#a, then construct an operator of degree n1, let's call it {#1}, then evaluate n2=a{#1}a, then construct an operator of degree n2, call it {#2}, then evaluate n3=a{#2}a, then construct an operator of degree n3, call it {#3}, then evaluate a{#3}a. So we're not merely diagonalizing function composition; we're diagonalizing the number of diagonalizations applied to each level of function composition. This is the essence of what Bowers is doing.

Let's take a concrete example. To make things easier to read, let's adopt the convention that operators will be denoted by braces {} in infix notation. For the base case, let {1} = exponentiation, so a{1}b = a^b. For each level in the Grzegorczyk hierarchy, we will increment the number inside the brace, so {2} = tetration, {3} = pentation, etc.. Now, consider a 3-element array in Bowers' notation: <3 4 5>. First off, this does not mean 3{4}5 at all!! What it means is you first evaluate 3^5 = 243, and create the operator {243}. Use that operator to evaluate n1=3{243}5. This gives a truly huge number -- remember that {243} is the 243'rd function in the Grzegorczyk hierarchy. Not mere heptation, but already far beyond it. Now create the operator {n1} and evaluate n2=3{n1}5. This is an even huger number, being created by the (n1)'th function in the Grzegorczyk hierarchy. Now create the operator {n2}, etc.. Repeat this process a total of 4 times, as indicated by the 2nd element in the 3-element array. The final value of the array, then, is 3{n4}5.


Actually, {3, 4, 5} = 3 {3} 4, as we showed above. It is true that we get iterations of the Knuth operator with four-element arrays -- {a,b,1,2} iterates the Knuth operator b times. But the same thing happens in the Wainer hierarchy! Noting that F_omega(x) = F_x(x) > 3 {x-1} (x+1), we see that F_(omega+1)(x) = F_omega^x(x) iterates the Knuth operator x times. So, for example, F_(omega+1)(6) > 3 {n_4} 5 in your example above, as

F_omega(6) = F_6(6) >> 243
F_omega^2(6) = F_omega(F_6(6)) > 3 {F_6(6) - 1} (F_6(6) + 1) >> 3 {243} 5 = n_1
F_omega^3(6) > 3 {F_omega^2(6) - 1} (F_omega^2(6) + 1) >> 3 {n_1} 5 = n_2
...
F_omega^6(6) > 3 {F_omega^5(6) - 1} (F_omega^5(6) + 1) >> 3 {n_4} 5

For another example, consider Graham's number: This is defined by G_1 = 3 {4} 3, and G_(n+1) = 3 {G_n} 3. Graham's number is G_64.

F_omega(5) = F_5(5) > 3 {4} 6 > 3 {4} 3 = G_1
F_omega^2(5) >= 3 {G_1} (G_1 + 2) > 3 {G_1} 3 = G_2
...
F_omega^64(5) >= 3 {G_63} (G_63 + 2) > 3 {G_63} 3 = G_64

So F_(omega+1)(64) > F_omega^64(5) > G_64 = Graham's number.
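As an aside, although Graham's number is hopelessly beyond computation, its last decimal digits are not: the trailing digits of a tower of 3s stabilize as the tower grows, and G_64 is 3^^(something gigantic) at its outermost level, so the stable tower digits are also Graham's. A sketch using Carmichael's lambda (the helper names are mine; the well-known trailing digits ...387 fall out):

```python
from math import lcm

def carmichael(n):
    """Carmichael's lambda(n), via trial-division factorization (n is tiny here)."""
    if n == 1:
        return 1
    lam, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            k = 0
            while n % d == 0:
                n //= d
                k += 1
            if d == 2:
                part = 1 if k == 1 else (2 if k == 2 else 2 ** (k - 2))
            else:
                part = d ** (k - 1) * (d - 1)
            lam = lcm(lam, part)
        d += 1
    if n > 1:                    # leftover prime to the first power
        lam = lcm(lam, n - 1)
    return lam

def tower_mod(h, m):
    """3^^h mod m (a tower of h threes). Plain Euler reduction is exact here
    because 3 is coprime to every modulus in the lambda-chain below 10^k."""
    if m == 1:
        return 0
    if h == 0:
        return 1 % m
    return pow(3, tower_mod(h - 1, carmichael(m)), m)

# Once the tower is taller than the lambda-chain, the digits freeze:
assert tower_mod(20, 1000) == tower_mod(21, 1000)
print(tower_mod(20, 1000))  # 387 -- the last three digits of Graham's number
```

The chain for m = 1000 is 1000 -> 100 -> 20 -> 4 -> 2 -> 1, so any tower of height 6 or more already has the stable residue; raising k in 10^k recovers more trailing digits the same way.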

As you can see, once we diagonalize at F_omega(x), the value of x now applies to the "middle" operator, so when we go to F_(omega+1)(x) we are iterating not only on the right but in the middle as well!

Then, of course, we have F_(omega+2)(x); this iterates the above function, which is itself iterating the middle and right numbers in 3 {a} b. And then F_(omega+3)(x) iterates over that function. Clearly, we can no longer represent the numbers we are getting with Knuth arrow notation.

But let's contract it back to right diagonalization again. Let a [0] b = a {b} b. Let a [1] b = a [0] (a [0] ... (a [0] a)...) with b a's. So even at [1], we get the middle diagonalization you describe above. Let a [n+1] b = a [n] (a [n] ... (a [n] a)...) with b a's. Then we have F_(omega+n)(x) ~ 3 [n] (x+1). So we have right diagonalization with this notation, even though it is MUCH faster growing than the middle diagonalization with Knuth notation. Now what happens at F_(omega*2+1)(x)? We get

F_(omega*2)(x) = F_(omega+x)(x) ~ 3 [x] (x+1)
F_(omega*2)^2(x) ~ F_(omega*2)(3 [x] (x+1)) ~ 3 [3 [x] (x+1)] ((3 [x] (x+1)) + 1)
...
F_(omega*2+1)(x) = F_(omega*2)^x(x) ~ a_x, where a_0 = x and a_(n+1) = 3 [a_n] (a_n + 1)

So in this new notation, where we contracted middle diagonalization to right diagonalization, we are back to middle diagonalization again! And this "middle diagonalization" is inconceivably more powerful than the previous version of middle diagonalization with Knuth operators. The same thing happens when we go to omega*3, and then omega*4, etc. Then, at omega^2 + 1, we get a function that iterates the map (x -> F_(omega*x)(x)) x times! So we have a totally new form of diagonalization. These increasingly powerful diagonalizations continue as we go up the ordinals, just as they do with Bowers' notation, except we can go MUCH further.

quickfur wrote:
So let's take a look at what's happening here. Consider the array <3 1 n>. The second element being 1 means a single level of nesting, so first we evaluate 3^n, then look up the (3^n)'th function in the Grzegorczyk hierarchy and use that to evaluate 3{3^n}n. As n varies, we go up the Grzegorczyk hierarchy in exponential steps. In other words, this simple array already transcends the Ackermann function. But this is only a single level of nesting. If the middle element were 2, for example, we first compute 3{3^n}n, then use that to find the next operator in the G hierarchy, then apply that operator to get our final result. At this point, we're climbing the G hierarchy not exponentially, not tetrationally, but at the speed of the (3^n)'th function of the G hierarchy. This is already far beyond any sort of diagonalization based on the Ackermann function, and this just by a mere increment of the second element of the array. Remember that each level of the G hierarchy represents a level of diagonalization, so we're abstracting away the process of diagonalization and making the number of diagonalizations grow at an incredible speed.


Hmm, I think the trouble here is that you are focusing on the Wainer hierarchy at finite ordinal indices. Clearly F_(omega+1)(n) far surpasses the functions you describe above.

quickfur wrote:
The Wainer hierarchy is based on right-diagonalization, that is, at each successor ordinal you're just making n recursive substitutions into the right operand of the operator in question. What Bowers has done is not to mess with the operands, but what amounts to messing with the ordinal index itself. So he's not merely going up the Wainer hierarchy in baby steps; he's leaping up with ever larger steps, each one ridiculously larger than the one before. The fact that he does this without even a notion of what an ordinal is, is something truly commendable. His only "fault" is that his scheme is not easily mapped to the ordinal indices produced by right-diagonalization (i.e. diagonalized function composition), which makes it easy for people to misunderstand what he's actually doing.


Actually, the scheme maps very easily to ordinal indices, as I proved above.
Deedlit
Mononian
 
Posts: 5
Joined: Sun Dec 25, 2011 10:58 am

Re: SSC2 [Split from "I live!"]

Postby quickfur » Wed Jan 04, 2012 8:28 pm

Deedlit wrote:[...] Hmmm, I disagree that ordinals are "cheating". The only reason ordinals are considered "transfinite" is because their ordering is more complicated than that of the integers. But the same is true of many sets that we consider "finite"! Take, for example, ordered pairs of natural numbers; under their natural ordering, they have the same order type as omega^2. Similarly finite tuples of natural numbers have order type omega^omega. So taking a function f(omega^n a_n + omega^(n-1) a_(n-1) + ... + a_0) is no different from taking a function f(a_n, a_(n-1), ..., a_0), and indeed, they can be matched up 1-1. And this ordering is in no way incidental: to show that Bowers linear arrays terminate, you must use the fact that finite tuples are well-ordered under their natural ordering. [...]

Point taken, though I have to nitpick that ordered pairs of natural numbers aren't at all finite, since the set of ordered pairs (or any tuple for that matter) has cardinality aleph_0.

Deedlit wrote:
[...] I took a look at your exploding tree function. It is quite interesting, but unfortunately, it does not come anywhere near spanning the functions indexed below Gamma_0. It sits at level omega+1 of the Wainer hierarchy.

We agree that, starting from a tree that goes one node to the left and then m nodes to the right, the final tree will be a right-branching tree of length about A(m), where A is the Ackermann function. So what about a tree that goes two nodes to the left and then m nodes to the right? Well, your transformation rule only applies to subtrees that are double right-branching chains, so the same thing happens to the left subtree as happened in our previous example: you get a right-branching chain of length A(m). So the full tree will be a double right-branching chain with A(m) left nodes, which, by the previous argument, transforms into a right-branching chain of length about A(A(m)) nodes. Similarly, a tree that goes three nodes to the left and then m nodes to the right will transform into a right-branching tree of length about A(A(A(m))) nodes, and a tree that goes m nodes to the left and then m nodes to the right will transform into a right-branching tree of length about A^m(m), i.e. A iterated m times. This is the omega+1st level of the Wainer hierarchy.
[...]

Hmmm. I haven't looked more closely at this yet, but it appears that I have made a mistake in my definition of the ETF, which causes it to be weaker than I had intended it to be. It's quite possible that I was misled regarding Bowers' notation because of my mistaken assumptions about the growth rate of ETF.

Basically, ETF was inspired by the Veblen hierarchy. The idea was to map Veblen ordinals into the finite realm by, roughly speaking, substituting finite values for omega in a given ordinal. The connection with binary trees comes from the observation that the magnitude of a Veblen ordinal is dominated by the subscript of the phi function, and that the subscripts themselves can be expressed as Veblen ordinals. If, therefore, we "expand" the subscript of a given Veblen function in terms of smaller Veblen functions, we can, at least for the major landmarks in the Veblen hierarchy, construct an expression tree consisting of phi's and 0. The fixed point Gamma_0, then, basically corresponds to a tree with an infinite chain of phi's.

The tree transformation rule, which appears to be the cause of the unintended weakness of ETF, was supposed to mimic the effect of ordinal collapse when one truncates the fundamental sequence of a given Veblen ordinal at a finite height. (So yes, I "cheated" by my own criterion: I'm riding on the ordinal hierarchy and dropping into the finite realm to get very large numbers and fast-growing functions.) The underlying premise was basically to collapse the Veblen ordinals into finite numbers by truncating each fundamental sequence at finite height, thereby eliminating all the omegas.

So really, the original intention was essentially the same as the Wainer hierarchy (at least up to Gamma_0, anyway), except that I wanted to express it in a form that does not explicitly require ordinals, since it is not immediately obvious how one might go about computing a function subscripted by, say, Gamma_0. I wanted to see if it was possible to express what amounts to f_{Gamma_0} in the Wainer hierarchy in terms of elementary operations.

That was when I first learned about the Veblen hierarchy. Since then, I have learned about Veblen's own generalizations up to the large Veblen ordinal, and have considered the possibility of extending ETF up to the large Veblen ordinal by using a particular kind of n-ary tree. Assuming that the tree transformation rule actually does what I thought it does, of course, which appears to be in question right now. Hmph.
quickfur
Pentonian
 
Posts: 2955
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North
