SSC2 [Split from "I live!"]

Discussion of tapertopes, uniform polytopes, and other shapes with flat hypercells.

SSC2 [Split from "I live!"]

Postby quickfur » Thu Nov 17, 2011 9:13 pm

Hmm, just started looking at SSC2... seems you're missing the demicube family in dimension > 4, and apparently also the Gosset polytopes up to dimension 8 (or 7 if you discount n-space tilings). Those things coincide with cube/cross/prism derivatives in lower dimensions, but have distinct symmetry groups in 5D up to 8D.

I think Jonathan Bowers mentioned to me at one time that there are a few other sporadics in higher dimensions, IIRC 24D or 36D may have some polytopes of unusually high symmetry due to their correspondence with exceptional mathematical objects. Alternatively you could exclude sporadics... but then the icosahedral polytopes may be regarded as sporadic too, since they only go up to 4D.
quickfur
Pentonian
 
Posts: 2955
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

Re: I live!

Postby quickfur » Thu Nov 17, 2011 9:42 pm

Hmm, you're also missing sporadic uniform polytopes like the snub cube, snub dodecahedron, snub 24-cell, grand antiprism.

The snub cube and snub dodecahedron can be obtained, topologically, by alternating the great rhombicuboctahedron and the great rhombicosidodecahedron, respectively. The snub 24-cell can be obtained, topologically, by alternating the truncated 24-cell.
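As a sketch of what alternation does combinatorially: 2-colour the vertices (possible whenever the edge graph is bipartite, i.e. all faces are even-sided) and keep one colour class. The helper below is invented for illustration and is shown on the cube, whose alternation is the tetrahedron:

```python
def alternate_vertices(num_vertices, edges):
    """2-colour the vertex graph and keep one colour class -- the
    vertex set of the alternated polytope.  Assumes a connected
    graph; raises if the graph has an odd circuit (not alternatable)."""
    colour = {}
    stack = [(0, 0)]
    while stack:
        v, c = stack.pop()
        if v in colour:
            if colour[v] != c:
                raise ValueError("not alternatable (odd circuit)")
            continue
        colour[v] = c
        for a, b in edges:
            if a == v:
                stack.append((b, 1 - c))
            elif b == v:
                stack.append((a, 1 - c))
    return [v for v in range(num_vertices) if colour.get(v) == 0]

# Cube: alternating its 8 vertices leaves the 4 vertices of a tetrahedron.
cube_edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 7), (7, 4),
              (0, 4), (1, 5), (2, 6), (3, 7)]
kept = alternate_vertices(8, cube_edges)
assert len(kept) == 4
```

This only gives the surviving vertices; the geometric step of relaxing the result back to uniformity (where possible) is a separate problem.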

The grand antiprism is actually part of a family of duocylinder-like antiprismic polytopes, and is the only uniform member. Basically, you can take 2m copies of any n-gonal antiprism, make two rings of them with m members each, and connect the rings with tetrahedra. Most of the time you will not get a uniform polytope, and there will be two distinct kinds of tetrahedra, but the shape will close up.
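A rough coordinate sketch of this ring construction (the helper is invented; the radii and twist are illustrative only, not tuned to make anything uniform):

```python
import math

def antiprism_ring(m, n, r1=1.0, r2=1.0):
    """One ring of m n-gonal antiprisms in 4D: m n-gon cross-sections
    spaced around a great circle, alternately twisted by pi/n (so m
    must be even for the ring to close up consistently)."""
    verts = []
    for i in range(m):
        phi = 2 * math.pi * i / m          # position around the ring
        twist = (i % 2) * math.pi / n      # alternate sections are twisted
        for j in range(n):
            theta = 2 * math.pi * j / n + twist
            verts.append((r1 * math.cos(phi), r1 * math.sin(phi),
                          r2 * math.cos(theta), r2 * math.sin(theta)))
    return verts

# Grand antiprism case: two rings of 10 pentagonal antiprisms each,
# the second ring lying in the swapped pair of coordinate planes.
ring1 = antiprism_ring(10, 5)
ring2 = [(z, w, x, y) for (x, y, z, w) in antiprism_ring(10, 5)]
assert len(ring1) + len(ring2) == 100  # the grand antiprism has 100 vertices
```

With r1 = r2 every vertex lies on the same 3-sphere, which is why the construction at least has a chance of producing something vertex-transitive.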

A somewhat related family is the alternated 2m,2n-duoprisms. These have almost the same structure, except that the antiprismic cells from either ring are not completely separated; they do share vertices with each other, with tetrahedra inserted between them. I have yet to verify this, but I believe the grand antiprism family might arise from alternating the runcinations of the 2n,2n-duoprisms.

Re: I live!

Postby Keiji » Thu Nov 17, 2011 10:13 pm

quickfur wrote:Hmm, you're also missing sporadic uniform polytopes like the snub cube, snub dodecahedron, snub 24-cell, grand antiprism.


They are defined as Ko0, Ki0 and Kk0 respectively. The Grand Antiprism does not have an SSC2 symbol, though (unless it's an alternation of some sort).

I do believe I was working on an SSC3 at some point. I forgot what I was doing with it, though...
Keiji
Administrator
 
Posts: 1985
Joined: Mon Nov 10, 2003 6:33 pm
Location: Torquay, England

Re: I live!

Postby quickfur » Thu Nov 17, 2011 10:25 pm

Oh really? I would've expected Ko0 to be just a single point. :) But I guess we might as well reuse it for something useful. :)

What about distinguishing between binary and decimal notation for the truncates? I find something like Ks0011 to be a lot more readable than Ks3, since to understand what Ks3 is you have to mentally expand the 3 into binary first, and then map it to the CD diagram. (In my own notation I just use a single letter for the symmetry family, and let the number of digits in the binary number specify the dimension, so Ks3 would be H0011, Ko6 would be C110, Ke12 would be C1100, etc..)
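The decimal-to-binary expansion described above is easy to mechanize; here is a minimal sketch (the helper name `ringed_nodes` is invented for illustration):

```python
def ringed_nodes(index, dim):
    """Expand a decimal truncate index into the string of ringed (1)
    and unringed (0) nodes on a dim-node CD diagram."""
    return format(index, "0{}b".format(dim))

assert ringed_nodes(3, 4) == "0011"    # Ks3  <-> H0011
assert ringed_nodes(6, 3) == "110"     # Ko6  <-> C110
assert ringed_nodes(12, 4) == "1100"   # Ke12 <-> C1100
```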

Keiji wrote:I do believe I was working on an SSC3 at some point. I forgot what I was doing with it, though...

lol...

Re: I live!

Postby Keiji » Thu Nov 17, 2011 10:38 pm

quickfur wrote:Oh really? I would've expected Ko0 to be just a single point. :) But I guess might as well reuse it for something useful. :)


I'm not the first person to use Dx 0 for snubs.

What about distinguishing between binary and decimal notation for the truncates?


Binary is too long-winded. SSC2 wasn't designed to be readable; it was designed to be compact yet well-defined.

Keiji wrote:I do believe I was working on an SSC3 at some point. I forgot what I was doing with it, though...


I had a look around at some old files, and reminded myself what happened: I played around with several different ideas, none of which managed to include the rhodomorphs, and then I found out about incidence matrices and abandoned all that, intending to come back to it when I'd figured out an algorithm to apply the brick product operation to incidence matrices and give the incidence matrix for the result.

I still haven't figured out such an algorithm...

I also put some work into extending incidence matrices to curved shapes by associating a boolean property with each facet type. Although I did manage to get a number of good examples, I remember hitting an ambiguity and then abandoning the project. It would be nice to get something working on that front, too, because that combined with a generalized brick product algorithm (since the brick product works for curved shapes too) would give us one combined yet well-defined representation and operation for ALL convex shapes: CRF polytopes, toratopes, tapertopes, bracketopes, and so on.

By the way, in case you forgot/didn't know in the first place, the brick product is a generalized product which allows making prisms (including hypercubes), duoprisms, tegums (including cross polytopes), crinds (including hyperspheres), powertopes, and all sorts of other things. Pretty much the only convex shapes it can't generate are pyramids and the various non-hypercube-related uniform polytopes.

Re: I live!

Postby quickfur » Thu Nov 17, 2011 11:41 pm

One thing to keep in mind if you're going to start using abstract polytopes for representation: many abstract polytopes (I would even venture to say most) cannot be realized geometrically at all. Some are "normal" geometric polytopes. Some are toroidal tilings (which amount to toroidal polytopes, though you have to keep in mind that not all of them can be realized as polytopes with flat surcells: for example, the regular abstract polytope with 3 hexagonal faces; yes, it's regular) or tilings of other spaces of non-zero genus. Some are tilings of projective n-spaces (e.g. Coxeter's 11-cell and 57-cell). Some are complex polytopes (i.e., where vector components are complex numbers; read this to get some idea of how strange these are). And some are plain impossible to describe in any sort of space whatsoever; the only meaningful thing you can do with them is manipulate their poset structure.

What's the definition of a brick product? I've heard it before but never really understood what it was. I tried searching in the wiki but couldn't find an actual definition.

Re: I live!

Postby Keiji » Thu Nov 17, 2011 11:59 pm

quickfur wrote:One thing to keep in mind if you're going to start using abstract polytopes for representation: many abstract polytopes cannot be realized geometrically at all.


This doesn't bother me: the ASCII string "5Gxv{3" doesn't represent a valid shape in SSC2, but each individual symbol does have a meaning in the notation. Similarly, there are a lot of symmetric matrices which don't represent valid shapes.

What's the definition of a brick product? I've heard it before but never really understood what it was. I tried searching in the wiki but couldn't find an actual definition.


The brick product is written P{A^i, B^j, C^k, ...} where P is a brick, other capital letters are shapes and lowercase letters are natural numbers which act as exponents. The exponents can be omitted, and using an exponent is equivalent to repeating the shape in the sequence that many times. The number of dimensions of P must be equal to the number of individual shapes inside the braces (once exponents are resolved to multiple copies of shapes).

A brick is simply a shape with brick symmetry. Examples of what shapes count as bricks can be found here.

I do not have a formal definition of the result of the brick product for arbitrary P - this is something I have been searching for, for a long time. However, we can go by way of example:

* If P is a hypercube with the appropriate dimension, the result is the Cartesian product.
* If P is a hypersphere with the appropriate dimension, the result is the crind product (RSS).
* If P is a cross polytope with the appropriate dimension, the result is the tegum product.

Powertopes are often written in the form A^P. As a brick product, this is written P{A^n}, where P is n-dimensional. For example, the Hexagonal octagoltriate, hexagon^octagon, can be written as octagon{hexagon^2}.
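For what it's worth, the three special cases above share a pattern that suggests one possible general rule (purely speculative, and not the missing formal definition): measure each factor point by its shape's gauge (the Minkowski functional, with g(x) <= 1 exactly when x is inside the shape), and test whether the vector of gauges lands inside the brick P. All names below are invented for illustration:

```python
import math

# Gauges (Minkowski functionals) of a few centred convex bodies;
# g(v) <= 1 iff v is inside the body.
def gauge_square(v):      # unit hypercube: max-norm
    return max(abs(c) for c in v)

def gauge_circle(v):      # unit hypersphere: Euclidean norm (RSS)
    return math.sqrt(sum(c * c for c in v))

def gauge_diamond(v):     # unit cross polytope: sum of |c|
    return sum(abs(c) for c in v)

def in_brick_product(p_gauge, factor_gauges, point_parts):
    """Hypothetical rule: a point is inside P{A, B, ...} iff the
    vector of factor gauges lies inside the brick P."""
    radii = [g(part) for g, part in zip(factor_gauges, point_parts)]
    return p_gauge(radii) <= 1.0

# P = square  -> Cartesian (prism) product of two digons: inside
assert in_brick_product(gauge_square, [gauge_circle, gauge_circle],
                        [(0.9,), (0.9,)])
# P = circle  -> crind (RSS) product: the same point falls outside
assert not in_brick_product(gauge_circle, [gauge_circle, gauge_circle],
                            [(0.9,), (0.9,)])
# P = diamond -> tegum product: stricter still
assert not in_brick_product(gauge_diamond, [gauge_circle, gauge_circle],
                            [(0.9,), (0.9,)])
```

Under this reading, P's brick symmetry is what makes the rule well-defined: only the absolute sizes of the factor gauges matter, which is exactly the symmetry a brick has.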

Re: I live!

Postby quickfur » Fri Nov 18, 2011 12:33 am

Well, the examples you gave are already in the wiki. But they don't really tell you what happens when P is something else, say a cuboid or a rhombicuboctahedron. Do you know of any other cases that might be useful in understanding what exactly it does?

Re: I live!

Postby Keiji » Fri Nov 18, 2011 12:42 am

I'll be able to give you concrete examples of what happens when P is Ko5 and Ko3 after a good sleep and setting my mind to it.

Also, I just recalled that both P and the other shapes can be non-convex, but I'm not sure if there are any additional restrictions for non-convex shapes. I like to keep things restricted to convex only for simplicity most of the time anyway.

For P being a cuboid, the result is topologically identical to when P is a cube - the various components are just stretched according to how the cuboid was stretched from a cube.

Re: I live!

Postby quickfur » Fri Nov 18, 2011 12:43 am

Also, is a hexagon considered to have brick symmetry? Because its coordinates can be written (0,±a), (±b,±c) where c<a. Why are the octagoltriates the "simplest" powertopes?

Re: I live!

Postby Keiji » Fri Nov 18, 2011 1:17 am

quickfur wrote:Also, is a hexagon considered to have brick symmetry? Because its coordinates can be written (0,±a), (±b,±c) where c<a.


Yes, a hexagon is a brick.

Why are the octagoltriates the "simplest" powertopes?


Because I wrote that before realizing that hexagons were bricks. :P

However, although octagoltriates have more facets than hexagoltriates, I think they're more aesthetically pleasing - there is a symmetry between the two subordinate shapes (in that swapping them still obtains a result congruent to the original result) that hexagoltriates do not have. i.e. octagon{hexagon, decagon} is congruent to octagon{decagon, hexagon}, but hexagon{octagon, decagon} is not congruent to hexagon{decagon, octagon}.

Re: I live!

Postby quickfur » Fri Nov 18, 2011 1:20 am

Keiji wrote:
quickfur wrote:One thing to keep in mind if you're going to start using abstract polytopes for representation: many abstract polytopes cannot be realized geometrically at all.


This doesn't bother me: the ASCII string "5Gxv{3" doesn't represent a valid shape in SSC2, but each individual symbol does have a meaning in the notation. Similarly, there are a lot of symmetric matrices which don't represent valid shapes.

The problem is that for abstract polytopes, it's non-trivial to compute, given an incidence matrix, whether it is embeddable in some kind of space. Invalid SSC2 strings are easy to see; but as far as I know, there's no algorithm for deciding whether some given abstract polytope is embeddable. You could potentially end up working out all sorts of properties of some object, only to discover months down the road that it actually is non-embeddable.

Keiji wrote:[...]
I'm not the first person to use Dx 0 for snubs.

You could assign Ks0 to be the grand antiprism, since it's remotely related to the 600-cell (having a subset of the 600-cell's vertices), and there are no snub-like uniform members of the 600-cell family.

Keiji wrote:[...]
Because I wrote that before realizing that hexagons were bricks. :P

Actually, all even polygons have brick symmetry, now that I think about it. I think the wiki says that they must be powers of 2, but that's wrong. :)

Re: I live!

Postby Keiji » Fri Nov 18, 2011 1:28 am

If you are given an incidence matrix, that raises the question of how the incidence matrix was derived. If it's just a random sequence of numbers, then of course it's unlikely to be a valid shape. But otherwise, it would likely have either been calculated for a particular shape, or an algorithm will have computed it from the incidence matrices of simpler shapes; in both of these cases, you already know it's a valid shape.

Also, if you really think invalid SSC2 is easy to see, consider the following strings:
1a. "G4{G6,G4}{G3,G10}"
1b. "G4{G3,G4}{G6,G10}"
2a. "Ke3{G4,G6,G10,G22}"
2b. "Kp3{G4,G6,G10,G22}"
2c. "Ko3{G4,G6,G10,G22}"

Of these, 1a and 2a are valid and the others are invalid. It's not immediately obvious why, even though it can be determined by an algorithm.

Re: I live!

Postby Keiji » Fri Nov 18, 2011 1:31 am

quickfur wrote:
Keiji wrote:I'm not the first person to use Dx 0 for snubs.

You could assign Ks0 to be the grand antiprism, since it's remotely related to the 600-cell (having a subset of the 600-cell's vertices), and there are no snub-like uniform members of the 600-cell family.


Yes, I suppose so.

quickfur wrote:
Keiji wrote:Because I wrote that before realizing that hexagons were bricks. :P

Actually, all even polygons have brick symmetry, now that I think about it. I think the wiki says that they must be powers of 2, but that's wrong. :)


I've just corrected it.

Re: I live!

Postby quickfur » Fri Nov 18, 2011 1:56 am

OK, first, an incidence matrix has binary entries. :) So you can interpret it as a bitmap.

Secondly, I'm wrong (somewhat) about the embeddability problem. Every abstract polytope with n vertices can be embedded as an (n-1)-simplex. However, this simplex, in general, will be missing some surtopes, and so will not be a "faithful" representation of the abstract polytope. (For example, a 600-cell can be represented as a 119-dimensional simplex missing some subset of its surtopes... but obviously such a representation is not "faithful" to the "true" structure of the 600-cell. For one thing, its dual is representable as a 599-dimensional simplex with some surtopes missing. Their dual relationship is not at all clear under this representation.) Whether a given n-dimensional abstract polytope can be faithfully represented in n-space is an open problem, so good luck trying to find an algorithm for it. (And we're not even talking about whether it's embeddable in more exotic spaces like toroids, projective spaces, complex spaces, or something else. Like projective octonion space, maybe. :P)

Thirdly, if you are already deriving the incidence matrix by some other means, then why bother with the matrix representation at all? Wouldn't it be better to use the most convenient representation (presumably the one you used to derive it) instead?

Re: I live!

Postby Keiji » Fri Nov 18, 2011 2:04 am

quickfur wrote:OK, first, an incidence matrix has binary entries. :) So you can interpret it as a bitmap.


I'm not using the raw incidence matrices, I'm using the "compressed" ones. Here's an example.

if you are already deriving the incidence matrix by some other means, then why bother with the matrix representation at all? Wouldn't it be better to use the most convenient representation (presumably the one you used to derive it) instead?


It gives a generalized notation. The problem with using specific notations is that when you want to add an operator to them, you suddenly have to invent new syntax, and then you discover that you can write something that may or may not be a valid shape, and potentially end up in a big mess like my infamous rotope construction chart. With a generalized notation, you don't have to worry about all that.

Plus, there are some obvious advantages to using the (compressed) incidence matrices: you can read off element counts and facet types very easily (especially if it's annotated like at the example link above), and the dual is very easily calculated. Storing the matrix is also very simple, because you once again don't need to rely on a complex, changing notation.
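The easy-dual claim can be illustrated on plain configuration matrices (one element class per dimension; a sketch, not necessarily the exact compressed format in question): taking the dual just reverses the order of the element classes.

```python
def dual_matrix(m):
    """Dual of a configuration matrix with one element class per
    dimension: reverse the class order in both rows and columns."""
    return [row[::-1] for row in m[::-1]]

# Cube: 8 vertices (on 3 edges, 3 faces each), 12 edges, 6 faces.
cube = [[8, 3, 3],
        [2, 12, 2],
        [4, 4, 6]]
# Octahedron: 6 vertices (on 4 edges, 4 faces each), 12 edges, 8 faces.
octahedron = [[6, 4, 4],
              [2, 12, 2],
              [3, 3, 8]]
assert dual_matrix(cube) == octahedron
```

When there are several classes per dimension the same idea applies blockwise, with the class groups reversed as wholes.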

Re: I live!

Postby quickfur » Fri Nov 18, 2011 3:10 am

Keiji wrote:
quickfur wrote:[...]
if you are already deriving the incidence matrix by some other means, then why bother with the matrix representation at all? Wouldn't it be better to use the most convenient representation (presumably the one you used to derive it) instead?


It gives a generalized notation. The problem with using specific notations is that when you want to add an operator to them, you suddenly have to invent new syntax, and then you discover that you can write something that may or may not be a valid shape, and potentially end up in a big mess like my infamous rotope construction chart. With a generalized notation, you don't have to worry about all that.

The thing about notations is, they cannot be considered in isolation; you need to also consider what kind of operations you want to do with them. Some notations are good for some operations but not others, and vice versa for other notations.

For example, incidence matrices are ideal if you want to count surtopes, etc. But they're also a pain in the butt if you want to, for example, find out how they are constructed. I mean, given the incidence matrix for, say, the rhombicuboctahedron, would you be able to tell that the CD diagram is x4o3x? On the other hand, using CD diagrams as notation makes it hard to count surtopes, and not every polytope can be represented that way. Incidence matrices are also a pain when you want to do operations like truncation or expansion, etc. (you have to introduce new surtopes, calculate new ridges, peaks, edges, etc., assign them to a matrix row/column, etc.), whereas Conway's polyhedron operators let you do it just by prefixing a letter or two. So it depends on what your goal is. Are you looking for ease of certain operations, conciseness of notation (which incidence matrices are not), ease of comparison, or what?

Plus, doing operations with compressed incidence matrices adds complexity to algorithms because you need to decompress, recompress, etc.. Sure, duals are trivial to calculate. What about runcinations? Omnitruncation? Or coordinates? So you really should set down a list of goals for your notation before deciding how to do it; it will save you a lot of pain later. :)

Re: I live!

Postby quickfur » Fri Nov 18, 2011 3:11 am

P.S. An octahedral pyramid is used for ... the first step in constructing a 5-cross. :XD:

Re: SSC2 [Split from "I live!"]

Postby Keiji » Fri Nov 18, 2011 11:21 am

quickfur wrote:Plus, doing operations with compressed incidence matrices adds complexity to algorithms because you need to decompress, recompress, etc.


Actually, no it doesn't. The prism and pyramid generating algorithms I discovered work directly on compressed matrices. The compressed matrices are far more readable and easier to work with than the uncompressed ones.

I suppose my goal in using incidence matrices was uniqueness. Each shape should be representable by only one incidence matrix, and each incidence matrix should represent at most one valid shape (in a given embedding). With SSC2, for example, the following all represent a tesseract: Ke8, +Ko4, ++G4, G4 x G4, G4{G4,G4}, G4{G4^2}, G4^G4, Ko4 $ Ko4, (G4 $ G4) $ (G4 $ G4), +(G4 $ G4), (+G4) $ (+G4). That's 11 different representations of the same shape that I came up with off the top of my head. With incidence matrices, if two different expressions evaluate to the same resulting matrix, you know that it's because the two different constructions give you the same shape. There is no way to "evaluate" SSC2 expressions, other than just thinking about and saying "yes, I know the prismatoid between two squares is a cube" and so on.

Re: SSC2 [Split from "I live!"]

Postby quickfur » Fri Nov 18, 2011 7:12 pm

What about comparing incidence matrices? Given two arbitrary (compressed) matrices, how do you tell whether they represent the same object? Non-symmetrical objects may have their j-faces come out in different ordering, for example. Or am I misunderstanding your compressed notation?

Re: SSC2 [Split from "I live!"]

Postby Keiji » Fri Nov 18, 2011 7:22 pm

The compressed notation is as shown on the polytope explorer pages. It's been a while since I touched it, but I believe that swapping rows/columns in the same "group" (rows/columns of the same length) does not change the polytope; thus, one can compare for all such possible permutations, or use a canonical ordering.
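The brute-force comparison could be sketched as follows (a hypothetical implementation; the matrices are ordinary configuration-style counts for the triangular prism, whose two edge classes and two face classes form the interchangeable groups):

```python
from itertools import permutations, product

def matrices_match(m1, m2, groups):
    """Brute-force: do m1 and m2 coincide after permuting element
    classes within each group (applied to rows and columns alike)?
    Cost is the product of the group factorials."""
    n = len(m1)
    for perms in product(*(permutations(g) for g in groups)):
        remap = {}
        for g, p in zip(groups, perms):
            remap.update(zip(g, p))
        if all(m1[i][j] == m2[remap[i]][remap[j]]
               for i in range(n) for j in range(n)):
            return True
    return False

# Triangular prism classes: vertices, triangle edges, side edges,
# triangle faces, square faces.
prism = [[6, 2, 1, 1, 2],
         [2, 6, 0, 1, 1],
         [2, 0, 3, 0, 2],
         [3, 3, 0, 2, 0],
         [4, 2, 2, 0, 3]]
# Same prism with the edge classes and the face classes each listed
# in the opposite order:
prism2 = [[6, 1, 2, 2, 1],
          [2, 3, 0, 2, 0],
          [2, 0, 6, 1, 1],
          [4, 2, 2, 3, 0],
          [3, 0, 3, 0, 2]]
groups = [[0], [1, 2], [3, 4]]
assert matrices_match(prism, prism2, groups)
```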

On the brick product front, I have discovered that the torus is a valid value for P.

For example, torus{square, digon, digon} gives a 4D shape where the 3D cross-section is a crind with a smaller crind removed from the center, and as you move the cross section, the outer crind becomes smaller and the inner one becomes larger, until they meet at the ridge of the shape. The order of the arguments matters; assuming torus is defined as ((II)I) as opposed to (I(II)), then although torus{square, digon, digon} and torus{digon, square, digon} are the same, torus{digon, digon, square} is totally different, although I haven't yet worked out what this shape would be.

Re: SSC2 [Split from "I live!"]

Postby quickfur » Fri Nov 18, 2011 7:51 pm

Keiji wrote:[...] thus, one can compare for all such possible permutations, or use a canonical ordering.

And this is where you will have trouble. :) Comparing all possible permutations is infeasible for large polytopes, because the number of permutations is exponential, O(n!).

And canonical ordering may end up being just as expensive (or maybe less, but still significant), because you essentially have to permute the rows/columns into a prespecified order, but because the numbers reference row/column numbers, it's not obvious what exactly is canonical for a given polytope. You may end up having to examine the entire structure of the polytope before you can decide what is canonical, and then you have to go and swap those rows/columns around and renumber things.

On the brick product front, I have discovered that the torus is a valid value for P.

And not just the regular torus: if my understanding is correct, any affine transform of the torus qualifies too. So you can, for example, do brick products with a slant-squished torus with a 0-radius hole. I'm sure that 0-radius hole will cause headaches, if the slant-squish transform doesn't. :evil:

For example, torus{square, digon, digon} gives a 4D shape where the 3D cross-section is a crind with a smaller crind removed from the center, and as you move the cross section, the outer crind becomes smaller and the inner one becomes larger, until they meet at the ridge of the shape. The order of the arguments matters; assuming torus is defined as ((II)I) as opposed to (I(II)), then although torus{square, digon, digon} and torus{digon, square, digon} are the same, torus{digon, digon, square} is totally different, although I haven't yet worked out what this shape would be.

Hmm. I still don't understand how exactly the shape of the torus/square/digon correspond to each other in the product. And I'm still in the dark about why P must have brick symmetry.

Re: SSC2 [Split from "I live!"]

Postby Keiji » Fri Nov 18, 2011 9:19 pm

quickfur wrote:
Keiji wrote:[...] thus, one can compare for all such possible permutations, or use a canonical ordering.

And this is where you will have trouble. :) Comparing all possible permutations is infeasible for large polytopes, because the number of permutations is exponential, O(n!).


Even though it's exponential, it should still be fast enough for any polytope whose compressed incidence matrix fits on my (1080p) screen! And I can't really think of a reason why we'd want to mess with polytopes more complicated than that, at least in the near future.

the numbers reference row/column numbers


Sorry, what? I'm pretty sure they don't. The numbers inside the matrix have nothing to do with row and column numbers; the information is combined by such methods as picking a starting cell, then reading cells with the same row number as the original cell's column number. Row and column numbers are never combined or compared with the number that actually occupies a cell.

it's not obvious what exactly is canonical for a given polytope. You may end up having to examine the entire structure of the polytope before you can decide what is canonical, and then you have to go and swap those rows/columns around and renumber things.


And in existing notations we already have to do this anyway, so how is it a disadvantage?

On the brick product front, I have discovered that the torus is a valid value for P.

And not just the regular torus, if my understanding is correct, any affine transform of the torus qualifies too. So you can, for example, do brick products with a slant-squished torus with a 0-radius hole, for example. I'm sure that 0-radius hole will cause headaches, if the slant-squish transform doesn't. :evil:


You can stretch it along an axis, yes, but you can't do something like slanting, because that would break the brick symmetry. Remember, brick symmetry mandates all sign permutations, not just the one with all coordinates negated.

Hmm. I still don't understand how exactly the shape of the torus/square/digon correspond to each other in the product.


The torus appears as the crindal hole through the product. The square and digon correspond to the same parts of the crind-like product that they do in the original crind - remember, a crind is the RSS (circle as P) product of a square and a digon.

And I'm still in the dark about why P must have brick symmetry.


I never really arrived at a proper explanation of this, but I think it's something to do with uniqueness of the product. If you used a non-brick as P, say, a pentagon, the product would not be well defined, there would probably be multiple different possibilities (like how the square root has both a positive and negative result), and they would probably be rather ugly shapes too.

Re: SSC2 [Split from "I live!"]

Postby quickfur » Fri Nov 18, 2011 10:51 pm

Keiji wrote:
quickfur wrote:
Keiji wrote:[...] thus, one can compare for all such possible permutations, or use a canonical ordering.

And this is where you will have trouble. :) Comparing all possible permutations is infeasible for large polytopes, because the number of permutations is exponential, O(n!).


Even though it's exponential, it should still be fast enough for any polytope whose compressed incidence matrix fits on my (1080p) screen! And I can't really think of a reason why we'd want to mess with polytopes more complicated than that, at least in the near future.

What about the omnitruncated 120-cell, which has 14400 vertices? And the omnitruncated 10-cube ...?

the numbers reference row/column numbers


Sorry, what? I'm pretty sure they don't. The numbers inside the matrix have nothing to do with row and column numbers; the information is combined by such methods as picking a starting cell, then reading cells with the same row number as the original cell's column number. Row and column numbers are never combined or compared with the number that actually occupies a cell.

Obviously I still don't quite understand how the notation works. :oops: I tried looking at your link but it's not obvious what exactly the numbers refer to.

it's not obvious what exactly is canonical for a given polytope. You may end up having to examine the entire structure of the polytope before you can decide what is canonical, and then you have to go and swap those rows/columns around and renumber things.


And in existing notations we already have to do this anyway, so how is it a disadvantage?

Existing notations are more compact, for one, so it would be faster to apply algorithms to them. Requiring a potentially exponential algo is unacceptable IMHO. Even with SSC2, the number of shapes with multiple descriptions is relatively small; you could just compile a table of equivalences indicating which of several alternatives is considered canonical. Then comparison would be O(n) modulo a lookup in that table for the relatively rare exceptional cases. This would be far more efficient than attempting to canonicalize a full-fledged incidence matrix.

The one advantage of incidence matrices, though, is that they can handle literally any polytope you throw at them -- normal polytopes, projective polytopes, complex polytopes, abstract polytopes, anything. It will be kinda hard to tell if a given matrix represents a valid incidence matrix (you have to walk the entire structure to make sure there are no weird parts), and not all curved shapes may be representable, etc. But if that's not a priority, then by all means use incidence matrices. :)

All I'm saying is that you need to decide what is important and what isn't, when choosing a representation.

[...] So you can, for example, do brick products with a slant-squished torus with a 0-radius hole. I'm sure that 0-radius hole will cause headaches, if the slant-squish transform doesn't. :evil:


You can stretch it along an axis, yes, but you can't do something like slanting, because that would break the brick symmetry. Remember, brick symmetry mandates all sign permutations, not just the one with all coordinates negated.

Oh, right. :oops: What about the 0-radius torus hole though?

Or, for that matter, what about a random cloud of points (e.g. a Cantor set) in one octant reflected across the coordinate planes so that they have brick symmetry?

I guess I'm just trying to understand how the brick product works: what it does with some given point (a,b,c) in P and what this point does to the arguments in the product.

[...] The torus appears as the crindal hole through the product. The square and digon correspond to the same parts of the crind-like product that they do in the original crind - remember, a crind is the RSS (circle as P) product of a square and a digon.

I'll have to admit I'm not really following here. Why does the torus produce a hole, and why is the hole a crind?

And I'm still in the dark about why P must have brick symmetry.


I never really arrived at a proper explanation of this, but I think it has something to do with uniqueness of the product. If you used a non-brick as P, say a pentagon, the product would not be well defined: there would probably be multiple different possibilities (like how the square root has both a positive and a negative result), and they would probably be rather ugly shapes too.

I guess I won't fully understand this before I understand just what exactly the brick product does. :)
quickfur
Pentonian
 
Posts: 2955
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

Re: SSC2 [Split from "I live!"]

Postby Keiji » Fri Nov 18, 2011 11:07 pm

I think you ought to re-read the Wikipedia page on this subject that I linked to before, because you seem to be misunderstanding something to do with the compressed incidence matrix format.

quickfur wrote:What about the omnitruncated 120-cell, which has 14400 vertices? And the omnitruncated 10-cube ...?


As you know, the cosmochoron has 600 vertices. Yet there are only four rows and columns in this table: one per dimension, because the shape is regular. A uniform polytope won't have very many more rows/columns than a regular one.
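One way to see why so few rows suffice: each row/column of the compressed matrix stands for a whole orbit of elements under the symmetry group, and the orbit counts must satisfy a double-counting identity. A sketch using the cube; the dictionary encoding is my own illustrative assumption, not the format from the Wikipedia article.

```python
# Element counts for the cube: one orbit per dimension, since it's regular.
N = {"v": 8, "e": 12, "f": 6}

# M[a][b]: how many b-elements each a-element is incident to.
# (Hypothetical encoding, chosen just to demonstrate the identity.)
M = {
    "v": {"e": 3, "f": 3},   # each vertex touches 3 edges, 3 faces
    "e": {"v": 2, "f": 2},   # each edge has 2 vertices, lies in 2 faces
    "f": {"v": 4, "e": 4},   # each face has 4 vertices, 4 edges
}

# Double counting: N[a] * M[a][b] == N[b] * M[b][a] for every pair.
for a in M:
    for b in M[a]:
        assert N[a] * M[a][b] == N[b] * M[b][a]
```

For a uniform polytope there may be a few orbits per dimension instead of one, but the table stays tiny regardless of whether the shape has 600 or 14400 vertices.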

Obviously I still don't quite understand how the notation works. :oops: I tried looking at your link but it's not obvious what exactly the numbers refer to.


Again, the Wikipedia article explains this.

What about the 0-radius torus hole though?


I haven't thought about it yet, but it should work "fine", as well as the negative-radius torus hole. The product will just inherit that apex or self-intersection.

Or, for that matter, what about a random cloud of points (e.g. a cantor set) in one octant reflected across the coordinate planes so that they have brick symmetry?


If P were the 2D figure composed of points {<0, 1>, <1, 0>, <0, -1>, <-1, 0>}, then P{square, hexagon} for example would yield a monoframe square oriented in the xy plane, with a monoframe hexagon oriented in the zw plane, both centred at the origin.

If you had multiple "concentric" copies of that set of points as P, you'd get similarly concentric copies of the other shapes.

Points not on the axis would lead to some kind of morph between the two shapes. This I don't have a formal definition for yet.

I'll have to admit I'm not really following here. Why does the torus produce a hole, and why is the hole a crind?


The torus produces a hole because the torus itself contains a hole. The outer surface and the hole are crinds (in the cross-section, anyway) because if the torus was instead a circle (and the last operand removed) the result would be a crind. Remember, any brick product will "inherit" features from P, as well as the other operands. Do note that the hole produced is not an inaccessible pocket - it is accessible from 4D in the same way that the hole in a torus is accessible from 3D, but my description didn't show this clearly because I was talking about cross-sections.

I guess I won't fully understand this before I understand just what exactly the brick product does. :)


You're not the only one who wants to understand exactly what the brick product does. :P Bowers could really help here.
User avatar
Keiji
Administrator
 
Posts: 1985
Joined: Mon Nov 10, 2003 6:33 pm
Location: Torquay, England

Re: SSC2 [Split from "I live!"]

Postby quickfur » Sat Nov 19, 2011 2:15 am

Keiji wrote:I think you ought to re-read the Wikipedia page on this subject that I linked to before, because you seem to be misunderstanding something to do with the compressed incidence matrix format.

Oh, you're talking about that format? :oops: :oops: :oops: *punches self* *goes and hides in the corner*

[...snipped lots of good stuff...] You're not the only one who wants to understand exactly what the brick product does. :P Bowers could really help here.

Bowers is an absolute genius, but unfortunately he's not very good at communicating his brilliant ideas to others. Have you seen his array notation? It's an absolutely amazing piece of work for someone with only elementary math education, transcending the best notational systems professional mathematicians invented for large numbers, and on the verge of reaching the finite equivalent of the so-called "large Veblen ordinal". However, his explanations take a lot of effort to understand, because he uses his own, non-standard terminology, and often fails to state the assumptions/definitions he's working from. Sometimes I have a hard time understanding what exactly he's trying to say. For example, for a long time I thought his array notation didn't even reach the finite equivalent of Gamma_0, and thought I could do better, but upon closer inspection, I discovered that it far transcends it.

Nevertheless, the guy's undeniably an absolute genius.

But speaking of Bowers and notations, have you ever read about his regular polytwisters? They are essentially duocylinders where the circular tori are replaced by polygonal spiral tori. They have no vertices but do have spiral ridges and two surchorixes that are completely symmetrical, so they are regular objects. Now there's something that cannot be represented by incidence matrices. :XP:
quickfur
Pentonian
 
Posts: 2955
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

Re: SSC2 [Split from "I live!"]

Postby Keiji » Sat Nov 19, 2011 1:16 pm

quickfur wrote:But speaking of Bowers and notations, have you ever read about his regular polytwisters? They are essentially duocylinders where the circular tori are replaced by polygonal spiral tori. They have no vertices but do have spiral ridges and two surchorixes that are completely symmetrical, so they are regular objects. Now there's something that cannot be represented by incidence matrices. :XP:


Yes, I've seen them, though they didn't make much sense to me!

(By the way, did you notice my topic on SSC3?)
User avatar
Keiji
Administrator
 
Posts: 1985
Joined: Mon Nov 10, 2003 6:33 pm
Location: Torquay, England

Re: SSC2 [Split from "I live!"]

Postby Deedlit » Sun Dec 25, 2011 11:41 am

Bowers is an absolute genius, but unfortunately he's not very good at communicating his brilliant ideas to others. Have you seen his array notation? It's an absolutely amazing piece of work for someone with only elementary math education, transcending the best notational systems professional mathematicians invented for large numbers, and on the verge of reaching the finite equivalent of the so-called "large Veblen ordinal". However, his explanations take a lot of effort to understand, because he uses his own, non-standard terminology, and often fails to state the assumptions/definitions he's working from. Sometimes I have a hard time understanding what exactly he's trying to say. For example, for a long time I thought his array notation didn't even reach the finite equivalent of Gamma_0, and thought I could do better, but upon closer inspection, I discovered that it far transcends it.


Hello, quickfur. I am very curious to see your analysis of Bowers' notation. I find his system quite interesting, but I do not believe it transcends the notations used by professional mathematicians. One such notation system is the Grzegorczyk-Wainer hierarchy, defined by

F_0(x) = x + 1
F_{a+1}(x) = F_a^x(x)   (i.e. F_a iterated x times)
F_a(x) = F_{a[x]}(x) when a is a limit ordinal

for a an ordinal and x a natural number. a[x] is the x-th member of the fundamental sequence of a, a natural sequence whose limit is a; so for instance, omega[x] = x, (omega*2)[x] = omega + x, (omega^2)[x] = omega*x, (omega^omega)[x] = omega^x, etc.
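For finite indices a, the hierarchy defined above can be computed directly (no fundamental sequences needed below omega). A minimal sketch; the values explode almost immediately, so only tiny inputs are feasible.

```python
def F(a, x):
    """Finite levels of the fast-growing hierarchy:
    F_0(x) = x + 1, and F_{a+1}(x) = F_a iterated x times on x."""
    if a == 0:
        return x + 1
    result = x
    for _ in range(x):       # x-fold iteration of the previous level
        result = F(a - 1, result)
    return result

# F_1(x) = 2x, F_2(x) = x * 2^x, F_3 is already tower-like:
assert F(1, 3) == 6
assert F(2, 3) == 24
assert F(3, 2) == 2048
```

Already F(3, 3) is far too large to print, which gives a feel for how quickly the transfinite levels must grow.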

F_a grows faster and faster for larger and larger a, so the only limit to the Wainer hierarchy is how high an ordinal we can define with a constructive ordinal notation. Professional mathematicians have carried constructive notations for ordinals VERY far, much further than the large Veblen ordinal (the next important ordinals are the Bachmann-Howard ordinal and the ordinal psi_{Omega_1}(Omega_omega), and this is just the start), so the largest computable functions in the Wainer hierarchy are quite unimaginable!

As for Bowers' notation, his original array notation is equivalent to the Wainer hierarchy below the ordinal omega^omega. His extended array notation in n dimensions is equivalent to the Wainer hierarchy below the ordinal omega^omega^n. As for his BEAF, his functions do not seem to be sufficiently well defined. They depend on exactly what constitutes a structure, and for each structure, what constitutes the set of previous structures. For example, is (X+1)^X a structure? What about (X^^^X)^X? If the previous structure to X^^^X is X^^^p, what is the previous structure or structures to X^(X^^^X)? (Note that X^(X^^^p) doesn't work, since this would drop the function below X^^^X.) I can make sensible rules for his tetrational structures - this corresponds to the Wainer hierarchy below epsilon_0 - but trying to make sense of pentational structures and beyond turned into a great big mess. We're still way below Gamma_0. I see no reason to believe his notation reaches up to the large Veblen ordinal, so the Wainer hierarchy goes very much farther with known ordinal notations.

Can you help decipher pentational structures and beyond?
Deedlit
Mononian
 
Posts: 5
Joined: Sun Dec 25, 2011 10:58 am

Re: SSC2 [Split from "I live!"]

Postby quickfur » Mon Dec 26, 2011 4:02 am

Deedlit wrote:
Bowers is an absolute genius, but unfortunately he's not very good at communicating his brilliant ideas to others. Have you seen his array notation? It's an absolutely amazing piece of work for someone with only elementary math education, transcending the best notational systems professional mathematicians invented for large numbers, and on the verge of reaching the finite equivalent of the so-called "large Veblen ordinal". However, his explanations take a lot of effort to understand, because he uses his own, non-standard terminology, and often fails to state the assumptions/definitions he's working from. Sometimes I have a hard time understanding what exactly he's trying to say. For example, for a long time I thought his array notation didn't even reach the finite equivalent of Gamma_0, and thought I could do better, but upon closer inspection, I discovered that it far transcends it.


Hello, quickfur. I am very curious to see your analysis of Bowers' notation. I find his system quite interesting, but I do not believe it transcends the notations used by professional mathematicians. One such notation system is the Gregorczyk-Wainer hierarchy, defined by ...

I'm sorry, what I said was not very precise. What I had in mind was notations for large numbers like Knuth's up-arrow notation or Conway's chained-arrow notation. What you said about ordinal-indexed fast-growing function hierarchies is true, that at large ordinals they far transcend Bowers' notation. In fact, after I (re)discovered the correspondence of ordinals to fast-growing functions, I myself now use ordinal indices to measure the growth rate of functions and, by extension, of large number notation systems.

However, the notion of transfinite ordinal, in a sense, is "cheating" - we are jumping up to the transfinite and then coming back down to the finite. Bowers' notation captures the essence of composition and diagonalization (which is essentially what ordinal-indexed function hierarchies are all about) without needing to appeal to the transfinite at all. It is very simple in its basic concept, yet it easily surpasses common notations such as Knuth's or Conway's notations. On analysing his notation, one cannot help but notice that his increasingly larger "structures" (or whatever idiosyncratic term he uses to describe them) are essentially all about composition and diagonalization. You first have linear arrays, then you diagonalize them to 2D arrays, then 3D arrays, then n-D arrays, then what is essentially arrays of n-D arrays, etc.. This kind of recursive diagonalization is the essence of ordinals (diagonalization ~ supremum of a set of ordinals, by definition another ordinal), and the essence of ordinal-indexed functions. Now, each additional element of these structures represents a truly mind-boggling leap in terms of growth rate (for example, I believe Conway's notation only reaches to the power of 5-element arrays at best). Assembling these things into n-D arrays and larger structures like tetrational arrays, etc., represents truly huge leaps in diagonalization.

I have not conducted a rigorous analysis into exactly how far this notation goes, but based on what I understand, it must have the potential to reach up to Fa where a is the large Veblen ordinal. Whether or not it actually attains to that, I can't say; it seems that Bowers just left off after tetrational arrays, pentational arrays, etc., with a general note that you can keep building larger structures this way. But it definitely has the potential to at least span the functions indexed below the large Veblen ordinal. And Bowers achieved this without the slightest notion of a transfinite ordinal.

[...] As for Bowers' notation, his original array notation is equivalent to the Wainer hierarchy below the ordinal omega^omega. His extended array notation in n dimensions is equivalent to the Wainer hierarchy below the ordinal omega^omega^n.

Actually, this is what I had thought too, when I first examined his notation. In fact, I developed what I thought was a superior notation with my "exploding tree function", which spans functions indexed below the Feferman-Schütte ordinal Gamma_0. However, after some correspondence with him and a more careful analysis of his notation, I discovered that my function attains to only 5-element arrays.

Looks are deceiving, because it's tempting to collapse his array notation into "one element = one diagonalization", which is actually far from the truth. Although Bowers' linear arrays are written in a linear form, they do not "scale" in a linear way at all. Every element added to a linear array represents a gigantic leap in the degree of diagonalization. And this is just linear arrays. Even a 2D array with a single element in the second row represents the effect of linear arrays multiplied N-fold, where N is the value of an arbitrary linear array. In other words, the second row does not represent a simple extension of the first row; it takes the value of a linear array (which in itself is a gigantic number) and uses that value to derive the length of the linear array produced by decrementing the second row element by 1. Every decrement of the second row element applies a linear array function to the length (not the value!) of the linear array in the result. And when the second row has multiple elements, this kind of feedback diagonalization is simply unimaginable.

And since 5-element arrays already reach Gamma_0, it's conceivable that somewhere in the n-dimensional array structure the small Veblen ordinal has already been passed, so Bowers' notation is well on the way to the large Veblen ordinal.

As for his BEAF, his functions do not seem to be sufficiently well defined. They depend on exactly what constitutes a structure, and for each structure, what constitutes the set of previous structures. For example, is (X+1)^X a structure? What about (X^^^X)^X? If the previous structure to X^^^X is X^^^p, what is the previous structure or structures to X^(X^^^X)? (Note that X^(X^^^p) doesn't work, since this would drop the function below X^^^X.) I can make sensible rules for his tetrational structures - this corresponds to the Wainer hierarchy below epsilon_0 - but trying to make sense of pentational structures and beyond turned into a great big mess. We're still way below Gamma_0. I see no reason to believe his notation reaches up to the large Veblen ordinal, so the Wainer hierarchy goes very much farther with known ordinal notations.

Can you help decipher pentational structures and beyond?

First, I would say that his notation definitely surpasses Gamma_0 by far. The thing to note about his system is that it doesn't map very well to traditional ordinal notations because what he's diagonalizing is not the function (or arithmetic operator, if you like to think of it that way), but rather the degree of diagonalization. Consider a generic binary operator #. Given a#b, where a and b are two numbers (for our purposes let them be natural numbers), there are three ways to create a new operator @ by diagonalization:

1) Left-diagonalize: a@b = (((a#a)#a)#a ... )#a (b occurrences of a). This is not very effective; when # is exponentiation, for example, a@b works out to a^(a^(b-1)), which is still just exponentiation.

2) Right-diagonalize: a@b = a#(a#(...#(a#a)...)) (b occurrences of a). This is more powerful; it takes exponentiation to tetration, tetration to pentation, etc..

3) "Middle"-diagonalize: let n be the degree of #, defined as the index into the sequence exponentiation, tetration, pentation, ... etc.. Then we construct the new operator @ by evaluating a#a and plugging the result into the degree of the resulting operator. In other words, we're not merely going from exponentiation to tetration; we're making a quantum leap from exponentiation to n-ation where n = a#a. But where does b come into the picture? It represents the level of nesting in the evaluation of the degree of the resulting operation. That is, if b=3, what we do is evaluate n1=a#a, then construct an operator of degree n1, let's call it {#1}, then evaluate n2=a{#1}a, then construct an operator of degree n2, call it {#2}, then evaluate n3=a{#2}a, then construct an operator of degree n3, call it {#3}, then evaluate a{#3}a. So we're not merely diagonalizing function composition; we're diagonalizing the number of diagonalizations applied to each level of function composition. This is the essence of what Bowers is doing.
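The right-diagonalization ladder in (2) can be written out directly, using the {n} convention from the concrete example that follows (with {1} = exponentiation). This is only a sketch for tiny arguments; anything larger is uncomputable in practice.

```python
def op(n, a, b):
    """a {n} b, with {1} = exponentiation; each level is the
    right-diagonalization of the level below it."""
    if n == 1:
        return a ** b
    # a {n} b = a {n-1} (a {n-1} (... (a {n-1} a))), b copies of a
    result = a
    for _ in range(b - 1):
        result = op(n - 1, a, result)
    return result

assert op(1, 2, 3) == 8        # 2^3
assert op(2, 2, 3) == 16       # 2^^3 = 2^(2^2)
assert op(2, 3, 2) == 27       # 3^^2 = 3^3
assert op(3, 2, 3) == 65536    # 2^^^3 = 2^^(2^^2) = 2^^4
```

Middle-diagonalization, by contrast, feeds a computed value back into the index n itself rather than into either operand, which is what the next example walks through.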

Let's take a concrete example. To make things easier to read, let's adopt the convention that operators will be denoted by braces {} in infix notation. For the base case, let {1} = exponentiation, so a{1}b = a^b. For each level in the Grzegorczyk hierarchy, we will increment the number inside the brace, so {2} = tetration, {3} = pentation, etc.. Now, consider a 3-element array in Bowers' notation: <3 4 5>. First off, this does not mean 3{4}5 at all!! What it means is you first evaluate 3^5 = 243, and create the operator {243}. Use that operator to evaluate n1=3{243}5. This gives a truly huge number -- remember that {243} is the 243rd function in the Grzegorczyk hierarchy. Not mere heptation, but already far beyond it. Now create the operator {n1} and evaluate n2=3{n1}5. This is an even huger number, being created by the (n1)th function in the Grzegorczyk hierarchy. Now create the operator {n2}, etc.. Repeat this process a total of 4 times, as indicated by the 2nd element in the 3-element array. The final value of the array, then, is 3{n4}5.
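This reduction procedure, as I read it, can be sketched as code. With base a = 2 every hyperoperation collapses to 2 {n} 2 = 4, so the self-feeding loop stays computable; for any other base the numbers become astronomical after a single step.

```python
def op(n, a, b):
    # a {n} b with {1} = exponentiation; each level right-diagonalizes
    # the one below it.
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = op(n - 1, a, result)
    return result

def array3(a, b, c):
    """One reading of the 3-element array <a b c> described above:
    start with n = a^c, then b times replace n with a {n} c,
    each pass using the previous value as the new operator index."""
    n = a ** c
    for _ in range(b):
        n = op(n, a, c)
    return n

# Degenerate but computable: 2 {n} 2 = 4 for every n, so the loop
# keeps producing 4 no matter how many nesting levels we ask for.
assert array3(2, 3, 2) == 4
```

Note this encodes my interpretation of the walkthrough, not Bowers' official BEAF definition; the point is only to show the feedback of values into operator *indices*, rather than into operands.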

So let's take a look at what's happening here. Consider the array <3 1 n>. The second element being 1 means a single level of nesting, so first we evaluate 3^n, then look up the (3^n)th function in the Grzegorczyk hierarchy and use that to evaluate 3{3^n}n. As n varies, we go up the Grzegorczyk hierarchy in exponential steps. In other words, this simple array already transcends the Ackermann function. But this is only a single level of nesting. If the middle element were 2, for example, we first compute 3{3^n}n, then use that to find the next operator in the G hierarchy, then apply that operator to get our final result. At this point, we're climbing the G hierarchy not exponentially, not tetrationally, but at the speed of the (3^n)th function of the G hierarchy. This is already far beyond any sort of diagonalization based on the Ackermann function, and this just by a mere increment of the second element of the array. Remember that each level of the G hierarchy represents a level of diagonalization, so we're abstracting away the process of diagonalization and making the number of diagonalizations grow at an incredible speed.

Going back to my point about the 3 possible ways to diagonalize an operator, left-diagonalization essentially means evaluate a#b, then plug the value of that into the left operand. Right-diagonalization means evaluate a#b, then plug the value of that into the right operand. But what Bowers is doing is not any mere trivial substitution of operands. He is substituting the operator itself with a vastly more powerful operator created by going up (a#b) levels in the G hierarchy. Then he uses that new operator to derive an even huger number, and uses that huge number to go up the G hierarchy even faster. And this is done recursively. I haven't gotten to 4-element arrays, but just to give you an idea, what happens in a 4-element array is that the number of nesting levels itself is set to a huge number obtained by applying an operator to the operands. IOW, evaluate a#b, then use the value of that as the number of nesting levels to apply in the reduction to 3-element arrays. For example, if you start with 3^4=81, that means in the next step you're repeating the process of (evaluate a{#n}b, substitute into index into G hierarchy) a total of 81 times, each time replacing the operator with a vastly more powerful function and using that to get the index of the next operator. The growth rate of these things increases ridiculously fast, and climbs up the Wainer hierarchy very quickly.

The Wainer hierarchy is based on right-diagonalization, that is, at each successor ordinal you're just making n recursive substitutions into the right operand of the operator in question. What Bowers has done is not to mess with the operands, but what amounts to messing with the ordinal index itself. So he's not merely going up the Wainer hierarchy in baby steps; he's leaping up with ever larger steps, each one ridiculously larger than the one before. The fact that he does this without even a notion of what an ordinal is, is something truly commendable. His only "fault" is that his scheme is not easily mapped to the ordinal indices produced by right-diagonalization (i.e. diagonalized function composition), which makes it easy for people to misunderstand what he's actually doing.
quickfur
Pentonian
 
Posts: 2955
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

Re: SSC2 [Split from "I live!"]

Postby Marek14 » Tue Dec 27, 2011 10:42 am

This inspired me to look at Bowers' page again after many years. Looks like he has added new pages explaining his notation.
Marek14
Pentonian
 
Posts: 1191
Joined: Sat Jul 16, 2005 6:40 pm
