Emerging Visualization Solution

Discussions about how to visualize 4D and higher, whether through cross-eyedness, dreaming, or connecting one's nerves directly to a computer, sci-fi style.

Re: Emerging Visualization Solution

Postby benb » Tue Feb 18, 2014 6:40 pm

At this point in considering "bow tie" configurations, the question of how much (or how little) skew may best serve the user occurs to me; even the cylindrical mapping mentioned by quickfur could be done within cylinders of different radii (i.e., projecting the first-person POV images onto varying degrees of flatness/curvature across a given interior surface span).

While it may take a bit more work on the back end, it seems to me that there is no a priori reason why users couldn't (or shouldn't) adjust the skews of their lateral views to suit their preferences/needs--this might give them display options as Keiji mentioned. I can even imagine users adjusting the skews of their lateral views as one might do to driver's- and passenger's-side rear-view mirrors in automobiles.
benb
Dionian
 
Posts: 48
Joined: Tue Jan 21, 2014 7:11 pm

Re: Emerging Visualization Solution

Postby benb » Tue Feb 18, 2014 7:08 pm

For what it's worth, it also struck me that a "bow tie" configuration that I might see on my computer screen is also something other than a representation of what is visible from a first-person perspective in a three-dimensional spatial manifold: it is a representation of a third-person perspective on a first-person perspective in a three-dimensional spatial manifold.

This is not merely a trivial categorical distinction; the representation of a third-person perspective may cue the user to recognize that any single "bow tie" is not something with which s/he is bound to identify and that it offers only a partial glimpse of what exists in a four-dimensional manifold. By adopting multiple perspectives--perhaps simultaneously, perhaps by quickly cross-referencing two "bow ties"--more complete pictures of the space may arise.

Re: Emerging Visualization Solution

Postby benb » Tue Feb 25, 2014 10:16 pm

I've created a rough mock-up of the "bow tie" configuration mentioned in previous posts in this thread and have written a little about it. Here's a link:
https://osf.io/dxjeo/wiki/DoubleRainBowTieDisplay/

Your feedback is very much appreciated; thank you. If you or those you know would like to become more closely involved in the Hyperland effort, there is information about that on the home page: https://osf.io/dxjeo/wiki/home/

Re: Emerging Visualization Solution

Postby quickfur » Wed Feb 26, 2014 12:47 am

Nice.

What I had in mind was more of a cylindrical projection type approach, though, where the sharp transition between the front and lateral panels is eliminated, in effect creating a 180° surround-view that is continuous from left to front to right. But I understand that this would be far more difficult to implement than the 3-panel approach.

Another question I have wrt the double rainbow / bow-tie view is what happens with diagonal elements in 4D. That is, suppose we have a 3x3x3x3 tesseract-shaped room, which we divide into 3*3*3*3=81 smaller tesseracts ("tessies"). Suppose we hang a unique object in each of these tessies (say, a marble of a distinct color, or a plaque bearing the coordinates of the tessie), except the middle one. Suppose the user is standing in the space of this middle tessie, and looking in, say, the (0,0,0,1) direction (let's assume the ordering (w,x,y,z) for coordinates). From what I understand of the double rainbow display, the top row of panels should show the marbles (or plaques) hanging at the tessies in the directions (0,-1,0,1), (0,0,0,1), (0,1,0,1) from the user, and the bottom row of panels should similarly show the marbles hanging in the directions (-1,0,0,1), (0,0,0,1), and (1,0,0,1). Correct?

My question then is, what about the marbles hanging in the (0,0,±1,1), or (±1,±1,0,1), or (±1,0,±1,1), etc., directions? Or, for that matter, (±1,±1,±1,1)? Clearly, for a hypothetical "native" 4D viewer, all of these objects should be equally visible, since they lie on a flat (hyper)surface in front of him. But where would they be shown in the double rainbow / bowtie display? And how is the "diagonalness" of the objects in the off-axis tessies indicated in the display?
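As a sanity check on the bookkeeping, the tessies and the facet-ahead subset can be enumerated in a few lines. This is a sketch in Python, unrelated to the actual scene language; all names are my own:

```python
from itertools import product

# Offsets of the 80 non-central cells ("tessies") of the 3x3x3x3 room,
# relative to the middle tessie, using (w, x, y, z) coordinate ordering.
offsets = [v for v in product((-1, 0, 1), repeat=4) if any(v)]

# Looking in the (0,0,0,1) direction, the cells tiling the facet ahead
# are those with z = +1; everything else is lateral or behind.
ahead = [v for v in offsets if v[3] == 1]
print(len(offsets), len(ahead))  # 80 27
```

The 27 cells with z = +1 are exactly the ones tiling the cube-shaped facet directly ahead; the question about the off-axis directions is which of the remaining 53 forward-or-lateral cells the display can show.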
quickfur
Pentonian
 
Posts: 2486
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

Re: Emerging Visualization Solution

Postby benb » Wed Feb 26, 2014 1:26 pm

@quickfur: I think I understand what you're asking, but I would like to both verify that and provide you with an empirical demonstration beyond a verbal description. To that end, I'd like to make sure we've got the same coordinates and color-coding for objects in mind before loading them and generating a screen capture of the visualization. Fortunately, the scene language can be pretty straightforward for the sort of arrangement of objects you're describing; here is a link to an example of raw code--it yields tesseracts arranged in a 4D double helix: https://raw.github.com/bblohowiak/Hyper ... e%20Helix2

...and here is a link to a more detailed description of the scene language: http://www.urticator.net/blocks/v4/language.html

For the example you've described, the regularity of the grid pattern sounds straightforward enough to generate, though I'm unsure at this time what the optimum distribution of object features across the object set would be. How might the characteristic traits of unique objects--most likely shape and color for now--best be distributed among the 81-hypercell space to sufficiently address your inquiry?

Re: Emerging Visualization Solution

Postby quickfur » Wed Feb 26, 2014 6:25 pm

All that's needed is to be able to uniquely identify the tessies based on their relationship with where the player is standing. It doesn't have to be tied to shape or color, though I suspect that would be the simplest to implement, but could also be an arbitrary label indicating the coordinates of the tessie.

Thinking about this a little more, I think it should be possible to reduce the labels / shapes / etc. to just 4 kinds, as shown in the following diagram of the contents of a 3x3x3x3 tesseract (if you'll excuse the ASCII art):
Code:
434   323   434
323   212   323
434   323   434

323   212   323
212   1X1   212
323   212   323

434   323   434
323   212   323
434   323   434

Each 3x3 square represents a 2D slice of the tesseract.

X represents where the player stands.

The digits indicate where the corresponding tessie is located relative to the player: 1 means it lies directly along one of the axial directions (along the coordinate axes); 2 means it's on a diagonal between a pair of coordinate axes; 3 means it's on a diagonal between three axes; and 4 means it's on a diagonal between all 4 axes. You could also think of the digit as the Manhattan distance from the tessie to the player's location.
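The digit-equals-Manhattan-distance observation is easy to verify mechanically; this sketch regenerates the middle 3x3 slice of the diagram (the other two coordinates held at 0; the corner squares work the same way with those coordinates offset to ±1):

```python
# Each cell's label is its Manhattan distance from the player's cell,
# with 'X' marking the player at distance 0.
rows = []
for a in (-1, 0, 1):
    row = ''.join('X' if abs(a) + abs(b) == 0 else str(abs(a) + abs(b))
                  for b in (-1, 0, 1))
    rows.append(row)
print('\n'.join(rows))
# 212
# 1X1
# 212
```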

Now obviously the player (or a hypothetical 4D observer) won't be able to see all of the tessies at once; but suppose he was looking, say, to the right within the square that he's standing in according to the above diagram, that is, at the tessie marked "1" immediately following the X. Then he should be able to see (at least) the following tessies (I blanked out the tessies that are not visible):
Code:
..4   ..3   ..4
..3   ..2   ..3
..4   ..3   ..4

..3   ..2   ..3
..2   .X1   ..2
..3   ..2   ..3

..4   ..3   ..4
..3   ..2   ..3
..4   ..3   ..4

That is to say, he should be able to see at least the 27 tessies that tile the cube-shaped facet of the tesseract that he is looking at (assuming an at least 45° viewing frustum from the location marked X). The "1" immediately following the "X" in the diagram therefore represents the middle of this cube-shaped facet (that is, the centroid of the cube).

Now, from what I understand of the double rainbow display, he should be able to see more than this, since the double rainbow view represents a 180° panorama around the forward and ana directions. I don't know exactly which tessies would be visible in this expanded view, but in my understanding, it should be some subset (or perhaps the whole set) of:
Code:
.34   .23   .34
.23   .12   .23
.34   .23   .34

.23   .12   .23
.12   .X1   .12
.23   .12   .23

.34   .23   .34
.23   .12   .23
.34   .23   .34

This represents everything that lies on or in front of the hyperplane perpendicular to the direction he is looking at, so assuming the double rainbow display doesn't include things behind the player, it seems reasonable to me that this diagram represents the maximal view one can obtain with a 180° field-of-vision.

What I'm particularly interested in is the visibility of the tessies marked "2", "3", and "4". Which of them from the above diagrams would be visible in the double rainbow display? Does it include at least all the tessies in the second diagram? Does it include the entirety of the third diagram, or just a subset thereof?

The answer should be clear, if you could construct the above model and run it through your simulator, and count how many 1's, 2's, 3's, and 4's (or their shape/color equivalents) appear in the output. For example, for the second diagram, there should at least be a 1, six 2's, twelve 3's, and eight 4's, in a cube-shaped arrangement (how would that appear in the double rainbow display?). Similarly, for the third diagram, there should be up to as many 1's as indicated in the diagram, and ditto for the 2's, 3's, and 4's.
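The expected tally for the second diagram can be derived programmatically; this sketch groups the 27 facet-ahead cells by Manhattan distance from the player (my own code, just reproducing the counting argument above):

```python
from itertools import product
from collections import Counter

# Cells tiling the facet ahead lie at z = +1, so each has Manhattan
# distance 1 + |w| + |x| + |y| from the player's cell.
counts = Counter(1 + abs(w) + abs(x) + abs(y)
                 for w, x, y in product((-1, 0, 1), repeat=3))
print(sorted(counts.items()))  # [(1, 1), (2, 6), (3, 12), (4, 8)]
```

That is the "a 1, six 2's, twelve 3's, and eight 4's" prediction: the centroid, the 6 face centers, the 12 edge centers, and the 8 corners of the cube-shaped facet.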

(Note that all of the above assumes that the tessies are not solid; they just represent tesseract-shaped spaces that constitute the 4D bulk of the larger 3x3x3x3 tesseract. The numerical labels / shapes / colors are small objects placed within the corresponding spaces to serve as a convenient way of determining which of the spaces are visible from the location marked X.)

Re: Emerging Visualization Solution

Postby benb » Wed Feb 26, 2014 9:13 pm

@quickfur: I think I understand more about where you are coming from and I wonder if the way that I am thinking of it may prove sufficient for your needs (or if it will illuminate where some further clarification could prove helpful!). I respect your "tessies" idea, and I wonder if the idea of points on a line of a given slope might also be of use to accomplish similar ends for the calculation and coding of specific coordinates. For example, could it work if we take the origin as the point of observation and then add:

1) points that are collinear with the axial directions indicated by the presence of certain objects of a given class (Class 1)
2) points that are collinear with lines that bisect the planes defined by all possible pairs of the axial directions indicated by the presence of certain objects of a given class (Class 2)
3) points that are collinear with lines that pass through the centers of cubes defined by all possible triplicities of the axial directions indicated by the presence of certain objects of a given class (Class 3)
4) points that are collinear with lines that pass through the centers of hypercubes defined by all possible quaternities of the axial directions indicated by the presence of certain objects of a given class (Class 4)

?
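Counting directions per class is a quick combinatorial check: a Class-k direction has exactly k nonzero components, so there are C(4,k) * 2^k of them. A minimal sketch (my own code, not part of the scene language):

```python
from itertools import product
from collections import Counter

# Directions toward neighbouring cell centres, grouped by the number of
# nonzero components: class 1 = axial, class 2 = face diagonal,
# class 3 = cell diagonal, class 4 = hypercube diagonal.
dirs = [v for v in product((-1, 0, 1), repeat=4) if any(v)]
by_class = Counter(sum(1 for c in v if c != 0) for v in dirs)
print(dict(sorted(by_class.items())))  # {1: 8, 2: 24, 3: 32, 4: 16}
```

Since each line through the origin accounts for two opposite directions, this corresponds to 4, 12, 16, and 8 lines of Classes 1 through 4, respectively.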

Re: Emerging Visualization Solution

Postby quickfur » Wed Feb 26, 2014 10:59 pm

benb wrote:@quickfur: I think I understand more about where you are coming from and I wonder if the way that I am thinking of it may prove sufficient for your needs (or if it will illuminate where some further clarification could prove helpful!). I respect your "tessies" idea, and I wonder if the idea of points on a line of a given slope might also be of use to accomplish similar ends for the calculation and coding of specific coordinates. For example, could it work if we take the origin as the point of observation and then add:

1) points that are collinear with the axial directions indicated by the presence of certain objects of a given class (Class 1)
2) points that are collinear with lines that bisect the planes defined by all possible pairs of the axial directions indicated by the presence of certain objects of a given class (Class 2)
3) points that are collinear with lines that pass through the centers of cubes defined by all possible triplicities of the axial directions indicated by the presence of certain objects of a given class (Class 3)
4) points that are collinear with lines that pass through the centers of hypercubes defined by all possible quaternities of the axial directions indicated by the presence of certain objects of a given class (Class 4)

?

Ah, yes, that should work as well, and would address the underlying fundamental question that I have. The whole thing about the tessies is really just a convenient way to express the same underlying question, since we are all familiar with the structure of the tesseract. Your proposal of adding points collinear with the 4 different classes of directions would work equally well. So in that case, the question becomes, how many lines of each class would be visible from the origin, and how would they be represented?

Re: Emerging Visualization Solution

Postby benb » Mon Mar 03, 2014 2:17 pm

@quickfur: I decided to go with three tesseracts as indicators of the slope of each region's line segments for each class as discussed in previous posts. There are four different colors of tesseracts (one for each of the general classes), and the size of the tesseracts on each line increases as they become more distal from the origin (which should allow a user at the origin to have an unobstructed view of tesseracts beyond the ones most proximate to it). I have attached an Excel file which carries this information, though it also contains some of the scene language on which the geom code is based. (That can be accessed in raw form at https://raw.github.com/bblohowiak/Hyper ... 20quickfur )

There are two videos showing the visualization (which exhibits some unexpected behavior) in action:

https://archive.org/details/Diagonals1
https://archive.org/details/Diagonals2

Re: Emerging Visualization Solution

Postby benb » Mon Mar 03, 2014 2:18 pm

Oops! No Excel allowed. :(

Re: Emerging Visualization Solution

Postby benb » Mon Mar 03, 2014 4:05 pm

The aforementioned Excel file is now available as part of our file archive at this link: https://osf.io/dxjeo/files/

Re: Emerging Visualization Solution

Postby quickfur » Mon Mar 03, 2014 5:21 pm

Interesting. I did notice some odd behaviour: the appearance of seemingly stray lines in the upper left corner of the display. Is this a bug?

Re: Emerging Visualization Solution

Postby benb » Mon Mar 03, 2014 5:58 pm

@quickfur: The "stray lines" were the anomaly I observed as well. I hadn't seen anything like that before and am waiting on more information regarding what (if anything) they signify.

Re: Emerging Visualization Solution

Postby benb » Wed Mar 05, 2014 11:55 am

Here is another version of the "diagonals" geom; same tesseract arrangement as before, this time with classes 2-4 differentially colored as per negative values on the y-axis.

Video:
https://archive.org/details/YAxisDiagonals1

Raw Geom File:
https://raw.github.com/bblohowiak/Hyper ... 0Diagonals

Re: Emerging Visualization Solution

Postby quickfur » Wed Mar 05, 2014 3:35 pm

Very interesting. I think I understand the double rainbow display a little better now. :) Thanks!

Re: Emerging Visualization Solution

Postby quickfur » Thu Mar 06, 2014 6:43 am

Follow-up question: how does the double rainbow display handle visibility clipping? Say, looking through a wall with a cube-shaped window cut through it, at some object on the other side that's larger than the window. Which parts of the object would be visible in this case?

Re: Emerging Visualization Solution

Postby benb » Thu Mar 06, 2014 6:49 pm

@quickfur: The clipping issue is still under development. At present, the opacity of a given object is determined by whether or not the user may pass through it as well as the user's proximity to it; in cases where the user cannot pass through the object in question, the greater the proximity, the greater the opacity.
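As a rough illustration only, the proximity-based rule described above might look like the following; the function name and the `max_dist` falloff parameter are my assumptions, not taken from the actual implementation:

```python
def opacity(distance, passable, max_dist=10.0):
    """Sketch of the described rule: impassable objects grow more opaque
    as the user approaches them; passable objects stay fully transparent.
    max_dist is an assumed falloff cutoff, not a value from the thread."""
    if passable:
        return 0.0
    # Linear falloff: fully opaque at zero distance, fully
    # transparent at or beyond max_dist.
    return max(0.0, min(1.0, 1.0 - distance / max_dist))
```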

In a "windowless" scenario, some basic examples can be seen in this segment:
http://www.youtube.com/watch?v=DJl8VFQ4TsY

Re: Emerging Visualization Solution

Postby quickfur » Mon Mar 10, 2014 5:23 am

I've seen that video before, though at the time I didn't understand how the double rainbow display worked so it didn't make sense to me.

Re: Emerging Visualization Solution

Postby benb » Fri Mar 28, 2014 7:43 pm

Quick update on a potential future direction:
https://osf.io/dxjeo/wiki/GravitationalTranslator/

Re: Emerging Visualization Solution

Postby benb » Mon May 19, 2014 2:18 pm

From talk on this board to a real thing. Here's a quick demo of Double Rainbow Flare:
http://youtu.be/v5Zeh7DLdM4

Re: Emerging Visualization Solution

Postby quickfur » Thu May 22, 2014 12:21 am

benb wrote:Quick update on a potential future direction:
https://osf.io/dxjeo/wiki/GravitationalTranslator/

Sorry for the verrrrry late reply... have been extremely busy.

Interesting concept. It's not clear to me, though, how the mapping between "4D gravity" and 3D gravity is done. In 4D, there are so many more possible orientations one can be in, compared to 3D. Assuming the current line-of-sight is kept fixed, there are at least 3 degrees of freedom possible in terms of how the user can be oriented relative to gravity. Would the gravitational translator provide distinct feedback for each of the 3 degrees of freedom? If so, how would it map the 4D orientations to the (presumably) 3D virtual reality simulator the player is seated in?

If we allow the line-of-sight to also vary, then we have 6 degrees of freedom by which to orient the player, which, if we fix the direction of gravity, means 3 degrees of freedom in terms of which direction the pull of gravity will be felt in. How does this proposal account for this, when it maps the 4D orientation to 3D?
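The degree-of-freedom counts above can be checked combinatorially: a rotation picks a 2D coordinate plane, so 4D has C(4,2) = 6 rotation planes, of which 3 avoid a given axis. A small sketch, assuming the line of sight is aligned with the z axis:

```python
from itertools import combinations

# All rotation planes in 4D: one per unordered pair of axes.
planes = list(combinations('wxyz', 2))
print(len(planes))  # 6 rotational degrees of freedom in total

# Rotations that keep a z-aligned line of sight fixed must not
# involve the z axis at all:
fixed_los = [p for p in planes if 'z' not in p]
print(len(fixed_los))  # 3 remaining degrees of freedom
```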

Re: Emerging Visualization Solution

Postby benb » Thu May 22, 2014 1:17 am

I had been thinking that it would be impractical to map multiple degrees of freedom from a simulated 4D space to 3D; however, I reckoned that I could at least correlate the rotation that allowed me to see orientation indicators in the cycle of [front, up, back, down] to the sensation of sitting normally, sitting with one's back to the ground, sitting upside down, and sitting facing the floor. As @quickfur mentioned, assuming the current line-of-sight remains fixed is part of what makes it work. Ideally, the user could arbitrarily assign a distinct gravitational line-of-sight on the fly for adaptive reorientation.

Re: Emerging Visualization Solution

Postby quickfur » Thu May 22, 2014 1:28 am

Ohhh I get it... you're using 3D gravity as a way of orienting oneself in 4D, not simulating 4D gravity, right? Basically, fixing the perpendicular vector to the 4D vertical, which is needed in order to resolve orientation. IOW, as you rotate in the left/right + ana/kata plane, the direction of gravity would rotate around the user? I'm not sure how well that would work in practice, since it will likely be unclear what its purpose is from the user's POV, but certainly it's an interesting approach.

Or did I misunderstand your intention?

Re: Emerging Visualization Solution

Postby benb » Thu May 22, 2014 2:01 am

As for the intention, yes--using 3D gravity as a way of orienting oneself in 4D.
Perhaps I can illustrate it best with an example...

Imagine being in a space capsule in which you can only see forward. To look up, you must rotate your entire capsule so that you are supine.
If your capsule is hovering on the surface of Hyperland, for example, you will feel gravity pulling you into the back of your capsule seat when looking up. You will feel it pulling you down toward the ground away from your seat when looking down.

As noted above, this depends upon specific axes being regarded as "forward" and "up/down." But I think that users could define the "forward" and "up/down" in ways that are insightful and perhaps provide great aid in orientation and navigation.

Re: Emerging Visualization Solution

Postby quickfur » Thu May 22, 2014 3:31 am

Right, so basically this is using gravity to identify the so-called "right" vector, in ray-tracer parlance.

In 3D, basically you can fully specify a camera by specifying a location, a pointing direction (or equivalently, a target you're looking at, or a "lookat" vector), and an "up" vector not parallel to the lookat vector, specifying the vertical direction. These 3 vectors fully determine your position and orientation in 3D space.

In 4D, however, specifying a location, lookat vector, and up vector is not enough to fully determine your orientation in 4D space. There is still an extra degree of freedom left over, corresponding to a rotation in the 2D plane perpendicular to both the vertical and the lookat vector. This last degree of freedom is fixed by a "right" vector, i.e., by specifying which of the 360° of possibilities is pointing to the "right". This then fixes your position and orientation in 4D space.
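The frame-completion step can be sketched with Gram-Schmidt plus a null-space solve for the leftover axis. This is my own illustrative code, not any particular 4D engine's API; degenerate (parallel) inputs are not handled:

```python
import numpy as np

def camera_frame_4d(lookat, up, right):
    """Sketch: orthonormalize lookat/up/right, then recover the one
    remaining axis as the unit vector orthogonal to all three."""
    f = np.asarray(lookat, float)
    f /= np.linalg.norm(f)
    u = np.asarray(up, float)
    u -= (u @ f) * f                    # make up perpendicular to lookat
    u /= np.linalg.norm(u)
    r = np.asarray(right, float)
    r -= (r @ f) * f + (r @ u) * u      # and right perpendicular to both
    r /= np.linalg.norm(r)
    # The last axis spans the null space of the 3x4 matrix [f; u; r]:
    _, _, vh = np.linalg.svd(np.vstack([f, u, r]))
    return f, u, r, vh[-1]
```

With lookat = (0,0,0,1), up = (0,0,1,0), and right = (0,1,0,0), the leftover axis comes out as ±(1,0,0,0), i.e. the ana/kata direction.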

So, what you're proposing here amounts basically to identifying the "right" vector with (3D) gravity, while the 4D vertical is independently fixed (by the world model, presumably) in a perpendicular direction.

Re: Emerging Visualization Solution

Postby ICN5D » Thu May 22, 2014 4:39 am

I checked out that latest youtube video. I'm still not sure what the two views mean. Are they the two ortho 2-planes, and we're seeing them edge on, or something? I noticed the independent rotations in each, which affected the other, stationary view. Shoot man, even mentally conceiving what a 3D hypersurface feels like in 4D is a challenging chore in itself! Let alone a graphical manifestation of it. I wish you guys maximum luck in this development. I'm curious to see how it evolves. It seems like there's still a lot of theory involved with how to orient oneself in 4D, in this program. Well, maybe not pure theory, but determining how to apply it here.
in search of combinatorial objects of finite extent
ICN5D
Pentonian
 
Posts: 1047
Joined: Mon Jul 28, 2008 4:25 am
Location: Orlando, FL

Re: Emerging Visualization Solution

Postby quickfur » Thu May 22, 2014 4:59 am

The two views represent two perpendicular orientations of what you'd see in 4D. Basically it's an alternative way of visualizing 4D, in which instead of the traditional 4D -> 3D projection, the 4D view is presented as two 3D views, oriented perpendicular to each other, so that the horizontal center of each view is the same, but the stuff on the sides stretches forth in perpendicular directions, one stretching left/right, and the other stretching ana/kata.

I'll freely admit that I have a hard time visualizing 4D in this way, but evidently Ben finds this split view more helpful.

Me, I prefer the traditional method of 4D -> 3D projection, where instead of a 2D retina like we have in 3D, the 4D view is projected onto a 3D "volumetric retina". The resulting image is therefore a 3D voxel array representing the snapshot of the 4D view. Of course, we have trouble perceiving a 3D voxel array directly, so generally a second projection is done from 3D to 2D (so that it can be shown on the screen!), and various surfaces in the 3D image are made transparent so that we poor limited 3D beings can at least have a fighting chance of figuring out what exactly is shown in the 3D retina.

Using the projection method, visualizing hyperplanes is actually rather easy, at least once you're reasonably familiar with the approach. Just as in 3D we don't actually draw full 2D planes (they're infinite, hence impossible to draw on paper!), but rather rectangular sections thereof, with the understanding that the edges of the rectangle represent where the plane extends indefinitely in the respective direction, so to represent a 3D hyperplane in 4D, we can simply represent a cuboidal section of it, projected into 3D, with the understanding that the faces of the cuboid represent where the hyperplane extends indefinitely in that direction. The projective distortion of the rectangle (respectively the cuboid) therefore gives us a mental handle on the orientation of the plane (resp. hyperplane) in space.
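The 4D -> 3D step onto the "volumetric retina" can be sketched as an ordinary perspective divide, one dimension up. The viewer placement and viewing distance below are my assumptions for illustration, not taken from any particular implementation:

```python
import numpy as np

def project_4d_to_3d(p, d=4.0):
    """Sketch: viewer at w = d looking toward the origin. A 4D point
    (with w < d) is scaled by its depth and dropped onto the 3D
    retina coordinates (x, y, z)."""
    p = np.asarray(p, float)
    return p[:3] * d / (d - p[3])
```

A point on the w = 0 hyperplane projects at unit scale, while points nearer the viewer (larger w) project larger; that depth-dependent scaling is exactly the perspective distortion that encodes a cuboid's 4D orientation.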

Performing simple hyperplane intersections is also relatively easy -- they are just two intersecting cuboids with different perspective distortions (representing different orientations in 4D space), where a central rectangular area common to both cuboids represents (the projection of) the 2D plane that constitutes their intersection in 4D.

[image: two cuboids intersecting in a shared central rectangle]

This is directly analogous to how, in depicting two intersecting 2D planes in 3D on paper, we draw two rectangles with different distortions, representing different 3D orientations, and draw a line through the area common to both rectangles to depict the 1D line that constitutes their intersection in 3D space.

[image: two rectangles intersecting in a shared line]

Re: Emerging Visualization Solution

Postby ICN5D » Thu May 22, 2014 5:34 am

quickfur wrote:The two views represent two perpendicular orientations of what you'd see in 4D. Basically it's an alternative way of visualizing 4D, in which instead of the traditional 4D -> 3D projection, the 4D view is presented as two 3D views, oriented perpendicular to each other, so that the horizontal center of each view is the same, but the stuff on the sides stretches forth in perpendicular directions, one stretching left/right, and the other stretching ana/kata.


Okay, I see how it works now. So, it is very closely related to the ortho 2-planes, where the center squares are the intersection between them. That'll take some getting used to in the Double Rainbow display. I do understand the concepts you illustrate with the intersecting planes; it was more so how to apply them to this different visualization strategy. It seems that the four corner squares in the Double Rainbow display are emulating the two perpendicular rings found in 4D duoprisms. These left-right and ana-kata squares are angled that way. It makes me think.... if six viewing panels can approximate 4D vision, then maybe 8 viewing panels can approximate 5D vision? That'll be for a later date, of course.

Actually, on a recent bike ride, I happened to see what 4D space would look like, by stacking an infinite number of XY planes along another, vertical plane. Well, maybe it's still a 3D way of feeling it, but the principle is there. First, I started with a linear stacking of them, as 3D space. Then, I took that stacking method and expanded it into a whole extra plane itself, of infinitely stacking XY sheets. I'll have to make a visual of that someday, because it was cool to almost feel that extra extent, while still contained within a cube-shaped grid. It's like your mind goes back and forth with seeing the two sheets branching off from each other. For every point on that ZW plane, there is another, whole XY plane stuck in it. Not sure if it can be fleshed out into a pic; maybe an animation of transparent stackings would work better.

Re: Emerging Visualization Solution

Postby benb » Thu May 22, 2014 11:03 am

@quickfur: The thing that I find most helpful about the Double Rainbow approach is the peripheral vision, as it were, that includes ana/kata and left/right options. If I want to turn in one of those directions, I'm not doing so blindly; the side panels show me what I can expect to see/turn to. Other visualization schemes tend to only show what is in front, which I find quite limiting when navigating space, especially because orienting oneself to interact with objects often requires rotational alignment between left/right and ana/kata axes.

@ICN5D: Yes; the logic of the forward-facing center panel with the side panels as indicators of peripheral phenomena in fivespace would suggest another set of panels, I assume making for a total of nine in a Triple Rainbow Flare configuration.

Re: Emerging Visualization Solution

Postby quickfur » Thu May 22, 2014 2:47 pm

Actually, the usual camera settings for the classical 3D retina approach can be adjusted to use the equivalent of a wide lens camera in 3D. The default parameters that John McIntosh's 4D maze uses, for example, seem to give quite a narrow view angle, which is OK for realism, but in practice produces somewhat of a tunnel-vision feeling. However, this is not inherent in the approach; you can adjust the parameters such that the view angle is much wider, in which case the periphery will begin to show up around the sides of the projection volume. It might also be possible to use the equivalent of some kind of spherical lens, where the projection can actually yield a view angle >180°.

One thing I like about the classical approach is the lateral symmetry: to a native 4D person, there isn't really a distinguished left/right vs. ana/kata direction; rather, it's a continuous 360° spectrum of lateral directions. Using the classical approach we can perform the projection into a cylindrical volume rather than the traditional cubical volume, to give a better feel for the symmetry of lateral directions in 4D. A reorientation in the lateral plane merely rotates the projection image around the axis of lateral symmetry, whereas with the double rainbow projection even a slight reorientation would introduce unfamiliar angles in the representation. Of course, for practical reasons of ease of implementation, we usually still use a distinguished left/right vs. ana/kata even in the classical approach, but conceivably, one could design a UI in which the mouse is used to indicate in which of the 360° possible lateral directions one wishes to turn. In this case, the symmetry of lateral directions in the representation becomes indispensable -- it would be difficult to comprehend the meaning of a non-axial turn if lateral orientations are not uniformly represented.
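The continuous spectrum of lateral directions is just an angle in the plane spanned by the right and ana axes; a tiny sketch of the mouse-driven turn described above (the angle convention is my assumption, not from the thread):

```python
import math

def lateral_turn_angle(dx, dy):
    """Map a 2D mouse offset to one of the 360 degrees of possible
    lateral turn directions: 0 = right, 90 = ana, 180 = left, 270 = kata
    (an assumed convention, for illustration only)."""
    return math.degrees(math.atan2(dy, dx)) % 360.0
```

Under the cylindrical projection, turning by this angle just rotates the projected image around the axis of lateral symmetry, which is the uniformity the classical approach is meant to preserve.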
