Emerging Visualization Solution

Discussions about how to visualize 4D and higher, whether through crosseyedness, dreaming, or connecting one's nerves directly to a computer sci-fi style.

Re: Emerging Visualization Solution

Postby quickfur » Tue May 27, 2014 7:28 pm

If the whole purpose is to postulate the existence of polychora, or perhaps curved objects and other such geometric constructs, then one need not concern oneself with the fine points of subatomic physics. It is easy enough to simply postulate the existence of atoms as an axiom: there exist atoms, and they come in various varieties that give rise to certain macroscopic properties (without explicating the details of how this might happen). For most purposes, this is good enough -- not everyone is a subatomic physicist or quantum chemist who needs to worry about the fine details of electron spin, orbital energies, atomic radii, etc. For everyday macroscopic effects, classical physics, which postulates indecomposable atoms as essentially little hard balls, is good enough.

Fluid mechanics, for example, in its simplest forms simply assumes that a fluid is made of particles of unspecified size (other than that the size is insignificant compared to macroscopic measurements). The free use of calculus, which is basically the mathematics of the infinitely small, makes for the most part no discernible difference compared to the actual situation in the real world, where particles are not infinitely small but have finite size. This difference is immaterial to the study of fluid mechanics because we're interested in the bulk properties of the fluid, not in its individual constituents. Whether said constituents are 3D atoms made of a protonic nucleus surrounded by electrons in discrete orbitals, or some kind of 4D sphere-shaped jellybean with atom-like properties endowed by fiat, really doesn't matter.

Similarly, one does not necessarily need a complete theory of 4D electromagnetism to postulate that light consists of massless particles emitted from light sources and absorbed or reflected by surfaces, or that these light particles have wave-like properties that grant each of them a frequency or wavelength. In fact, for the most part, one need not even be concerned with the wave-like properties of light at all -- light particles can be declared by fiat to have a property called "color", and the kind of "color" a light source emits its particles with can be treated as an inherent, unanalysable property of that source. Different solid objects may react differently to light particles of different colors, reflecting or absorbing them, etc. -- again, as an inherent, unanalysable property of the atoms comprising the objects.

None of the fine points of quantum physics are really necessary unless we're trying to invent a 4D universe from first principles. Which, as I have said, is an interesting exercise in itself, but if the whole point is merely to analyze 4D geometry or fluid mechanics, then it's totally unnecessary overkill, like killing an ant with a nuclear warhead.
quickfur
Pentonian
 
Posts: 2435
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

Re: Emerging Visualization Solution

Postby benb » Tue May 27, 2014 7:38 pm

@quickfur: I agree with parsimony as preferable. At the same time, the assumptions that we've started with have led to some apparent impasses, so I began questioning some of those assumptions.

I recognize that the granularity of the objects constituting a fluid may be more or less secondary to the fact that they behave together as a fluid; at the same time, I haven't seen many serious efforts to model the behavior of fluids in fourspace. Whether one starts with discrete objects and generalizes to the behavior of fluids, or assumes fields of force and generalizes to object behaviors, one would still have modeled more than what appears in the present systems of which I am aware.
benb
Dionian
 
Posts: 48
Joined: Tue Jan 21, 2014 7:11 pm

Re: Emerging Visualization Solution

Postby quickfur » Tue May 27, 2014 7:47 pm

benb wrote:@quickfur: I agree with parsimony as preferable. At the same time, the assumptions that we've started with have led to some apparent impasses, so I began questioning some of those assumptions.

Which impasses?

I recognize that the granularity of the objects constituting a fluid may be more or less secondary to the fact that they behave together as a fluid; at the same time, I haven't seen many serious efforts to model the behavior of fluids in fourspace. Whether one starts with discrete objects and generalizes to the behavior of fluids, or assumes fields of force and generalizes to object behaviors, one would still have modeled more than what appears in the present systems of which I am aware.

I'm not aware of any attempt to model fluid behaviour in 4-space. So that's totally valid fresh territory to explore. :) (But then again, I am no fluid mechanic, so what do I know... :P)

As for fields of force -- isn't it sufficient to postulate that the constituents of the fluid have both an attractive force that holds them together as a single fluid, and a repulsive force that resists compression (to whatever extent)? Does the actual nature of the forces matter as far as fluid mechanics is concerned?

Re: Emerging Visualization Solution

Postby benb » Wed May 28, 2014 12:55 pm

@quickfur: All of the reasons that supported the notion that "asking for 4D photorealism is a bit too ambitious" were the "impasses" to which I referred. Because it's my goal, I have to ignore its impossibility. ;)

As for agnosticism toward, or arbitrariness regarding, specific fields of force, I believe that the premises you offered generally hold: things that behave as if SOMETHING is capable of bringing them together into a cohesive whole, and as if SOMETHING within them is strong enough to resist instant collapse from membership in that whole or from additional outside forces, can exhibit the properties of a fluid when integrated into such a whole at the appropriate scale. A sufficient quantity of tesseracts, for example, could exhibit some properties of a fluid, depending upon their physics.

Re: Emerging Visualization Solution

Postby quickfur » Wed May 28, 2014 2:25 pm

benb wrote:@quickfur: All of the reasons that supported the notion that "asking for 4D photorealism is a bit too ambitious" were the "impasses" to which I referred. Because it's my goal, I have to ignore its impossibility. ;)

Well, if you have a way of transmitting 3D data directly into the brain, bypassing the limitations of the 2D channel of the eyes, then it would no longer be an impasse. ;)

OTOH, even if full 4D photorealism is out of reach, that doesn't mean we can't try to get as close as possible. We may not fully get there, but at least we could get close.

In any case, I have found that true 4D visualization happens in the mind, not on the screen. The images on the screen are merely a crutch, a roadmap of sorts, a shadowy glimpse of the real thing. The true perception of 4D depth is a conscious act in the mind. In my experience, one has to learn to actively interpret things in a 4D way; if one's thinking hasn't reached that level, then no amount of photorealism will help one perceive 4D. One may even be under the delusion of 4D visualization, but such a delusion will not hold up against rigorous mathematical analysis.

Due to the inherent limitations of the medium, it is impossible to physically convey the full contents of that 3D retina in an adequate way, but one can train oneself to interpolate between the displayed surfaces to "see", in the mind's eye, the textures that lie in between. With practice, this comes with ease, and one is able to form a more-or-less accurate 3D model of the contents of the retina in one's mind. Once this is mastered, one learns to extrapolate from that 3D model back into 4D via depth inference, and this is when true 4D visualization is achieved.

As for agnosticism toward, or arbitrariness regarding, specific fields of force, I believe that the premises you offered generally hold: things that behave as if SOMETHING is capable of bringing them together into a cohesive whole, and as if SOMETHING within them is strong enough to resist instant collapse from membership in that whole or from additional outside forces, can exhibit the properties of a fluid when integrated into such a whole at the appropriate scale. A sufficient quantity of tesseracts, for example, could exhibit some properties of a fluid, depending upon their physics.

Why tesseracts, and not 3-spheres? Or, if you prefer the discrete, 600-cells or 120-cells? I personally recommend the truncated 600-cell, which is much "rounder" than the 600-cell itself, and has strong parallels with the truncated icosahedron in 3D (aka the "buckyball", thus named for a very good reason). ;) Even the 24-cell family affords relatively "round" shapes that nevertheless retain the convenient characteristics of tesseractic symmetry, such as nice alignment with the coordinate axes (aka nicer-looking coordinates). Tesseracts are relatively low-symmetry in the grand scheme of 4D things.
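Those "nicer-looking coordinates" of the 24-cell are easy to make concrete: its 24 vertices are exactly the permutations of (±1, ±1, 0, 0). A minimal sketch (the function name and choice of Python are mine, purely for illustration):

```python
from itertools import permutations, product

def cell24_vertices():
    """Vertices of the 24-cell: every distinct permutation of (+-1, +-1, 0, 0)."""
    verts = set()
    for s0, s1 in product((1, -1), repeat=2):      # all four sign combinations
        for p in permutations((s0, s1, 0, 0)):     # all placements of the two nonzeros
            verts.add(p)                           # the set removes duplicates
    return sorted(verts)

vs = cell24_vertices()
print(len(vs))  # 24
```

Every vertex lies at distance √2 from the origin and lines up with the coordinate axes, which is exactly the convenience of tesseractic symmetry mentioned above.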

Re: Emerging Visualization Solution

Postby benb » Wed May 28, 2014 3:51 pm

@quickfur: Yes, a high degree of symmetry and roundness do sound like desirable traits in the constituent parts of a hypothetical fluid substance; I mentioned tesseracts as an arbitrary example, and your suggestions are preferable. While 3-spheres may seem an obvious choice, their perfection can make them difficult to render, calculate precisely, etc. Your other recommendations (600-, 120-, 24-cell) are now the ones to beat in my book.

Also, I am going to challenge the notion that retinas are 2D. I'll counter with the proposition that retinas are made of rods and cones--yes, they tend to be arranged in a planar fashion--and the brain interprets their discrete inputs as having specific correspondences to objective spatial orientations. But I don't think that, even with advanced techniques, we could "rewire" an optic nerve as a practical joke so that everything appeared upside down when the victim of the prank awoke--I don't think the neural processing is that location-specific. It's not like an operator connecting phone calls in the olden days.

For example, there are empirical experiments that have shown that if I wear glasses that invert the vertical axis of my visual input, initially things will seem upside-down, but eventually I will adjust and I won't even notice--things will appear right-side-up to me. Whether the brain is merely rotating the entire image or it is reassigning spatial values to each rod and cone, the point remains--the brain is good at identifying patterns and learning to anticipate them, even if that means giving up old ways of making sense of the world. But it's not rewiring my retina.

It may be that the bandwidth of retinal input is sufficient to convey information regarding 4D objects, though such input may take a prolonged period of adjustment to learn how to process mentally. Just as the metaphor of playing with blocks inspired John's recent coding efforts, learning to see in a world where 4D blocks are possible (or are correlated to the rewarding activities of daily living, such as eating, interacting with others, and listening to music) may be much like learning to see in a world where 3D blocks are possible. Perhaps we need marathon sessions of viewing binocular visual input that may look little to nothing like what we would normally recognize or comprehend, so long as it possesses the fundamental patterns, expressed through four degrees of freedom, that constitute the phenomena of the simulated universe. Eventually we'll recognize at a subconscious level that there is a pattern, and that subconscious awareness, to the extent it proves helpful in engendering reward conditions, will emerge into consciousness not only experientially, but also as a volitional state.

The irony may be that photorealism in fourspace can exist, but there is no way currently extant to photograph in 2/3D what the mind may perceive when stimulated sufficiently to simulate a photorealistic fourspace and its phenomena.

Re: Emerging Visualization Solution

Postby benb » Wed May 28, 2014 4:24 pm

This may have been brought up before--please let me know if it has--but the idea of a shape net could be useful. For example, if we're thinking of using a 3D retina to perceive fourspace, the shape net of that 3D retina occupies only 2D, and thus should be feasible for our "2D" retinas. Stereoscopically aligned shape nets could also aid with depth cuing and the like, FWIW.

Is this an old idea?

Re: Emerging Visualization Solution

Postby ICN5D » Wed May 28, 2014 5:41 pm

Benb wrote:Whether the brain is merely rotating the entire image or it is reassigning spatial values to each rod and cone, the point remains--the brain is good at identifying patterns and learning to anticipate them, even if that means giving up old ways of making sense of the world.


That's the most important thing to remember right there. Whatever the 4D visualizing tool turns out to be, it will be funny and exotic looking. As you stated, the brain is extremely good at pattern differentiation and recognition. I'm starting to appreciate the six squares view, especially in how it aids navigation in the 4D maze. One has only to become accustomed to how the visual input is arranged, how to make sense of it, and how to read it instinctively.


benb wrote:@ICN5D: The blue grid is a "mat" that the objects sit on. It can help with reference/orientation (i.e., which way is down). What are your thoughts on it?



The blue lines look like a 2D projection of the objects above the "ground". Are they shadows of some kind? They don't seem to be in a coordinate grid.
in search of combinatorial objects of finite extent
ICN5D
Pentonian
 
Posts: 1044
Joined: Mon Jul 28, 2008 4:25 am
Location: Orlando, FL

Re: Emerging Visualization Solution

Postby benb » Wed May 28, 2014 6:05 pm

@ICN5D: Are you looking at the blue mat/lines via a video I posted or as you explore the four-dimensional manifold yourself? I recommend the latter to get a feel for their characteristics. If you want them to look like a 2D coordinate grid, get some y-axis height above them and then turn down to look at them from above; that is what they will likely resemble. Of course, from other points they may appear different.

Re: Emerging Visualization Solution

Postby quickfur » Wed May 28, 2014 6:40 pm

benb wrote:@quickfur: Yes, a high degree of symmetry and roundness do sound like desirable traits in the constituent parts of a hypothetical fluid substance; I mentioned tesseracts as an arbitrary example, and your suggestions are preferable. While 3-spheres may seem an obvious choice, their perfection can make them difficult to render, calculate precisely, etc. Your other recommendations (600-, 120-, 24-cell) are now the ones to beat in my book.

On the contrary, 3-spheres are dead easy to handle. They have a fixed radius in every direction, making collision detection trivial; they can be represented with merely a position and a radius, since orientation doesn't matter; and every projection of them is a (2-)sphere, so you don't even need any calculations (just draw a 2-sphere!). About the only drawback is that they require solving quadratic equations if you're calculating ray intersections.
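That collision test can be spelled out in a few lines: two 3-spheres touch or overlap exactly when the distance between their centers is at most the sum of their radii, and comparing squared distances avoids the square root entirely. A minimal sketch (the function name is illustrative):

```python
def spheres_collide(p1, r1, p2, r2):
    """True iff two 3-spheres in 4-space touch or overlap.

    Compares the squared center distance against the squared sum of the
    radii, so no square root is needed."""
    d2 = sum((a - b) ** 2 for a, b in zip(p1, p2))
    return d2 <= (r1 + r2) ** 2

print(spheres_collide((0, 0, 0, 0), 1.0, (1, 1, 1, 1), 1.0))  # True: centers are exactly 2 apart
```

The same code works unchanged in any dimension, since only the center coordinates and radii enter the test.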

Discrete polytopes really only come into the equation when you want to integrate 3-spheres into what is essentially a polyhedral processing system (analogous to polygonal systems in 3D), or where your renderer can only handle polygons for whatever reason (e.g., the display hardware only supports polygons). The trouble with polytopes is that they require non-trivial representations (you have to somehow represent where all their vertices and bounding hyperplanes are), they are not symmetric w.r.t. arbitrary rotations (i.e., you have to keep track of orientation), and collision detection has time complexity linear in the number of bounding hyperplanes.

If you're talking about fluid simulation, 3-spheres are your best bet, because they conveniently represent point particles (i.e., take their centers as the positions of the point particles) that resist compression past a certain point (i.e., their non-zero radius). In terms of implementation, that's really just as easy as 4-vectors representing simulated points plus a radius (which can be constant for the purposes of simulating a homogeneous fluid), and various physical rules implemented as functions of position and radius. Using polytopes for this purpose only adds needless complications to the picture.
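As a sketch of what "4-vectors plus a radius plus physical rules" might look like in practice -- note that the force law below is an invented placeholder chosen only to repel on overlap and attract weakly at short range, not anything derived from real physics:

```python
# Toy 4D particle fluid: positions and velocities are 4-vectors, all
# particles share one radius R, and pairwise forces give cohesion plus
# resistance to compression. Illustrative constants throughout.

DIM = 4      # spatial dimensions
R = 0.5      # common particle radius (homogeneous fluid)
DT = 0.01    # timestep

def pair_force(d):
    """Placeholder force law: repel when overlapping (d < 2R),
    attract weakly a bit beyond, do nothing at long range."""
    if d < 2 * R:
        return 5.0 * (2 * R - d)    # resistance to compression
    if d < 4 * R:
        return -0.5 * (d - 2 * R)   # cohesion
    return 0.0

def step(pos, vel):
    """Advance all particles by one explicit-Euler timestep (in place)."""
    n = len(pos)
    acc = [[0.0] * DIM for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            delta = [pos[i][k] - pos[j][k] for k in range(DIM)]
            d = max(sum(x * x for x in delta) ** 0.5, 1e-9)
            f = pair_force(d)
            for k in range(DIM):                 # equal and opposite forces
                acc[i][k] += f * delta[k] / d
                acc[j][k] -= f * delta[k] / d
    for i in range(n):
        for k in range(DIM):
            vel[i][k] += acc[i][k] * DT
            pos[i][k] += vel[i][k] * DT
    return pos, vel
```

Two particles closer than 2R will drift apart over successive steps, while particles in the cohesion band drift together -- the minimal qualitative behaviour of a fluid blob.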

Depending on how you implement the simulation, you may not even need to care about the fact that these point particles are 3-spheres -- density, after all, can simply be a scalar field, so you could analyse the whole thing using calculus to express the constraints on density, in which case you can deal with an infinite number of point particles in one fell swoop.

Also, I am going to challenge the notion that retinas are 2D. I'll counter with the proposition that retinas are made of rods and cones--yes, they tend to be arranged in a planar fashion--and the brain interprets their discrete inputs as having specific correspondences to objective spatial orientations. But I don't think that, even with advanced techniques, we could "rewire" an optic nerve as a practical joke so that everything appeared upside down when the victim of the prank awoke--I don't think the neural processing is that location-specific. It's not like an operator connecting phone calls in the olden days.

For example, there are empirical experiments that have shown that if I wear glasses that invert the vertical axis of my visual input, initially things will seem upside-down, but eventually I will adjust and I won't even notice--things will appear right-side-up to me. Whether the brain is merely rotating the entire image or it is reassigning spatial values to each rod and cone, the point remains--the brain is good at identifying patterns and learning to anticipate them, even if that means giving up old ways of making sense of the world. But it's not rewiring my retina.

And here you touch on a point that I've brought up in the past -- there are two schools of thought on this. One is that our brain is hard-wired for 3D perception, that our 3D inclinations are inherent, and that "true" 4D vision is therefore impossible. According to this school of thought, our brain (probably the visual cortex part of it) and our eyes constitute parts of a single system designed and optimized for 3D vision, and this 3D specificity is built into the way the whole thing is physically put together, so it is impossible to change the way optic nerve signals are interpreted. Some evidence points to the fact that the bandwidth of the optic nerve is actually far smaller than that required to transmit all of the information received by the rods and cones to the brain, meaning that the processing of visual information already begins in the eyes, and the data is somehow encoded, compressed, or simplified before it even gets to the brain. Thus far, no one has been able to decode the correspondence between the image formed on the retina and the signals sent to the brain.

Another school of thought is that our 3D perception is entirely a construct of the brain: it receives a pair of disparate signals, one from each eye, which are similar yet differ in subtle ways, and as a result of the brain's tendency to prefer corresponding signals over mismatching signals, it invents the notion of 3D depth as a way to reconcile the disparity between the two signals. One oft-cited example of this flexibility of the brain is the one you gave: feed the brain inverted visual images (well, actually, the right way up -- our natural retina actually sees everything upside down), and it eventually remaps the interpretation of the signals to match its own notions of which way is up. According to this school of thought, therefore, if the brain could somehow be fed signals that correspond to a pair of 3D images, then at some point the brain will invent the notion of 4D depth in order to reconcile the binocular disparity between them.

The truth is probably somewhere between these two positions, which touches on a very interesting subject: how does our optical system know which rods and cones to map to which positions on the 2D extent of the perceived image?

It is well known, for example, that the density of rods and cones in the retina is not constant; they are most concentrated around the center of vision, and their density drops as you move outward to the periphery. This means that they cannot be arranged in a regular tessellation that can be easily mapped to some internal 2D array representation. How, then, does our optical system manage to perceive an undistorted image? Consider, for example, what happens if you look at a square. Its image on your retina covers a physical square region of rods and cones, but since the density of the rods and cones is non-constant, if the signals from them to the brain are placed on equal footing, then wouldn't you perceive the square as being distorted according to the density distribution of rods and cones in the retina? One may suggest that the brain compensates for this distortion by preferring the straight-edge interpretation of the square over the wavy-edge interpretation, but this doesn't explain why we can perceive the difference between straight lines and curved lines. Staring for long periods of time at a curved line does not eventually make us perceive it as a straight line, so obviously the brain somehow knows that the square's edges are straight, not wavy, in spite of the fact that the rods and cones that perceive its projected image are not evenly distributed in the retina. How does it know this? It's not as though there's a physical regular grid in the eye somewhere that the brain can use as an objective reference frame to decide exactly where the signal from a given cone or rod falls spatially!

So is there a physical basis to how our brain interprets the absolute position of the "pixels" (i.e. rods and cones) in the images from the optic nerves, or is it some kind of learned interpretation, where initially the brain is a blank slate, and in the process of learning to interact with the world around it and reconciling this with the visual consequences of this interaction, it begins to assign a spatial position to each received signal? Again, it seems the answer lies somewhere in between -- obviously the eyes in themselves only have limited processing capacity, and are unlikely to be able to reconcile visual images with other data known only to other parts of the brain, such as interpreting the experience of physical interactions and deducing the actual shapes of objects from these interactions. Yet given the evidence I cited above, it also appears that the brain isn't receiving the raw signals from every rod/cone, but some visual processing has already begun to take place prior to the images being transmitted to the brain via the optic nerve. So it can't be the case where the brain is getting the data as N input signals where N is the number of rods and cones, and it's building up its spatial interpretation of these signals from scratch.

All of this is very interesting because the answers will determine whether, and to what extent, we are equipped to be able to handle true 4D photorealism. Can the brain handle 3D input data? If so, in what form can it handle this, and how can this data be transmitted to it? Will it be able to construct a coherent interpretation of such stimulus as a 3D image? Will it be able to infer 4D depth from it?


[...] The irony may be that photorealism in fourspace can exist, but there is no way currently extant to photograph in 2/3D what the mind may perceive when stimulated sufficiently to simulate a photorealistic fourspace and its phenomena.

To some extent, I can already visualize somewhat accurately the appearance of a variety of 4D objects -- this is what I use to construct many of the so-called CRF polytopes (convex regular-faced polytopes, that is, the 4D analogue of Johnson solids, where the polytope's facets are Platonic, Archimedean, or Johnson solids) that I discovered. Often, I already see in my mind's eye what they look like before receiving any confirmation from actual renderings by my polytope viewer -- because that's what I use to guide my construction of the 4D models needed to make these renderings in the first place! And I can say that no matter how cleverly you render those 2D images, they are woefully inadequate for conveying the full appearance of the object in its native 4D space (as seen by a 3D retina). These images not only fail to convey the full 3D glory of the images projected on the 3D retina (esp. when the object is complex); they also frequently give you a false sense of assurance that you are perceiving 4D, when in fact you're unconsciously still interpreting them in a 3D-centric way.
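The two-stage rendering described here (4D scene -> 3D retina image -> 2D screen) amounts to applying the same perspective division twice, once per lost dimension. A hypothetical sketch -- `project` and `view_dist` are names I'm introducing for illustration, and a real polytope viewer of course does far more (hidden-surface removal, shading, etc.):

```python
def project(point, view_dist=3.0):
    """Perspective-project a point one dimension down: scale the first
    n-1 coordinates by the depth along the last axis, with the viewer
    sitting at +view_dist on that axis."""
    *rest, depth = point
    scale = view_dist / (view_dist - depth)
    return tuple(c * scale for c in rest)

# A tesseract vertex, taken down twice: 4D -> 3D retina image -> 2D screen.
v4 = (1, 1, 1, 1)
v3 = project(v4)   # what the 3D retina would receive
v2 = project(v3)   # what actually fits on a flat screen
print(v2)  # (3.0, 3.0)
```

The point of the paint analogy is visible right in this pipeline: `v2` is two projections removed from the original object, so the viewer must mentally undo both divisions to recover the 4D shape.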

As I've said before, true 4D visualization requires one to actively re-interpret elements in these images in a 4D-centric way, and to fill in the gaps in the mind's eye. One has to simultaneously utilize our innate 3D depth perception to accurately understand the 3D model the images depict, yet suppress the inclination to identify the 3D model thus constructed with the 4D object that it is meant to depict. Before identification can take place, one must infer 4D depth from the perceived 3D model. This can only happen in the mind's eye, and requires active effort because it's not instinctive, unlike our innate 3D depth inference, and it requires training and much deliberate practice. Expecting a set of images on the screen to somehow, magically, passively make you suddenly "see 4D", is a pipe dream. If the user does not actively "put the 4D into the image", then no amount of clever representations is going to make him perceive 4D, just as if you stare at the diagrams in an engineering blueprint without "putting the 3D into them", you will never truly "see" the 3D structure being thus depicted. All you will see is a flat diagram of the front of a house, say, and you will miss the fact that there is actually a protruding porch in front of the door. You may understand that the porch is there from looking at a side-view diagram of it, but until you incorporate that information into the front-view diagram -- by putting the 3D depth into the porch and the walls behind it -- you will not be able to perceive the 3D shape of the house from that viewpoint.

One interesting thought experiment along the same lines is to consider the distinction between the layers of paint on the canvas of a painting, vs. the object depicted by the painting. When you look at, say, the Mona Lisa, what you physically see is nothing more than a bunch of layers of variously-colored paint on a canvas. These layers of paint, physically speaking, are flat on the canvas, with basically no 3D depth. Yet from these flat layers of paint your mind is able to perceive a 3D person's face, with 3D curvatures. So are you looking at Mona Lisa, or are you looking at some arbitrary splotches of paint? Physically speaking, you're merely looking at splotches of paint. Mona Lisa isn't really there -- she's a figment of your brain's invention! Yet psychologically speaking, surely our "fictitious" perception of a person's face through these layers of paint matches what the painter saw in his mind's eye (or perhaps physical eyes) when he made this painting. He has somehow managed to transmit the 3D face he wishes to depict through the deficient medium of a 2D canvas to us, so that we also perceive the same 3D face. This feat is possible because our brain has "put the 3D in" -- it has added 3D depth where physically there is none.

Coming back to 4D visualization, before we can perceive 4D we first need to perceive the "paint". But now this paint is itself a 3D construct, which we can't, at this time, depict on screen. So the images on the screen are actually only a painting of the paint, not a depiction of the target 4D object! First we have to "put the 3D" into the images in order to see the 3D paint. But that only lets us "see the flat layers of paint in the Mona Lisa", so to speak; we have not yet seen Mona Lisa herself. We have to go another step and "put the 4D" into the 3D paint we perceive, before we have any chance of perceiving the 4D object.

So then your counter rests upon the question of whether it is possible to "transmit 3D paint" directly to the brain, or whether we have to use an additional layer of indirection, by "making a painting of the 3D paint" as 2D images, and then requiring the brain to infer the two layers of depth before it can recover the original 4D model. Whatever the answer may be, one thing is clear: once you have successfully transmitted the "3D paint" to the brain, by whatever means, there is still the need to add the 4D depth back into it. Otherwise, you have only managed to convey a 3D object (one that happens to look like a photograph of a 4D object, but nevertheless only a 3D object) to the user, and the user has not seen 4D at all!
