Adanaxis Rendering Demo


Postby southa » Fri Aug 04, 2006 8:39 am

Hi all,

I've just released another demo of my 4D shooter. This one includes pseudo-4D texturing, using 2D textures mapped to each 2D facet and a 4D noise function to look up the palette values used in those textures. It's quite pretty (screenshot), and downloadable for Mac OS X, Windows and Linux.

The demo is scriptable to some extent. The 4D meshes are generated using an extrusion technique; it's limited, but it does the job for the moment. There's a guide to creating objects too. If you choose New Game using shift-enter, it'll expose the river demo. This is an attempt at showing that you can walk around a river in 4D, although it isn't great and does cosh your graphics card a bit.

Postby bo198214 » Fri Aug 04, 2006 10:22 am

Hey, cool sound! But I actually don't understand what I am doing in this game. For example, when I aim in the third angle, I would expect a transformation of the picture, not objects popping in and out.

So I don't know exactly what 4D view you are using. Let us take the 3D/2D analogue: a flight simulator projects 3D onto the 2D computer screen. Now suppose a 2D being living on that screen, located at the lower edge and playing the flight simulator. If I navigate up/down, the 2D being would see forward/backward motion; if I navigate right/left, it would see left/right motion. So each change of direction would result roughly in a shift of the space, not in objects popping in and out.

This would be different if you used the slice approach.

So maybe you can say a few words about the basic approach of the game. (Fortunately this demo works on my Windows XP system, where the Tesseract Trainer did not; it was missing text, if I remember right.)

Postby southa » Fri Aug 04, 2006 11:34 am

Glad it works! Previously there were a few dodgy pointers being passed to OpenGL :oops: I think I've got them all now.

The popping in and out is caused by the method I've used to deal with the hidden axis. There's a problem with rendering 4D environments that you don't have when rendering 4D objects: in any reasonable projection to a 2D screen that I can think of, you'll lose one axis. This I call the hidden axis; it's z in the demo (in eye coordinates), and the projection just throws it away. As you're looking in -w, an object right in front of you could be at (0,0,0,-1). Say there's another object at (0,0,10000,-1). Since the projection discards z, that appears at the same place and the same size as the first object, despite the fact that it's a long way away.

There are a few ways to deal with this. One is to combine z with the depth w so that the object at least appears a long way away. That would probably work from a gaming point of view, but it doesn't quite fit the pinhole camera analogy (because z is an axis on the retina and has nothing to do with depth).

The method I've chosen is to clip the z axis. The x and y axes are already clipped, because objects with large x or y (compared to w) are off the screen. The hidden axis clipping does the same for z. To prevent hard edges appearing where objects are clipped, the renderer fades them gradually as they move away from z=0.
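
In rough Python, the per-point projection looks something like this (a sketch of the idea only, not the actual renderer code; Z_FADE is a made-up constant):

Code:
# Eye space: observer at the origin looking along -w; z is the hidden axis.
Z_FADE = 0.5  # hypothetical half-width of the hidden axis clip, in z/w units

def project_point(x, y, z, w):
    """Project an eye-space 4D point to screen (sx, sy) plus a fade weight."""
    if w >= 0:
        return None                   # behind the observer, not visible
    sx, sy = x / -w, y / -w           # perspective divide onto the screen
    hidden = z / -w                   # hidden axis value, discarded below
    # Hidden axis clip: fade gradually rather than cutting hard at the edge
    fade = max(0.0, 1.0 - abs(hidden) / Z_FADE)
    return sx, sy, fade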

You see a bit of the transformation as you look away from something in the hidden axis. This is a rotation in zw, so w diminishes as it's rotated into z, and the object appears to get a bit closer before it fades away. You can see a zw rotation without the fading out by moving in z (E and D keys) whilst using the mouse (hold the button down and drag left/right) for a zw rotation in the opposite direction, so that you keep looking at the object.
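
For reference, a zw rotation is just the ordinary 2D rotation applied to the z and w coordinates. A sketch (again illustrative, not game code):

Code:
import math

def rotate_zw(point, angle):
    """Rotate a 4D point in the zw plane; x and y are untouched.
    As w is rotated into z, an object straight ahead drifts into the
    hidden axis and fades, which is the effect described above."""
    x, y, z, w = point
    c, s = math.cos(angle), math.sin(angle)
    return (x, y, c * z - s * w, s * z + c * w)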

Postby bo198214 » Fri Aug 04, 2006 12:45 pm

So if I've got you right, I can call it the "thick slice" approach:
you take the x,y,w slice at some z0, but overlay fading slices over some range around z0. Is that right?

The 3D/2D way of compensating for the missing dimension is different, though. It is how we paint pictures.
First, we don't take an orthogonal projection (let us again project along z), so objects farther away in the z direction appear smaller. Second, we paint nearer objects over farther objects.
This kind of picture-painting can also be done for 4D/3D: first project the landscape onto the retina of a tetronian's eye, then (in the resulting 3D) paint nearer objects over farther ones. If we use mesh objects (so that we can see the whole 3D space), this overpainting is a kind of cut-out. I had already made a picture of it somewhere ... here.

So if you use the (thick) slicing approach, how about putting a 4D torus into the scene? We are currently discussing this at Slicing toratopes with hyperplanes. With your demo, Marek14 and Rob would happily be freed from writing an interactive slicing program, because your demo would already be one! And even with differently inclined slicing hyperplanes!
And if you would add all the other toratopes as well ...

Postby southa » Fri Aug 04, 2006 1:30 pm

The thick slice idea is close. It's more like a 4D version of OpenGL's viewing frustum in 3D, i.e. a pyramid with the top chopped off. There's no actual slicing going on: each pixel is more like an integral along a path in z, which is approximated (very roughly) using 2D textures. I think perspective means that the fading slices would be of constant z/w, i.e. all going through the origin in eye coordinates.

I think I see how the painting idea works. So if there were three small objects at (0,0,-1,-10), (0,0,0,-10) and (0,0,1,-10), would one object overpaint the others? The Adanaxis renderer would blend the contributions from each object.

I don't think I can do a torus right now. Are we talking about the path swept out by a 3D sphere taken round in a circle, whilst being kept perpendicular to the direction of travel? The scripts can do a cube taken round in a circle.

Postby bo198214 » Fri Aug 04, 2006 2:25 pm

southa wrote: I think perspective means that the fading slices would be of constant z/w, i.e. all going through the origin in eye coordinates.

The projected x,y,w sizes are x/z, y/z and w/z, up to a constant factor (if the tetronian's eye is positioned at (0,0,0,0) looking in the z direction).
southa wrote: I think I see how the painting idea works. So if there were three small objects at (0,0,-1,-10), (0,0,0,-10) and (0,0,1,-10), would one object overpaint the others?

Hmm, the objects would all have to be in front of the tetronian's eye. The location of the eye becomes important when using perspective instead of parallel projection. So let the objects be located at (0,0,5,-10), (0,0,10,-10) and (0,0,15,-10), and the tetronian's eye at (0,0,-1,-10) looking in the z direction; then the first object would appear bigger and so completely overpaint the other two, just as you can't see anything with a hand in front of your eyes. If we instead located the eye at, for example, (0,0,-1,0), then the first object would be translated a bit more in the w direction than the second, and so the second object could become partly or completely visible, depending on size and z distance (it would be pulled out in the -w direction from the interior of the first).
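
A small numeric illustration of this in Python (my own sketch; the object radius is an assumed value):

Code:
EYE = (0.0, 0.0, -1.0, -10.0)    # tetronian's eye, looking along +z
RADIUS = 1.0                     # assumed radius of each small object

objects = [(0, 0, 5, -10), (0, 0, 10, -10), (0, 0, 15, -10)]
# Painter's algorithm: draw far-to-near in z, so near objects overpaint
for pos in sorted(objects, key=lambda p: p[2], reverse=True):
    x, y, z, w = (p - e for p, e in zip(pos, EYE))
    centre = (x / z, y / z, w / z)   # pinhole projection onto the x,y,w canvas
    print(f"paint object at {centre}, apparent radius {RADIUS / z:.3f}")
# All three project to centre (0.0, 0.0, 0.0); the nearest (z distance 6)
# is painted last and largest, overpainting the other two.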
southa wrote: The Adanaxis renderer would blend the contributions from each object.

Yes, the depth information is presented as fading, whereby one looks in the z and -z directions at the same time ...
southa wrote: I don't think I can do a torus right now. Are we talking about the path swept out by a 3D sphere taken round in a circle, whilst being kept perpendicular to the direction of travel? The scripts can do a cube taken round in a circle.

Hm, I think that's it. Marek? I mean, you can approximate the sphere by some polyhedron. Even with a cube it should then look similar to the pictures in the toratopes-with-hyperplanes thread. Can you verify this?

But I have some other questions. Adanaxis looks quite professional, so I am wondering whether you already have experience with (shooter) game development. It must also cost a lot of time to develop, so how do you earn your living?

Postby southa » Fri Aug 04, 2006 5:26 pm

I think we have a wire crossed here. I forgot to say that the observer is looking in the -w direction, so anything with negative w is in front of the observer. z is the hidden axis, which doesn't map to a specific position or size on the screen.

I have a bit of experience in games, but nothing that you'd call professional :) There's a bit of my old stuff here. I earn my keep through short-term software contracts, usually using embedded C. This radio has some of my software running in it. That gives me a bit of free time to work on my own stuff. The Adanaxis demo and Tesseract Trainer have taken maybe eight months, although the underlying code took a few years (on and off) to build up.

Postby bo198214 » Fri Aug 04, 2006 7:48 pm

southa wrote: I think we have a wire crossed here. I forgot to say that the observer is looking in the -w direction, so anything with negative w is in front of the observer. z is the hidden axis, which doesn't map to a specific position or size on the screen.


You didn't forget it; you mentioned it in a previous post, and I was well aware of it. But see, it's all the same as in 3D/2D (take my flight-simulator example). First you need the position of the tetronian's eye in 4D; this is the pilot's eye in the 3D example. Then you project onto its retina. This is done by a pinhole camera: geometrically, you span a 3D canvas in front of the eye, take the line between a 4D point and the eye, and the intersection with the 3D canvas is the projection of that point. Taking all 4D points, you get the projected 3D scene. That's why I said previously that I would locate the tetronian's eye at (0,0,-1,-10) looking in the z direction, because then we can use x,y,w as that canvas (spanned at distance 1 in front of the eye). In this way we replace your 4D parallel projection by a 4D perspective projection.
And now we look at this projected 3D scene from some point (i.e. from (0,0,0) looking in the -w direction, in Adanaxis). In the flight simulator this is equivalent to the 2D being looking at the projected 2D screen; it has to sit somewhere and look in some direction. At this stage we either have 3D equipment to look at the 3D scene (which would really be something if you could build it into Adanaxis, with shutter glasses or so), or we again project it perspectively down to a 2D view.
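
As a Python sketch (my own toy code with illustrative eye positions, certainly not how Adanaxis implements anything):

Code:
def pinhole_4d_to_3d(point, eye=(0.0, 0.0, -1.0, -10.0)):
    """Project a 4D point onto the x,y,w canvas spanned one unit in front
    of the 4D eye, which looks along +z."""
    x, y, z, w = (p - e for p, e in zip(point, eye))
    return (x / z, y / z, w / z)      # valid for points in front (z > 0)

def pinhole_3d_to_2d(point, eye=(0.0, 0.0, 1.0)):
    """Project the 3D canvas to 2D; the canvas axes are (x, y, w) and
    this second, independent eye looks along -w."""
    x, y, w = (p - e for p, e in zip(point, eye))
    return (x / -w, y / -w)           # valid for points with w < 0

screen = pinhole_3d_to_2d(pinhole_4d_to_3d((1.0, 2.0, 5.0, -14.0)))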

Postby southa » Sat Aug 05, 2006 11:20 am

Ah, so you're talking about a double projection - 4D to 3D then 3D to 2D? Does this hide things from the viewer that the tetronian can see? I can imagine things on the tetronian's retina that get hidden because they're behind other objects in the 3D to 2D projection. In terms of this scheme, the Adanaxis 3D to 2D transformation is a non-perspective blend without depth hiding.

Postby Nick » Sat Aug 05, 2006 11:41 am

OK, I don't get it either. Can you put it simply? Like, your screen has x and y, there's a depth of w, and the hidden axis (controlled by ______ keys) is z?

It's very confusing :?
I am the Nick formerly known as irockyou.
postcount++;
"All evidence of truth comes only from the senses" - Friedrich Nietzsche


Postby bo198214 » Sat Aug 05, 2006 12:00 pm

southa wrote:Ah, so you're talking about a double projection - 4D to 3D then 3D to 2D?

Yes. But you also do a double projection, i.e. a parallel projection to 3D (without occlusion culling, but with a type of depth cueing, meaning that near objects are brighter than far objects) and then a perspective (I guess) projection to 2D.
southa wrote: Does this hide things from the viewer that the tetronian can see?

It depends on how transparent your 3D objects are. For example, if you use wireframe you can see all objects in 3D space. I never experimented with real transparency of the 3D objects (i.e. making them like glass or steam, so that the brightness of the objects behind is lessened a bit), though that might be an option too. A tetronian can see all those objects in the projected 3D space at once, like we can see a picture at once.
southa wrote: In terms of this scheme, the Adanaxis 3D to 2D transformation is a non-perspective blend without depth hiding.

You mean the 4D-to-2D transformation? The 3D-to-2D one is actually perspective as far as I can see (objects become bigger as they move towards you).

Postby southa » Sat Aug 05, 2006 1:21 pm

If you divide the Adanaxis projection into two (which you can), you first have a perspective transformation from 4D to 3D. This corresponds to light rays passing through the tetronian's eye and on to its retina. It's this transformation that makes objects appear smaller as they get further away. There's an approximation of occlusion in this transformation, as there's a depth sort on distance-from-eye that determines the rendering order for the objects. There's no fading in this step.

The second transformation is from 3D to 2D and projects the retina onto the screen. x and y map directly to the screen, and entire lines in z are mapped to single pixels. There is no perspective or occlusion in this one, just a bit of colour hinting to show where things are in z. It's this stage that introduces the fading, because pixels towards the edges of the retina in z make less of a contribution; this is done purely to make the display look nice. There are some pictures in Projection and Rendering at Mushware.
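
Sketching the two stages together in Python (an illustration of the idea only, not the engine code; the constants are invented):

Code:
import math

Z_FADE = 0.5   # assumed fade half-width along the retina z axis
SCALE = 100    # assumed pixels per retina unit

def render(objects, eye):
    """objects: list of (world_position_4d, brightness) pairs."""
    framebuffer = {}
    # Depth sort: draw back to front by 4D distance from the eye, which
    # approximates occlusion in the 4D to 3D stage
    ordered = sorted(objects, key=lambda o: math.dist(o[0], eye), reverse=True)
    for pos, brightness in ordered:
        x, y, z, w = (p - e for p, e in zip(pos, eye))
        if w >= 0:
            continue                             # behind the observer
        rx, ry, rz = x / -w, y / -w, z / -w      # stage one: retina coordinates
        fade = max(0.0, 1.0 - abs(rz) / Z_FADE)  # stage two: drop z, fade by it
        pixel = (round(rx * SCALE), round(ry * SCALE))
        old = framebuffer.get(pixel, 0.0)
        framebuffer[pixel] = old * (1 - fade) + brightness * fade
    return framebuffer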

Postby bo198214 » Sun Aug 06, 2006 4:42 pm

OK, I see it now. Two things were a bit confusing:
1. Shooter-like games and also flight simulators always use perspective projection for 3D->2D. For example, if you have 3D equipment (and that would indeed be the best approach for looking into 4D, instead of tearing it down by two dimensions), it also uses a perspective projection for each of your eyes. So if you adapted 3D equipment to your game, the appearance would change completely (in particular, no more popping in and out of objects, because the HAC function wouldn't be used at all).
2. I had supposed that the forward direction would be plain old z, and that's why I assumed w to be the 3rd coordinate of the 3D projected space.

So when does the 4D depth sorting of the objects come into play? Is it the rendering order for the 2D drawing?

Postby southa » Mon Aug 07, 2006 9:51 am

I think you still need HAC (or something else) to solve the (0,0,10000,-1) problem mentioned earlier. For a 3D stereo image, I would render two images just as they are right now, but with viewpoints displaced slightly in x. Seems to work in Tesseract Trainer.
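
Sketched out, that's just (with a made-up eye separation, and no claim that this is how Tesseract Trainer implements it):

Code:
EYE_SEPARATION = 0.06   # hypothetical interocular distance, in world units

def stereo_eyes(eye):
    """Two viewpoints displaced along x, one per rendered image."""
    x, y, z, w = eye
    return ((x - EYE_SEPARATION / 2, y, z, w),
            (x + EYE_SEPARATION / 2, y, z, w))

left_eye, right_eye = stereo_eyes((0.0, 0.0, 0.0, 0.0))
# render the scene once from each eye and present one image to each eye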

Yes, -w is the forward direction. I wanted to keep the retina axes together but it's an arbitrary choice. Incidentally I never did work out whether there was a 4D analogue to left and right-handed coordinates.

The depth sorting is done in w, so that nearer objects appear to be in front of distant ones. Yep, it does determine the rendering order in 2D.

Postby bo198214 » Mon Aug 07, 2006 11:17 am

Hey, if you want to use 3D equipment properly, you must use perspective projection. Otherwise you get something that is somehow stereo, but it's not clear what it is. If we then use perspective projection, the next question is how to deal with the opaqueness of the 3D space (I mean, our handicap is that we cannot see the 3D space transparently).

Wireframe yields the most transparent result, but it's only wireframe. For non-wireframe we can use HAC over the eye ray (though I really have no experience of how suitable HAC is for a stereo image), or slight opaqueness of the material points (i.e. not taking an integral over the eye ray, but lessening the intensity of points behind by the points in front), which I can imagine (it would look like the scene is built of glass) and would guess is more useful than HAC. Non-wireframe can, besides, come in two flavours: a) only the 2D facets are material, b) the interior of polyhedra is also material.

Perhaps one should also take a look at software for professions that really rely on seeing the whole 3D space; engineering (CAD) and medical imaging (CT) come to mind.

PS: I don't understand how the depth order makes a difference if you use HAC. Isn't it then simply the sum/integral that is taken, which is independent of the order?

Postby southa » Mon Aug 07, 2006 12:06 pm

There's already a perspective projection in place, in the 4D to 3D transformation. In many ways discard-z is simple: it's like throwing away the z coordinate and viewing the resulting (x, y, w) coordinates as a normal 3D scene.

bo198214 wrote: PS: I don't understand how the depth order makes a difference if you use HAC. Isn't it then simply the sum/integral that is taken, which is independent of the order?

Depth sorting is done using w - effectively the distance between the viewer and the object. HAC applies to z, the hidden axis.

Postby bo198214 » Mon Aug 07, 2006 12:41 pm

southa wrote: There's already a perspective projection in place

So?

southa wrote: In many ways discard-z is simple: it's like throwing away the z coordinate and viewing the resulting (x, y, w) coordinates as a normal 3D scene.

A few posts back you argued that throwing away z is *not* the 4D->3D transformation, so which is it now? To look at a 3D scene in stereo you have to use perspective projection, that's for sure. But you told me that 4D->3D would be perspective. It is not the same to first parallel-project (discarding one axis) and then perspective-project, compared to first perspective-projecting and then parallel-projecting.

southa wrote: Depth sorting is done using w - effectively the distance between the viewer and the object. HAC applies to z, the hidden axis.


Yes, but what influence does a sorting have on a HAC projection? None, I would guess. But maybe I don't understand the HAC projection at all. I thought that the brightness of each point on the screen is computed as the weighted sum of the brightness of the points on the corresponding ray to the eye (or, in your case, the ray perpendicular to the screen at that point). But the weighted sum does not depend on any ordering of the objects. That's what I don't get.

Postby southa » Mon Aug 07, 2006 2:11 pm

bo198214 wrote: It is not the same to first parallel-project (discarding one axis) and then perspective-project, compared to first perspective-projecting and then parallel-projecting.

Isn't it?

Perspective then parallel:
(x,y,z,w) -> (x/w, y/w, z/w) -> (x/w, y/w)

Parallel then perspective:
(x, y, z, w) -> (x, y, w) -> (x/w, y/w)
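
As a quick numeric check (a throwaway Python sketch using the axis conventions above):

Code:
def persp_then_parallel(x, y, z, w):
    x3, y3, z3 = x / w, y / w, z / w   # perspective 4D -> 3D: divide by w
    return (x3, y3)                    # parallel 3D -> 2D: drop z

def parallel_then_persp(x, y, z, w):
    x3, y3, w3 = x, y, w               # parallel 4D -> 3D: drop z
    return (x3 / w3, y3 / w3)          # perspective 3D -> 2D: divide by w

# both orders give (0.25, 0.5)
assert persp_then_parallel(1.0, 2.0, 3.0, 4.0) == parallel_then_persp(1.0, 2.0, 3.0, 4.0)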

bo198214 wrote: I thought that the brightness of each point on the screen is computed as the weighted sum of the brightness of the points on the corresponding ray to the eye (or, in your case, the ray perpendicular to the screen at that point).

Ah, that's not it. It's not integrating along the path of the light ray in 4D; it's integrating along a path through the retina. Along the ray in 4D, normal occlusion operates, and that's what the depth sorting (approximately) provides. The HAC operates on the retina, and combines all of the pixels on a line in the retina into one screen pixel.

Postby bo198214 » Mon Aug 07, 2006 3:47 pm

southa wrote:Perspective then parallel:
(x,y,z,w) -> (x/w, y/w, z/w) -> (x/w, y/w)

Parallel then perspective:
(x, y, z, w) -> (x, y, w) -> (x/w, y/w)

There are two projection axes, in this case z and w. I assumed the projections were done along the same axes in both cases (the bracketed axis is the one projected along):
perspective -> parallel: (x,y,z,w) ->[w] (x/w, y/w, z/w) ->[z] (x/w, y/w)
parallel -> perspective: (x,y,z,w) ->[w] (x, y, z) ->[z] (x/z, y/z)
southa wrote: Along the ray in 4D, normal occlusion operates, and that's what the depth sorting (approximately) provides.

Depth sorting does nothing at this point. You said it is used in the 3D->2D projection.
southa wrote: The HAC operates on the retina, and combines all of the pixels on a line in the retina into one screen pixel.

The (perspective) retina is projected to the screen by HAC; I don't see any ordering of objects used there. Each point of the screen is just painted once.

Postby southa » Tue Aug 08, 2006 10:47 am

We seem to be getting further apart; I disagree with all of that! :wink:

I can see that some sort of glitzy animated visualisation is in order. Leave it with me.

Postby bo198214 » Tue Aug 08, 2006 11:58 am

For some reason an edit of my previous post (which qualified it quite a bit) was lost. But I guess it's too late now anyway.

Can you nevertheless explain how HAC and ordering go together? I haven't understood it yet.

Postby bo198214 » Thu Aug 10, 2006 11:40 am

Another try. You are right about exchanging 4D->3D and 3D->2D. So I was wondering a bit why you rejected my original view of 4D->3D being a parallel projection with HAC and 3D->2D being the usual perspective projection. That way there is no problem with extending to stereo viewing, and it's quite a valid 4D->3D projection.

Maybe I have argued enough with you; I would really like to know what other people think about the Adanaxis demo (Appeal! Appeal!).

Postby southa » Fri Aug 11, 2006 2:31 pm

Sorry, been a bit busy. I use perspective for 4D to 3D because it's a model of the 'real' 4D world. It models the path of light rays onto the tetronian's retina, and that ray model gives a fairly clear way to determine what occludes what. The depth sorting serves only to approximate this occlusion.

The 3D to 2D step exists because the viewer doesn't have a 4D display device. There is no concept of depth, perspective or occlusion in this projection. I wouldn't want a perspective/depth projection at this stage, because the depth in that projection wouldn't correspond to depth in the 'real' world. This projection discards one axis, and HAC is applied to limit the range of view in that axis.

The demo did get one review at MacGameFiles.

Postby bo198214 » Fri Aug 11, 2006 3:12 pm

*Sigh* I mean, you proved before that this is the same as what I described (i.e. that 4D->3D and 3D->2D can be swapped), and you still haven't shown where the order makes a difference. :evil: And anyway, I don't understand why you model the "real" 4D world but decline to model the real 3D world (i.e. by using perspective projection for 3D->2D).

Postby southa » Sun Aug 13, 2006 9:56 am

There is no real 3D world to model: the tetronian's retina is wired straight to its brain. Nothing looks further away because it's displaced in the retina axes x, y and z. That's what I've tried to reproduce.

Postby bo198214 » Sun Aug 13, 2006 6:39 pm

The 3D world *is* the tetronian's retina. The question is then how to present this to a 3D being, and for that there are established techniques available. *gives up asking about the role of the rendering order, since it has gone unanswered three times*

Postby southa » Sun Aug 13, 2006 9:25 pm

bo198214 wrote: The 3D world *is* the tetronian's retina. The question is then how to present this to a 3D being, and for that there are established techniques available.

So this looks like the nub of our disagreement. You appear to treat the retina as a 3D object in a 3D world that's viewed by an external observer. I won't do that in my application because I believe it would create a false impression of depth, i.e. one unrelated to the distance from 4D viewer to 4D object.
bo198214 wrote: *gives up asking about the role of the rendering order, since it has gone unanswered three times*

Objects are rendered in back to front order based on their 4D distance from the 4D observer. That's it.

Postby quickfur » Sun Aug 13, 2006 11:04 pm

southa wrote:
bo198214 wrote: The 3D world *is* the tetronian's retina. The question is then how to present this to a 3D being, and for that there are established techniques available.

So this looks like the nub of our disagreement. You appear to treat the retina as a 3D object in a 3D world that's viewed by an external observer. I won't do that in my application because I believe it would create a false impression of depth, i.e. one unrelated to the distance from 4D viewer to 4D object.

Projecting from 4D to 3D is the easy step. I think we all agree that we want perspective projection from 4D to 3D with occlusion (e.g., shouldn't see through walls). This projection is done from the 4D viewpoint.

The hard part is how to represent the resulting 3D image on the 2D screen. The main problem is that we don't have 3D retinas, and there is no way of directly conveying the information in the 3D retina to our brains. The solution I prefer is to introduce a second viewpoint, unrelated to the 4D viewpoint, whose sole purpose is to allow us poor 3D beings to look into the contents of the 3D retina. We project the contents of the retina onto the screen using this second viewpoint; it matters little whether it's perspective or orthogonal, but it must be independent of the 4D viewpoint.

Why? Because if you force the second projection to happen orthogonally to the 4D viewpoint, you get a lot of illusions. For example, if you look down a cubical passage in 4D, the projected image has the 4 side walls as frustums (analogous to the side walls of the cube-within-a-cube projection of the 4-cube). But now you project to 2D parallel to the coordinate axes, and suddenly you can't see 2 of the frustums anymore because their opposite edges coincide. What you really want is to project to 2D using another viewpoint (which resides in 3D), that looks at the retina from an angle, so that you can see the 6 frustum volumes in the 3D retina separately. The cubical wall at the end of the corridor will be much more obviously a cube; if you had projected from an orthogonal viewpoint, it simply becomes a square, and you have absolutely no information about whether it's a square, a cube, or a cuboid in the 4D view.
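
A sketch of that second projection (my own illustration; the angles and distance are arbitrary):

Code:
import math

def oblique_retina_to_screen(rx, ry, rz, yaw=0.4, pitch=0.3, dist=4.0):
    """View the 3D retina from an oblique 3D viewpoint and then
    perspective-project. There is no occlusion here, so frustum volumes
    that an axis-aligned view would collapse stay visibly separate."""
    # rotate the retina about its y axis (yaw)
    x = rx * math.cos(yaw) + rz * math.sin(yaw)
    z = -rx * math.sin(yaw) + rz * math.cos(yaw)
    # then about its x axis (pitch)
    y = ry * math.cos(pitch) - z * math.sin(pitch)
    z = ry * math.sin(pitch) + z * math.cos(pitch)
    d = z + dist                 # camera sits `dist` in front of the retina
    return (x / d, y / d)        # perspective divide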

Also, the projection from 3D to 2D must not have any occlusion, because otherwise you lose the most important parts of the 3D image. Think about when you play a 3D shooter and you're looking at the screen. The part of the screen you're focusing on is around the center. Now if you project to 1D (for the benefit of a hypothetical 2D player), and you have occlusion, then the poor 2D player only ever sees the peripheral edges of the screen and has no idea what's going on in the most important part of the view, which lies at the center of the screen. So in projecting from the 3D retina to the 2D screen, you must make things transparent so that you can see what's in the central area of the 3D retina.

Take a look at the 4D maze game sometime. That's the kind of projection that I'm talking about. Now, granted, it uses edge-rendering, which really is suboptimal, because edges barely begin to convey the volumes that are needed to fully render a 4D scene. (Think about how 3D rendering works: the surfaces of 3D objects are rendered as 2D polygonal regions. Similarly, 4D rendering draws the surfaces of 4D objects as 3D volumes. Using lines to render 4D is like a 3D shooter that only draws the vertices of the polygons and expects you to interpolate where the edges are.) But at least it gives you a sensible projection. What's really needed is to use transparent surfaces instead of lines, so that you can see the shapes of the projected volumes.

Postby bo198214 » Mon Aug 14, 2006 8:26 am

That's what I have been saying all along! I agree completely with quickfur, though with some comments:

quickfur wrote:Projecting from 4D to 3D is the easy step. I think we all agree that we want perspective projection from 4D to 3D with occlusion (e.g., shouldn't see through walls). This projection is done from the 4D viewpoint.

1. For certain purposes it may also be appropriate to use cavalier perspective (which, funnily, is not a perspective projection) or other projections used in 3D technical drawing, carried over to 4D. For example, if we want to construct a 4D machine *lol*.
2. 4D occlusion culling for wireframe is difficult, though; I programmed it a week ago and have never seen it in any 4D application. (Of course I am not talking about occlusion culling for viewing a single polytope, where it is quite easy using the normals of the facets.)

quickfur wrote: What's really needed is to use transparent surfaces instead of lines, so that you can see the shapes of the projected volumes.

Has this already been done anywhere? I mean, it is *one* idea, though the only (non-wireframe) one that came to my mind.

Postby southa » Mon Aug 14, 2006 9:05 am

quickfur wrote: Why? Because if you force the second projection to happen orthogonally to the 4D viewpoint, you get a lot of illusions. For example, if you look down a cubical passage in 4D, the projected image has the 4 side walls as frustums (analogous to the side walls of the cube-within-a-cube projection of the 4-cube). But now you project to 2D parallel to the coordinate axes, and suddenly you can't see 2 of the frustums anymore because their opposite edges coincide. What you really want is to project to 2D using another viewpoint (which resides in 3D), that looks at the retina from an angle, so that you can see the 6 frustum volumes in the 3D retina separately. The cubical wall at the end of the corridor will be much more obviously a cube; if you had projected from an orthogonal viewpoint, it simply becomes a square, and you have absolutely no information about whether it's a square, a cube, or a cuboid in the 4D view.

That's valid if you constrain the orientation of the viewer, but if you allow the viewer to rotate by any angle in all six planes, they can effectively move your viewpoint for the 3D retina to wherever they like. So no matter what viewpoint you pick, the tetronian can reorientate themselves so that the illusions reappear. Put another way, there are viewer rotations that have the same effect as the arrow keys in the 4D Maze Game - you can recreate the illusion view with those too.
