17 posts
• Page **1** of **1**

After almost three years of nothing new, I made a few additions to my raytracer and re-released it.

The additions this time are the ability to color the k-dimensional subfacets of a convex hull or Coxeter polytope. So, for example, here are some 3-dimensional polytopes with vertices colored pink, edges colored dark grey, and faces colored somewhat transparently.

And, here is a (small) stereo-pair of 24-cells. One of them is solidly colored. The other has pink 0-facets (vertices), dark grey 1-facets (edges), transparent yellow-orangish 2-facets (faces), and transparent green 3-facets (facets).

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

- bo198214
- Tetronian
**Posts:** 690 **Joined:** Tue Dec 06, 2005 11:03 pm **Location:** Berlin - Germany

No, it doesn't use Povray. I had kicked around the idea of just making a front-end to Povray, but it would have been much, much messier to code.

For example, take the quadratic surface x^2 + 5y^2 - z^2 + w^2 - 7v^2 + 5xy - 10xv - 5zw - 3wv - 20 = 0. Now, rotate it a bit parallel to the w-v plane, then parallel to the z-w plane. Now, what is its intersection with the 3-space w = v = 0? And, it'd be especially nice if you could export it to Povray as a quadratic surface instead of a mesh, since then it has infinite resolution.
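For the curious, that restriction can be computed mechanically on the matrix of the quadratic form. Here is a rough numpy sketch (not the raytracer's actual code; the rotation angles are chosen arbitrarily, and the variable order is x, y, z, w, v):

```python
import numpy as np

# Symmetric matrix for x^2 + 5y^2 - z^2 + w^2 - 7v^2
#   + 5xy - 10xv - 5zw - 3wv   (variable order x, y, z, w, v);
# each off-diagonal entry is half the cross-term coefficient.
A = np.array([
    [ 1.0, 2.5,  0.0,  0.0, -5.0],
    [ 2.5, 5.0,  0.0,  0.0,  0.0],
    [ 0.0, 0.0, -1.0, -2.5,  0.0],
    [ 0.0, 0.0, -2.5,  1.0, -1.5],
    [-5.0, 0.0,  0.0, -1.5, -7.0],
])
c = -20.0  # constant term

def plane_rotation(n, i, j, theta):
    """Rotation by theta in the coordinate plane spanned by axes i and j."""
    R = np.eye(n)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j] = -np.sin(theta)
    R[j, i] = np.sin(theta)
    return R

# Rotate a bit parallel to the w-v plane (axes 3,4), then the z-w plane (axes 2,3).
R = plane_rotation(5, 2, 3, 0.2) @ plane_rotation(5, 3, 4, 0.3)
A_rot = R @ A @ R.T          # points transform as p' = R p, so A' = R A R^T

# Intersection with the 3-space w = v = 0: keep only the x, y, z rows/columns.
A_slice = A_rot[:3, :3]
print(A_slice)               # quadratic form of the 3-d slice (plus constant c)
```

With zero rotation the slice is just the original form with w and v struck out, which is a quick sanity check on the construction.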

So, no, I don't use Povray. I started from the ground up with n-dimensional vectors and matrices.

For scenes higher than three-dimensional (retinas higher than two-dimensional), you have the choice of seeing the "slices" either tiled into the same image or output as a sequence of images. As an example, here is a tiled view of the 600-cell:

http://www.nklein.com/products/rt/rt2.5.2004.03.XX/600.png

My raytracer supports: halfspaces, cylinders (of which cubes and spheres are special cases), quadratic surfaces, convex hulls, regular polytopes defined by their Coxeter group, CSG union, CSG intersection, CSG complement, and extrusions of lower-dimensional shapes. I also support functional coloring of things, coloring something by raytracing into a different scene (or the same scene), transparency, reflectiveness, index-of-refraction, etc.

I'm not sure exactly what you meant by "what exactly does it do". But, I can explain how the "retina" is laid out, if you like.... After that, it's pretty much like every other raytracer. Send a ray from the "viewpoint", through a pixel in the "retina". Find the closest place that ray intersects an object in the scene. Now, check to see whether light from anywhere can get to that point by casting a ray from that point to each light. Add up the contributions of the light at that point. Also, if the surface is somewhat reflective, then cast a ray in the reflected direction to add into the color, too.
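The loop described above can be sketched in a few dozen lines. This is a hypothetical reconstruction in Python/numpy (not pat's actual code), with a minimal n-dimensional sphere as the only primitive so the sketch is self-contained; note that nothing in it mentions the dimension, every vector is just an n-component array:

```python
import numpy as np

class Sphere:
    """Minimal n-dimensional sphere for this sketch only (hypothetical;
    the real raytracer supports many more primitives)."""
    def __init__(self, center, radius, color, reflectivity=0.0):
        self.center = np.asarray(center, dtype=float)
        self.radius = radius
        self.color = np.asarray(color, dtype=float)
        self.reflectivity = reflectivity

    def intersect(self, o, d):
        # Quadratic in t for |o + t d - center| = radius; d assumed unit-length.
        oc = o - self.center
        b = oc @ d
        disc = b * b - (oc @ oc - self.radius ** 2)
        if disc < 0:
            return None
        t = -b - np.sqrt(disc)
        if t <= 1e-9:
            return None
        normal = (o + t * d - self.center) / self.radius
        return t, normal

def trace(scene, lights, origin, direction, depth=2):
    """Nearest hit, then shadow rays to each light, then an optional
    reflected ray -- the loop described in the post.  `lights` is a list
    of light positions (n-component arrays)."""
    best = None
    for obj in scene:
        hit = obj.intersect(origin, direction)
        if hit and (best is None or hit[0] < best[0]):
            best = (*hit, obj)
    if best is None:
        return np.zeros(3)                     # background color
    t, normal, obj = best
    point = origin + t * direction
    color = np.zeros(3)
    for light_pos in lights:
        to_light = light_pos - point
        dist = np.linalg.norm(to_light)
        to_light /= dist
        # Shadow ray: is anything between the point and the light?
        blocked = any(
            (h := o.intersect(point + 1e-6 * to_light, to_light)) and h[0] < dist
            for o in scene)
        if not blocked:
            color += obj.color * max(0.0, normal @ to_light)
    if obj.reflectivity > 0 and depth > 0:
        refl = direction - 2.0 * (direction @ normal) * normal
        color += obj.reflectivity * trace(scene, lights,
                                          point + 1e-6 * refl, refl, depth - 1)
    return color
```

A 4-d scene is then just spheres with 4-component centers and 4-component ray origins/directions; nothing else changes.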

The big math involved is in deciding which direction each retina pixel corresponds to. There's also lots of math involved in checking whether the ray intersects with a particular item or not. But, that math is all pretty easy. For all of the shapes above, it's either linear or quadratic in the distance t along the ray from the ray's origin.
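For instance, intersecting a ray with a general quadratic surface reduces to a single quadratic in t, whatever the dimension. A sketch (again hypothetical, not the raytracer's actual code), using the matrix form of the surface:

```python
import numpy as np

def quadric_hits(A, c, o, d):
    """Solve (o + t d)^T A (o + t d) + c = 0 for t.
    A is the symmetric matrix of the quadratic surface, in any dimension;
    returns the sorted real roots (0, 1, or 2 of them)."""
    a = d @ A @ d
    b = 2.0 * (o @ A @ d)
    e = o @ A @ o + c
    if abs(a) < 1e-12:                  # degenerate: the quadratic is linear in t
        return [] if abs(b) < 1e-12 else [-e / b]
    disc = b * b - 4.0 * a * e
    if disc < 0:
        return []
    r = np.sqrt(disc)
    return sorted([(-b - r) / (2.0 * a), (-b + r) / (2.0 * a)])

# A unit hypersphere in 4-d is the quadric x^T I x - 1 = 0.
print(quadric_hits(np.eye(4), -1.0,
                   np.array([-3.0, 0.0, 0.0, 0.0]),
                   np.array([1.0, 0.0, 0.0, 0.0])))
# hits at t = 2 and t = 4
```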

Last edited by pat on Sun Mar 25, 2007 8:08 pm, edited 1 time in total.

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

So if I understood that right, you can cut lots of 4d (curvy) (CSG) objects with a 3d hyperplane (and then export to povray?).

Is the raytracing 4d, i.e. do you assume a 4d observer and compute the rays in 4d?

- bo198214
- Tetronian
**Posts:** 690 **Joined:** Tue Dec 06, 2005 11:03 pm **Location:** Berlin - Germany

Not quite. No Povray. I did not use Povray. I didn't want to mess with intersections of n-dimensional shapes and three-dimensional hyperplanes.

The raytracing is n-dimensional. If you're doing a 3-dimensional scene, then the retina is 2-dimensional and the rays are 3-dimensional. If you're doing a 2-dimensional scene, then the retina is 1-dimensional and the rays are 2-dimensional. If the scene is 6-dimensional, the retina is 5-dimensional and the rays are 6-dimensional.

The raytracer is just as capable of doing 1-dimensional, 2-dimensional, 3-dimensional, 4-dimensional, 5-dimensional, etc. as high as you care to go. Of course, you end up with a staggeringly huge image hypercube if you go up too many dimensions. So, in practice, the most that I've done is a tiny 7-dimensional picture.

If you look through the slides from the presentation that I gave with the raytracer three years ago, you will see some 7-dimensional pics thrown in there. You'll also see a picture validating that the 4-dimensional kissing number is at least 24. I'm pondering trying to use E8 to do a similar picture showing that the kissing number in 8 dimensions is at least 240. But, to get an eight-dimensional scene with enough resolution to see all 240 different spheres is going to take some huge number of pixels. Maybe I should start with E6, though, where there are only 72 neighbors.

Actually, there is a more explicit reason that I could not use Povray. I cannot just take a slice of the scene and export it to Povray because some of the normals of surfaces (most of them, in fact) will not stay within the slice. I had, for a while (I'm not sure where it has gotten to), a picture where there were four hyperspheres visible. You were looking directly at the origin from along the negative x-axis. There was a blue sphere at +1 on the y-axis, a green sphere at +1 on the z-axis, and a red sphere at +1 on the w-axis. The fourth sphere was reflective, and positioned at +1 on the x-axis but also shifted a tiny bit on the negative w-axis so that you could see the red sphere reflected in it. There is no way that image could work if I were slicing the scene and exporting it to a 3-dimensional raytracer. It can only work if the rays are in the full number of dimensions required for the scene.

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

Well, it was easy enough to recreate:

Here, there is a blue unit sphere at { 0, 2, 0, 0 }, a green unit sphere at { 0, 0, 2, 0 }, and a red unit sphere at { 0, 0, 0, 2 }. The viewpoint is from along the negative x-axis looking toward the origin. So, +x is forward, +y is left, and +z is up. So, the rays cast into this slice have w = 0. But, there is a reflective unit sphere at { 2.0761204674887133, 0, 0, -0.3826834323650898 }. It's offset a bit in the negative w-direction, so rays that hit it bounce off in the +w direction. So, even though you cannot see the red sphere directly in the image anywhere, you can see it reflected in the center sphere.
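The numbers in that scene can be checked directly: a ray cast in the w = 0 slice that hits the offset mirror sphere really does bounce out of the slice. A small numpy verification (hypothetical setup, but using the sphere center quoted above):

```python
import numpy as np

# The reflective unit sphere, offset a bit in -w (center from the post).
center = np.array([2.0761204674887133, 0.0, 0.0, -0.3826834323650898])

# A ray in the w = 0 slice: from the viewpoint on the -x axis, straight ahead.
o = np.array([-10.0, 0.0, 0.0, 0.0])
d = np.array([1.0, 0.0, 0.0, 0.0])

# Quadratic in t for |o + t d - center| = 1 (d is unit-length).
oc = o - center
b = oc @ d
t = -b - np.sqrt(b * b - (oc @ oc - 1.0))
hit = o + t * d
n = hit - center                       # already unit-length: the radius is 1
refl = d - 2.0 * (d @ n) * n           # reflected direction

print(refl)    # the w-component is positive: the ray leaves the slice
```

The hit point itself still has w = 0, but the reflected direction picks up a +w component, which is exactly why the red sphere at w = 2 shows up in the mirror.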

I have to go get my son some dinner. Maybe later, I'll post a more complete rendering of that same scene.

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

Here's a more complete rendering of the scene as a stereo pair. In this image, +x is forward in each slice, +y is to the left in each slice, +z is toward the top in each slice, and +w is toward the top of the set of slices.

Once you get used to this presentation of the slices, you can see that the blue and green spheres are definitely centered with w = 0. You can see that the red sphere is definitely centered with w = 2. And, you can see that the reflective sphere is definitely centered somewhere in the low negative w's. The 7th frame down should be the w = 0 one. Actually, it might be w = +2/7. Oops. I should have made 13 or 15 slices instead of 14. Regardless, you can clearly see, within a slice, reflections of things that aren't in that slice.

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

pat wrote:I didn't want to mess with intersections of n-dimensional shapes and three-dimensional hyperplanes.

But then I understand nothing. A 4d scene is projected to a 3d retina by raytracing, as you said. But (a) you present 2d objects, so there must be a second projection from 3d to 2d, and usually exactly this is done by Povray. So what do you do to get from 4d to 2d? And (b) you present sequences of slices, so I guess you compute slices of 4-space via 3d hyperplanes. That quite puzzles me.

And somehow your pictures look like 3d-raytraced pictures; that's why in the beginning I was doubting whether you use n-dim raytracing.

- bo198214
- Tetronian
**Posts:** 690 **Joined:** Tue Dec 06, 2005 11:03 pm **Location:** Berlin - Germany

The slices in these images should be stacked one atop the other to make a three-dimensional volume. In the last image there, the three-dimensional retina is 73x73x14.

Certainly, I can't output an (n-1)-dimensional image for n > 3 without doing something funky. My raytracer can either tile the layers of the volume (as shown above) or output them as a sequence. In the pictures above, the ( x, y ) pixel of the z-th slice is adjacent in the retina to the ( x+1, y ), ( x-1, y ), ( x, y+1 ), and ( x, y-1 ) pixels of the z-th slice. But, it is also adjacent to the ( x, y ) pixel of the (z-1)-th and (z+1)-th slices.
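That tiling and adjacency can be written down concretely. A small sketch with hypothetical helper names (the vertical stacking matches the stereo-pair images above):

```python
def tile_vertically(x, y, z, slice_height):
    """Map retina pixel (x, y, z) to its spot in the tiled 2-d output
    image, with the z-slices stacked top to bottom."""
    return x, z * slice_height + y

def retina_neighbors(x, y, z):
    """The six neighbors of a 3-d retina pixel: four within its own
    slice, plus the same (x, y) in the slices before and after."""
    return [(x + 1, y, z), (x - 1, y, z),
            (x, y + 1, z), (x, y - 1, z),
            (x, y, z - 1), (x, y, z + 1)]
```

So in a 73x73x14 retina, pixel (3, 4) of slice 2 lands at image row 2*73 + 4 = 150, directly below the same pixel of slice 1.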

And, yes... my pictures look like 3d-raytraced pictures if you only consider each slice. But, if you look at it as the collection of slices, there is real information to be gained. For example, look at the red sphere above. The part of the red sphere that you can see in the fourth slice from the top is way at the edge of the sphere. The top slice is right at the center of the sphere.

There is no projection from 3d to 2d. There is just the chopping up of the 3-d retina into something that I can present in 2-d since I can't display a 3-d array of pixels let alone a 4-d or 5-d or 6-d one.

So, no.... I don't compute slices of 4-space with 3d hyperplanes. I have an (n-1)-dimensional retina for an n-dimensional scene. I then slice the (n-1)-dimensional retina into 2-dimensional slices.

Have you ever played 3-d tic-tac-toe?

I'm doing the same disassembly of the n-dimensional retina. Here is a composite image meant to mock up what the 3-dimensional retina above would look like. Here, the retina is 512x512x14.

That image doesn't do anything for me. I suppose it kinda shows the hypersphere shape a little bit, but you can hardly see anything. So, it's not so useful. It's a little more useful if we erase the sky, too... but that's not the way a retina would work.

And, as I mentioned, the out-of-slice reflection in the above image definitely shows that I'm not working with a slice at a time.

...And, just so there's no confusion, the rendering at the top of the page with the cube, the dodecahedron, and icosahedron is a 3-d scene.....

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

To beat on this some more, the conventional order that pixels are placed into raster images is LRTB... left-to-right, then top-to-bottom. (Some formats use LRBT... left-to-right, bottom-to-top).

In the vertical images above, I used LRTBAK.... left-to-right, top-to-bottom, and ana-to-kata. In the image of the 600-cell, I used LRAKTB.... left-to-right, ana-to-kata, top-to-bottom.
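In flat-index terms (hypothetical helpers, assuming the first-named direction varies fastest), the two orderings look like this:

```python
def flat_index_lrtbak(x, y, z, width, height):
    """LRTBAK: left-to-right (x fastest), then top-to-bottom (y),
    then ana-to-kata (z slowest) -- the vertical-stack images."""
    return (z * height + y) * width + x

def flat_index_lraktb(x, y, z, width, height, depth):
    """LRAKTB: left-to-right (x fastest), then ana-to-kata (z),
    then top-to-bottom (y slowest) -- the 600-cell image."""
    return (y * depth + z) * width + x
```

The only difference is which of y and z is the slowest-varying coordinate, i.e. whether ana/kata behaves like an extra up/down or sits inside each row of tiles.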

And, when I talk about slices, I am talking about slicing the final (n-1)-dimensional retina. I am *not* talking about slicing the scene.

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

Hm, I find it interesting that a slice through a raytraced picture looks like the raytraced picture of a slice through the original object.

Btw, to whom did you present/post your first slides about your raytracer? I think an nd raytracer was also mentioned here, as a diploma thesis or so. So how many nd raytracers are out there?

- bo198214
- Tetronian
**Posts:** 690 **Joined:** Tue Dec 06, 2005 11:03 pm **Location:** Berlin - Germany

I presented it as part of the University of Minnesota's Mathematics Department Undergraduate Colloquium series in Spring 2004.

I have a link on my raytracer page to Steve Hollasch's thesis (I just updated the link). His raytracer *only* works in four dimensions... and it only does spheres, tetrahedrons, and parallelepipeds.

And, it appears that rather than tile the slices all horizontally or all vertically, he just assembles the slices of the retina into an m×n grid. So, there's not so much indication that ana/kata are like another left/right or another up/down. *shrug*

Anyhow, that raytracer is four dimensional. There is one raytracer that raytraces in Minkowski space-time. Mine does any number of spatial dimensions (but not time-like dimensions). Every other raytracer that I've ever seen is three dimensional.

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

bo198214 wrote:Hm, I find it interesting that a slice through a raytraced picture looks like the raytraced picture of a slice through the original object.

Well, it shouldn't be too surprising though. A horizontal, pixel-high slice through a photograph is essentially intersecting the scene with a plane and then drawing that. However, like with my image of the spheres here, you couldn't actually render the picture if you only had the planar slice from the scene because some of your light and shadows and reflections would show stuff that is outside that plane.

All it's really saying is that the image of an n-dimensional hyperplanar slice through the viewpoint is an (n-1)-dimensional hyperplane in the retina.

In fact, if I used the same resolution for all directions of the retina, say 100 x 100 x 100, then any planar slice through it at any angle would essentially look like a raytrace of a three-dimensional slice of the scene.

Maybe I'll throw together some code to take a 100 x 100 x 100 cube and extract a slice at an arbitrary angle. Or, maybe I'll just download one of the datavis packages that can already do that kind of thing.
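Such a slicer is only a few lines of numpy. A nearest-neighbor sketch (hypothetical, far cruder than a real datavis package would be):

```python
import numpy as np

def extract_slice(volume, origin, u, v, out_w, out_h):
    """Sample a 2-d slice through a 3-d voxel cube, spanned by the
    direction vectors u and v through `origin`, using nearest-neighbor
    lookup.  Out-of-range samples stay zero."""
    out = np.zeros((out_h, out_w))
    for j in range(out_h):
        for i in range(out_w):
            p = origin + (i - out_w // 2) * u + (j - out_h // 2) * v
            idx = np.rint(p).astype(int)
            if all(0 <= idx[k] < volume.shape[k] for k in range(3)):
                out[j, i] = volume[tuple(idx)]
    return out
```

With u and v set to arbitrary (non-axis-aligned) unit vectors, this extracts exactly the oblique planar slice discussed above; with axis-aligned vectors it just reads off one of the tiled frames.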

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

pat wrote:bo198214 wrote:Hm, I find it interesting that a slice through a raytraced picture looks like the raytraced picture of a slice through the original object.

Well, it shouldn't be too surprising though. A horizontal, pixel-high slice through a photograph is essentially intersecting the scene with a plane and then drawing that.

I mean, I am not a raytracing expert, but from the lighter and darker facets of your objects I would guess there is a light source somewhere in 4d, probably not on the cut/slice, and you also use mirrors. So I had indeed expected some (possibly minor) differences in the lighting or so...

However, like with my image of the spheres here, you couldn't actually render the picture if you only had the planar slice from the scene because some of your light and shadows and reflections would show stuff that is outside that plane.

Exactly. Can you provide a picture that cannot be the raytraced image of a 3d scene? Or is one of the sphere pictures already such a picture?

- bo198214
- Tetronian
**Posts:** 690 **Joined:** Tue Dec 06, 2005 11:03 pm **Location:** Berlin - Germany

The sphere picture cannot be the rendering of a 3-d scene. Well, actually, you could make a scene that looks pretty much like it by putting the red ball behind the camera. Here, though, the red ball is in front of the camera and can only be seen via the out-of-the-current-three-space reflection.

And, yes, the lighting is also not in the current three-space in most of the frames. The lights in that scene are at: { -10, 5, 15, 0 } and { -15, -5, 5, 0 }. So, all of the light hitting things outside the center slice is coming from the center slice.

In this picture, you can see some effects that are also impossible in a 3-d scene. This is a 5-dimensional rendering of a human-ish figure in front of a mirror and under a spotlight.

If you look at the top-middle frame, you will see that in the mirror, there is a shadow in the reflection of something that is not reflected in the mirror. And, if you look on the ground up and to the left of the person, you will see shadows of his legs which do not appear in that frame. Also, in most of the frames, you will not see the person's reflection.

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

Here's an image specifically constructed so that shadows from things outside of the slice are visible within the slice.

There are two lights... one is in +w, the other in -w. Both point back toward the origin. There is a cube in +w and a sphere in -w. You can see the shadow of the cube in the lower frames and the shadow of the sphere in the upper frames. And, in the center frame, you can see the shadows of both, but neither object is in that frame.

- pat
- Tetronian
**Posts:** 563 **Joined:** Tue Dec 02, 2003 5:30 pm **Location:** Minneapolis, MN

Woof, the thing with the torso is quite surreal, ghost-like.

The missing reflection (perhaps because it is a superdimensional ghost or vampire) then already convinced me.

(Otherwise an n-dim raytracer would not make much sense; merely a 3d raytracer plus a slicer would be needed.)

- bo198214
- Tetronian
**Posts:** 690 **Joined:** Tue Dec 06, 2005 11:03 pm **Location:** Berlin - Germany
