Inner and outer products in Clifford Algebra...


Postby Paul » Wed Nov 10, 2004 2:55 pm

Hello all,

Perhaps you can field this question, Pat... one of the reasons I'm asking it here, rather than the Math Message Board, is that no one on the Math Message Board seems much interested in, or knowledgeable about, Clifford algebra (also called geometric algebra).

I have a couple of David Hestenes books... "New Foundations...", and "Clifford Algebra to...", and a few other books about Clifford Algebra... and I've also seen sources on the Internet.

Perhaps it's a matter of understanding my background better. I have a BS in Computer Science. I wasn't a math major, although many said I should have been. So, much of my mathematical education is self-taught... and, for instance, I might know something traditionally taught in grad school, but not know something that every undergrad math major would know...

Basically, I'm hoping someone can show me how to form inner and outer products of arbitrary multivectors. Unfortunately, I don't seem to be able to take even David Hestenes's explanations and apply them. I guess I need someone to show me specific examples with arbitrary multivectors.

Also... I know that some of the rules... like those concerning commutativity (I believe?) of the wedge product... are spelled out specifically for bivectors... but what happens if one is computing with trivectors? In other words, a generalization... in some easy-to-understand manner... is what I need.

If someone could show me how to do all the calculations by hand, that's what I'm looking for. Specifically, by hand... such that I can apply it to arbitrary inner and outer products of multivectors.


Also... recently on the Math Message Board, http://www.math2.org/cgi-bin/mmb/server?action=read&msg=31392, Mark Tiefenbruck was helping me with understanding more about parametric equations. If you've read paragraph 3 above, perhaps it's easier to understand how I might have questions like this... along with questions about multi-vectors.

Anyway... The form of representing an nD-space that Mark pointed out seems like it could be very useful... except that the number of parameter variables might become quite large, and then trying to solve such systems of equations might be difficult. However, simply determining whether a particular point satisfies the equations still shouldn't be too hard.

What I'm wondering here is... the multivector represents an arbitrarily shaped region (this seems to depend on who you read...) that doesn't seem to necessarily have any fixed position in nD-space. (If I understand correctly?) So, for instance, the wedge product of multivectors is the nD-hypervolume of an arbitrarily shaped region of nD-space... correct? But a 1-vector determination of it is similar to the equation Mark provided... correct? Except Mark's formula would define both the shape and the placement of the nD-space... correct?

Perhaps related to the preceding paragraph is a much more general issue... Sometimes I get a bit confused with vectors about whether or not we're assuming that the tail of a vector is at the origin... I believe that's also known as a position vector. Again, maybe this is something a bit off... perhaps because I wasn't a math major... but can someone explain more about this? Is it a matter of loose convention, or are there hard-and-fast rules governing when a vector is assumed to have its tail at the origin?

Also... the spinor is a different interpretation of the bivector... correct? Or, can spinors be of any dimension? That is, can a spinor be another interpretation of a trivector? As best as I can understand, a spinor is called a rotation operation... so, it's just considered to rotate geometric objects... is this correct?
Paul
Trionian
 
Posts: 74
Joined: Sat Sep 04, 2004 10:56 pm

Postby pat » Wed Nov 10, 2004 9:56 pm

Well, let's start out by exploring what you're looking for with Inner products. For reference, you may want to peruse MathWorld's definition of Inner Product, but I'll summarize it here.

Rather than use the bracket notation, I'm going to use '.' to represent an inner product. If you've got a scalar { a } from a field R and vectors { u, v, w } from a vector space V, then an inner product is a symmetric, bilinear, positive-definite function from VxV to R. That is to say, the following hold:
  1. u.v = v.u
  2. (u+v).w = (u.w) + (v.w)
  3. (au).v = a (u.v)
  4. (u.u) greater than zero unless u is the zero vector.
  5. (u.u) equals zero if u is the zero vector.

Point 1 is the symmetric part. Points 2 and 3 make it linear (and because it's symmetric, it also happens to be bilinear). Points 4 and 5 make it positive-definite.

All of that said, when we're dealing with vectors represented in an orthonormal basis, we almost always use the dot-product for the inner product. An orthonormal basis is a basis for the vector space such that each basis vector is unit length (the -normal part) and each basis vector is perpendicular to each other basis vector (the ortho- part). The dot-product of two such vectors (of the same dimension) is the sum of the coordinate-wise products. So, if the vectors are [ a, b, c ] and [ d, e, f ], then the dot-product would be: ad + be + cf.
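To make the recipe concrete, here's a minimal Python sketch of the coordinate-wise dot product described above (the function name is mine, not from the thread):

```python
# Dot product of two vectors in an orthonormal basis:
# the sum of the coordinate-wise products.
def dot(u, v):
    assert len(u) == len(v), "vectors must have the same dimension"
    return sum(ui * vi for ui, vi in zip(u, v))

print(dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```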

Most undergrads take Linear Algebra at some point before getting into Algebra (in the Abstract, Clifford, etc. sense). In undergraduate Linear Algebra, "inner product" and "dot-product" are often used interchangeably. Technically, the "dot-product" is just one example of an "inner product". But, most undergraduate Linear Algebra classes never concern themselves with any other inner products.

For Euclidean geometry, the (usual/canonical/almost-always-used) coordinate-representation of a point is an example of a vector represented in an orthonormal basis. The basis vectors are the unit vectors in each of the dimensions... the unit vector in the x-direction, the unit vector in the y-direction, etc.

Now, all of that said, it's perfectly reasonable to try to formulate an inner product on multivectors. Given a scalar { a } from a field R and multivectors { u, v, w } from a Clifford Algebra V, we can try to formulate a symmetric, bilinear, positive-definite function from VxV to R. For example, let's take our Clifford Algebra Cl<sub>3</sub>. Our multivectors are made up of a 1-dimensional scalar portion, a 3-dimensional vector portion, a 3-dimensional bivector portion, and a 1-dimensional trivector portion. One way we could define an inner product is to sum up the products of corresponding coefficients. So, if one of our multivectors is: a + be<sub>1</sub> + ce<sub>2</sub> + de<sub>13</sub> + fe<sub>123</sub> and our other multivector is: g + he<sub>2</sub> + je<sub>12</sub> + ke<sub>13</sub> + me<sub>123</sub>, then our inner product would be: ag + ch + dk + fm.
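A quick sketch of that coefficient-matching inner product in Python (the dict-of-blades representation and the names are mine, not from the posts):

```python
# Represent a Cl(3) multivector as a dict from blade label to coefficient.
# Blade labels: '' (scalar), 'e1', 'e2', 'e3', 'e12', 'e13', 'e23', 'e123'.
def mv_inner(u, v):
    # Sum the products of coefficients on matching blades;
    # blades absent from a multivector count as coefficient 0.
    return sum(coef * v.get(blade, 0.0) for blade, coef in u.items())

u = {'': 1.0, 'e1': 2.0, 'e13': 3.0, 'e123': 4.0}
v = {'': 5.0, 'e2': 1.0, 'e13': 2.0, 'e123': 0.5}
print(mv_inner(u, v))  # 1*5 + 3*2 + 4*0.5 = 13.0
```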

But, the question is, what good is this inner product?

For Euclidean vectors { u, v }, the dot product of two vectors has the properties that: u.u = | u |<sup>2</sup> (where |u| represents the Euclidean length of u) and u.v = | u | | v | cos θ where θ is the angle between the two vectors. So, in the case of Euclidean vectors, the dot-product (this simplest of inner products) has a very handy geometric meaning.

For our extension of the dot-product to multivectors, what good is it? Does it really mean anything to us geometrically to say that all pure-bivectors are perpendicular to all pure-trivectors? Does it get us anywhere? I'm not sure. We can potentially use it to tell if our multivectors are just scalar multiples of each other... if u = av, then: (u.u) (v.v) = ( u.v )<sup>2</sup>. But, that seems a long way to go....

Now, this dot-product isn't the only possible inner product. But, the restrictions on inner products (symmetric, bilinear, positive-definite, scalar-valued) make it tough to get anything fundamentally different from the above. We may, for example, be able to add another set of (positive) coefficients into the mix. We could, for example, set up a different coefficient for each term in our dot-product. Whereas before we were summing the products of the corresponding coefficients, now we can sum the products of the corresponding coefficients times a corresponding (positive) coefficient from somewhere else. Let me go back to R<sup>3</sup> instead of Cl<sub>3</sub> since it'll be notationally much simpler. We could define [ a, b, c ] . [ x, y, z ] to be: axi + byj + czk for some (real) positive constants i, j, and k. This would still be an inner product. But, it wouldn't be the dot product unless i = j = k = 1.

We might, for some reason, want our scalars to count twice as much as our vectors, which count twice as much as our bivectors, which count twice as much as our trivectors. Or, we may, for some reason, want the e<sub>12</sub> coordinate of our bivectors to count more than the e<sub>23</sub> coordinate. But, the question is still, is this useful?

So, if the question is "can you define an inner product on multivectors?", the answer is definitely yes. If the question is "can you define an inner product in a geometrically useful way?", then I'm not sure.

Even in the simplest case of a scalar plus a vector, you've got some vector component and some scalar component. With an inner product, you're going to squash all of this information down into a single scalar. How much have you lost in doing so? You've lost some distinction between scalars and vectors for sure. For example, with a simple dot-product as the inner product, then the following set of multivectors:
  • 1 + [ 0, 0, 0 ]<sub>1</sub> + [ 0, 0, 0 ]<sub>2</sub> + 0<sub>3</sub>
  • 0 + [ 1, 0, 0 ]<sub>1</sub> + [ 0, 0, 0 ]<sub>2</sub> + 0<sub>3</sub>
  • 0 + [ 0, 0, 0 ]<sub>1</sub> + [ √(1/3), √(1/3), √(1/3) ]<sub>2</sub> + 0<sub>3</sub>
  • 0 + [ 0, √(1/3), √(1/3) ]<sub>1</sub> + [ 0, 0, 0 ]<sub>2</sub> + √(1/3)<sub>3</sub>

have the property that y.y = 1 and y.z = 0 if y is not equal to z. So, what geometric insight does this inner product give us? I don't know.

With the Clifford Algebras, we can define an inner product of two multivectors u and v as: ( uv + vu ) / 2 where 'uv' means the Clifford Product of u and v. But, this is actually equivalent to our dot-product above. So, if the expression ( uv + vu ) / 2 means anything to you geometrically for arbitrary multivectors, then cool....

So.... there's a big, big rant about inner products.... I should get back to work now....
pat
Tetronian
 
Posts: 563
Joined: Tue Dec 02, 2003 5:30 pm
Location: Minneapolis, MN

Where Vectors Are...

Postby pat » Wed Nov 10, 2004 11:34 pm

You asked whether all vectors originate at the origin. The answer is that, technically, they don't really originate anywhere. Vectors are just a direction and a magnitude. It's often convenient to root them at a point. For example, while the momentum vector of an object hurtling toward the sun isn't technically rooted at the object, it is much more convenient to think of it that way. It gives a real feeling that that's where the momentum originates. But, technically, it's just a direction and a magnitude.

When you're considering the way vectors add, it is usually convenient to draw them end-to-end.... putting one rooted at the origin and the other rooted at the head of the first. This is convenient, but technically a little wonky. What it's really saying is the coordinates in Euclidean space combine linearly. But, if you're not working in nice space, this kind of thing can throw you off.

One example is the infinite-dimensional space of continuous functions from R to R. If you have functions { f, g } along with scalars { a, b }, and you define addition in the normal way (for functions) so that (f+g)(x) = f(x) + g(x) and (af)(x) = a(f(x)), then this satisfies all of the requirements for a vector space. And, you can define an inner product as ∫ f(x)g(x) dx where the integral goes from a to b (with a less than b). Now, with this vector space, it's hard to imagine things in terms of direction and magnitude and laying things end to end.
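A numeric illustration of that function-space inner product (a midpoint-rule approximation; my own sketch, not from the thread):

```python
import math

def fn_inner(f, g, a, b, n=10000):
    # Approximate the inner product  integral_a^b f(x) g(x) dx
    # with a midpoint-rule sum.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h)
                   for i in range(n))

# Under this inner product, sin and cos come out "perpendicular" on [0, pi],
# even though direction-and-magnitude intuition doesn't apply to functions.
print(abs(fn_inner(math.sin, math.cos, 0.0, math.pi)) < 1e-9)  # True
```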
pat

Parametric Equations....

Postby pat » Thu Nov 11, 2004 12:41 am

As Mark pointed out in your MathBB post, parametric equations are slightly different than your 2-D plane equation was.

Given three (non-collinear) vectors in n-dimensional space:
  • x = [ x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, ..., x<sub>n</sub> ]
  • y = [ y<sub>1</sub>, y<sub>2</sub>, y<sub>3</sub>, ..., y<sub>n</sub> ]
  • z = [ z<sub>1</sub>, z<sub>2</sub>, z<sub>3</sub>, ..., z<sub>n</sub> ]


The parametric equation: x + t( y - x ) as t goes from -infinity to +infinity defines a 1-dimensional line sitting in n-dimensional space. In this case, we are somewhat mixing where our vectors start. What we want this to be is to say that when we root the x vector at the origin and add to it multiples of ( y - x ), we get this line. It turns out, this is the same thing as saying that we root a multiple of ( y - x ) at the origin and add x. So, in one case, we're starting from a point and heading off in the direction from x to y (or the opposite direction) as if we've plopped our ruler onto both x and y. In the other case, we're heading off in the y direction and then offsetting by a fixed amount... as if we were drawing a line parallel to an edge by holding our pencil a fixed distance away from the edge as we traced the line. Anyhow, that's a big digression. I talk too much.

Anyhow, this is called a parametric equation because it takes advantage of a parameter -- 't'. We can get the same exact line with other parametric equations. For example: x + tan(t) ( y - x ) where t goes from -pi/2 to +pi/2. Or: x + ( t<sup>3</sup> - t )( y - x ). As t goes from -infinity to +infinity, we end up retracing the line in a couple of spots... jogging back and forth... (for example, we get the same point for t = -1, t = 0, and t = +1) but geometrically, it's the same line. Usually, one tries to formulate the parametric equations such that they're one-to-one. That is... if f(t<sub>1</sub>) = f(t<sub>2</sub>), then t<sub>1</sub> must equal t<sub>2</sub>. Otherwise, there's some redundancy and overlap that gets annoying when you're trying to understand what's going on.
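A tiny Python illustration of that reparametrization point (my sketch): with t ↦ t³ − t, the line is the same, but three different parameter values land on the same point, so the parametrization isn't one-to-one.

```python
def line_point(x, y, t):
    # Point on the parametric line x + t*(y - x).
    return [xi + t * (yi - xi) for xi, yi in zip(x, y)]

x, y = [1.0, 2.0], [4.0, 6.0]
print(line_point(x, y, 0.0))  # [1.0, 2.0] -- the point x
print(line_point(x, y, 1.0))  # [4.0, 6.0] -- the point y

# Reparametrize with t**3 - t: same line, but no longer one-to-one --
# t = -1, 0, and +1 all land back on the point x.
for t in (-1.0, 0.0, 1.0):
    assert line_point(x, y, t**3 - t) == x
```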

Additionally, we could get the same line using more than one parameter. For example, that same line is x + (s + t)y as both s and t range from -infinity to +infinity. We've successfully mapped the whole st-plane down to our line. But, we often use parametric equations to try to reduce something that's sitting in n-dimensional space into something that's sitting in fewer dimensions. For example, when we "coordinatize" the Earth with latitude and longitude, we're taking the surface of the Earth... that surface sits in 3-D space, but we've reduced the part we're interested in down to two parameters. It wouldn't really help us to throw in extra parameters. So, we try to keep the parametrization down to the minimum possible number of dimensions.

Now, on to the plane equation that you gave (forgive my change of variables): ( y - x ) . n = 0 where n = ( y - x ) cross ( z - x ). That's not a parametric equation because there are no free parameters. There are no variables we can plug in and be sure that we're in the proper plane. As it happens, the equation you gave is always true since n is always perpendicular to ( y - x ) because of the way the cross-product works. But, it also has a problem because the cross-product doesn't work unless the dimension of the space is 3 or 7. More on that in a minute....

A better equation for the plane containing the points x, y, and z would be that it's the set of all w which satisfy: ( w - x ) . n = 0 where n = ( y - x ) cross ( z - x ). This will work for three-dimensional space. But, the cross-product still breaks unless the space is three- or seven-dimensional. And, in seven-dimensional space, this doesn't define a 2-D plane; it defines a 6-D hyperplane.

A simple parametric equation for the plane containing x, y, and z is: x + s ( y - x ) + t ( z - x ). This is particularly convenient because it gives us linear coordinates within the plane... and the coordinates (s,t) = (0,0) correspond to the point x. Of course, it's a bit funky: we're used to thinking of all of the points (s,t) as a plane where the s and t axes are perpendicular, but they wouldn't be perpendicular here unless the angle yxz was a right angle.

As before, there are many, many possible ways we could parameterize this and still get the same plane. One example worth mentioning is: x + s cos(t) ( y - x ) + s sin(t) ( z - x ). This is a sort of polar-coordinates version of the one above.

The key point is that no matter how many dimensions these vectors are, the plane defined by these equations is still only a 2-d plane.
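A sketch of that linear parametrization in Python (the names are mine); note the same code runs unchanged in any number of dimensions, and still only produces a 2-D plane:

```python
def plane_point(x, y, z, s, t):
    # x + s*(y - x) + t*(z - x): the plane through x, y, z,
    # with (s, t) = (0, 0) landing on x.
    return [xi + s * (yi - xi) + t * (zi - xi)
            for xi, yi, zi in zip(x, y, z)]

# Three points in 4-D space -- the result is still only a 2-D plane.
x = [0.0, 0.0, 0.0, 0.0]
y = [1.0, 0.0, 0.0, 0.0]
z = [0.0, 1.0, 0.0, 0.0]
print(plane_point(x, y, z, 0.0, 0.0))  # [0.0, 0.0, 0.0, 0.0] -- x itself
print(plane_point(x, y, z, 2.0, 3.0))  # [2.0, 3.0, 0.0, 0.0]
```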

I have more to say on this, but I have to run right now... more later....
pat

Postby Paul » Thu Nov 11, 2004 1:34 am

Hello Pat,

Thanks for your responses.

Isn't it possible to view the inner product... at least one of the inner products strongly associated with the 'dot' product... as the projection of one n-vector onto another n-vector? This website explores a whole bunch of possible inner products, and a couple of outer products... http://www.iancgbell.clara.net/maths/geoalg1.htm#A24

For instance, on p. 35 of Lounesto's book Clifford Algebras and Spinors (I think you have this book too...), Lounesto speaks about addition of bivectors. Couldn't the inner product of two bivectors be defined as the scalar quantity representing the 2d area of the second bivector projected onto the first bivector? Perhaps the parallel portion of the first bivector?

I believe one should be able to extend this geometric concept to arbitrary multi-vectors...?

But, it also has a problem because the cross-product doesn't work unless the dimension of the space is 3 or 7. More on that in a minute....


You didn't say it, but the obvious choice is to investigate using the wedge product,...? or, the scalar representing the hypervolume denoted by the wedge product? Is this where you're going?
Paul

Postby pat » Thu Nov 11, 2004 4:16 am

Paul wrote:Isn't it possible to view the inner product... at least the one of the inner products strongly associated with the 'dot' product... as the projection of one n-vector onto another n-vector?


If you want to know the projection of the vector x on the vector y, it is: ( (x.y) / (y.y) ) y. This is just a special application of the identity x.y = |x||y|cosθ along with a little bit of trig, and then multiplying by y/|y| to get the resulting vector to actually point in the same direction as y.
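That projection formula is only a couple of lines of Python (my sketch, names mine):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project(x, y):
    # Projection of x onto y: ((x.y) / (y.y)) * y.
    c = dot(x, y) / dot(y, y)
    return [c * yi for yi in y]

print(project([3.0, 4.0], [1.0, 0.0]))  # [3.0, 0.0] -- the x-component
```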


For instance, on p. 35 of Lounesto's book Clifford Algebras and Spinors (I think you have this book too...), Lounesto speaks about addition of bivectors. Couldn't the inner product of two bivectors be defined as the scalar quantity representing the 2d area of the second bivector projected onto the first bivector? Perhaps the parallel portion of the first bivector?


In fact, that is just doing the dot-product: the sum of the products of the corresponding coefficients. So, I suppose it has more of a geometric interpretation than I thought. Still, the dot-product of two multivectors which are each the sum of a vector and a bivector is sorta a glommed-together measure of how parallel the vectors are and how much the bivectors' areas overlap.

I believe one should be able to extend this geometric concept to arbitrary multi-vectors...?


Yepper....

But, it also has a problem because the cross-product doesn't work unless the dimension of the space is 3 or 7. More on that in a minute....


You didn't say it, but the obvious choice is to investigate using the wedge product,...? or, the scalar representing the hypervolume denoted by the wedge product? Is this where you're going?


Yes, that's where I was going... and if you look at my posts with the code snippets, or better, a couple of posts before that, that's precisely what I was doing. I calculated the wedge product... then I flipped the sign of every other component (an artifact of the wedge product being antisymmetric). What I did there was use the Hodge dual. I took the n vertices on an (n-1)-hyperplane in dimension n and used the wedge product (n times) to find the (n-1)-vector that was the Hodge dual of the normal vector I was looking for. Then, I used the antisymmetry of the wedge product to find the hyperplane's normal vector.
pat

Postby Paul » Thu Nov 11, 2004 2:12 pm

Hello Pat,

There's one thing that concerns me here about the wedge product...

If I recall correctly, the wedge product of two multi-vectors, A<sub>r</sub> and B<sub>s</sub>, of dimension r and s respectively, produces a multi-vector C<sub>r+s</sub> of dimension r+s.

Also, if I recall correctly, the shape of a multi-vector is, with the exception of a 1-vector, not defined. Further, a multi-vector's orientation... that is, its orientation in the (n+p)-spaces that an nD-multivector is embedded in... is also not defined.

The 'orientation' referred to in regards to multi-vectors (I think!) is in reference to the 'centroid' of the multi-vector and describes some directiveness in a possible 'coating' of the nD-multivector with a (n-1)-D-multivector.

In any case, depending on the shape of the multi-vector, it seems that a normal to the multi-vector at one point may not always be a normal at another point. For instance, if the area enclosed by a bivector is seen to be a circular area, then a normal 1-vector to the bivector at one point on its surface isn't going to be a normal 1-vector to the bivector at another point on its surface... at least in reference to the (2+p)-spaces that the bivector is embedded in.

Of course, all of which I just said presumably only applies to a multi-vector which encloses a finite hypervolume. If the multi-vector completely spans its n-space, then I wouldn't think these concerns would apply.
Paul

Postby Paul » Thu Nov 11, 2004 6:31 pm

Hello again Pat,

I think I said some dumb things...

The shape of a nD-multivector in n-space is irrelevant since an nD-multivector has no extent in (n+1)-space... correct?

However, I'm still thinking that we need to know where the position of the nD-multivector's normal is in relation to (n+1)-space... in order to determine things like whether or not a particular vector points to a point within the nD-multivector's subspace... is this correct? Or, am I making another goof...?

If it's correct that the multivector's value only tells us the quantity of hypervolume enclosed by it, then I'd think we'd need a position vector in (n+1)-space to tell us where the nD-multivector is in (n+1)-space...?
Paul

Postby pat » Thu Nov 11, 2004 7:19 pm

Paul wrote:There's one thing that concerns me here about the wedge product...

If I recall correctly, the wedge product of two multi-vectors, A<sub>r</sub> and B<sub>s</sub>, of dimension r and s respectively, produces a multi-vector C<sub>r+s</sub> of dimension r+s.


Hmmm.... I think we've got a mismatch on what we mean by multivector and what we mean by dimension. From my understanding of the terminology, there are vectors (1-vectors), bivectors (2-vectors), trivectors (3-vectors), 4-vectors, 5-vectors, etc. up to d-vectors if the vectors are d-dimensional. So, to me, you start with d-dimensional vectors. From there, you take the wedge product of two 1-vectors and you get a 2-vector. You take the wedge product of a 2-vector and a 1-vector to get a 3-vector. So, if A<sub>r</sub> is an r-vector and B<sub>s</sub> is an s-vector, then the wedge product of the two is an (r+s)-vector (assuming that the vectors are of dimension at least r+s).

But, from my understanding, it's not possible to take the wedge product of an r-dimensional 1-vector and an s-dimensional 1-vector unless r = s. And, if you take the wedge product of two r-dimensional 1-vectors, then your result is a 2-vector. 2-vectors (in a space where the 1-vectors are r-dimensional) will be (r choose 2)-dimensional.... 3-vectors from that same space will be (r choose 3)-dimensional.... etc.

To me, a multivector is a sum of a scalar, a 1-vector, a 2-vector, a 3-vector, ..., and a d-vector (assuming the 1-vectors are d-dimensional).

Let's go with an example with 2-dimensional 1-vectors. So, our 1-vectors are of the form: [ x, y ]<sub>1</sub>. We can write this in terms of basis vectors like this: xe<sub>1</sub> + ye<sub>2</sub>.

Now, let's take the wedge product: [ x, y ] ∧ [ a, b ].
  • ( xe<sub>1</sub> + ye<sub>2</sub> ) ∧ ( ae<sub>1</sub> + be<sub>2</sub> )
  • xa( e<sub>1</sub> ∧ e<sub>1</sub> ) + xb( e<sub>1</sub> ∧ e<sub>2</sub> ) + ya( e<sub>2</sub> ∧ e<sub>1</sub> ) + yb( e<sub>2</sub> ∧ e<sub>2</sub> )
  • xa*0 + xb( e<sub>1</sub> ∧ e<sub>2</sub> ) + ya( e<sub>2</sub> ∧ e<sub>1</sub> ) + yb*0
  • xb( e<sub>1</sub> ∧ e<sub>2</sub> ) - ya( e<sub>1</sub> ∧ e<sub>2</sub> )
  • ( xb - ya )( e<sub>12</sub> )

You'll note that the coefficient of the result is exactly the length of the vector you'd get if you had taken the cross-product of these vectors. That's not really a coincidence... but it is somewhat an artifact of the fact that we're working in 2-d... (technically, if we're going to talk about the cross-product, then really we're working in 3-d... we just used vectors [ x, y, 0 ] and [ a, b, 0 ]).
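The expansion above boils down to a single signed term; here's a sketch of the 2-D wedge in Python (my own code, not pat's original snippets):

```python
def wedge2(u, v):
    # Wedge product of two 2-D 1-vectors: the coefficient of e12,
    # namely xb - ya, from expanding (x e1 + y e2) ^ (a e1 + b e2).
    x, y = u
    a, b = v
    return x * b - y * a

print(wedge2([1.0, 2.0], [3.0, 4.0]))   # 1*4 - 2*3 = -2.0
# Antisymmetry: swapping the arguments flips the sign.
print(wedge2([3.0, 4.0], [1.0, 2.0]))   # 2.0
```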

If we're going to be concerned with a hyperplane... that is to say... a (d-1)-dimensional, infinite, flat slice of d-dimensional space, we could do it in a couple of ways. First, we have to decide which hyperplane we're interested in. To do this, we can pick any d points in d-space. If we pick them arbitrarily enough (so that they're in general position... no point lies in the flat spanned by the others), then we will get one and only one (d-1)-plane running through them. Let our points be: x<sub>1</sub>, x<sub>2</sub>, ..., x<sub>d</sub>.

We can define our plane parametrically as:
x<sub>d</sub> + ( x<sub>1</sub> - x<sub>d</sub> ) * c<sub>1</sub> + ( x<sub>2</sub> - x<sub>d</sub> ) * c<sub>2</sub> + ... + ( x<sub>d-1</sub> - x<sub>d</sub> ) * c<sub>d-1</sub>. Then, as the c<sub>j</sub> range over all values from -infinity to +infinity, we get the plane. This sort of thing is usable if we're interested in questions like... at what point does a particular line intersect this hyperplane? Then, we can set up a system of equations in the unknown c<sub>j</sub>'s and solve for them. Then, we can plug those back into our plane equation to find out what point that corresponds to. But, this is a bit unwieldy since there are so many unknowns.

Another way that we can define our plane is by finding a vector that is normal to all of the vectors in our plane. That is, finding an 'n' such that ( x<sub>j</sub> - x<sub>k</sub> ) . n = 0 for all j,k. If we find such an n, then our plane is all points z such that ( z - x<sub>d</sub> ) . n = 0. This is convenient because we can quickly and easily subtract two vectors and compute the dot-product. And, actually, we don't even have to subtract the two vectors since, as I mentioned above in the inner-product ramble, ( z - x<sub>d</sub> ) . n = ( z.n ) - ( x<sub>d</sub>.n )... and we can precompute ( x<sub>d</sub>.n ). Let's call that value 'c'. Now, we can write our plane as all points z such that: z.n = c.

This is very convenient. We can easily check to see if two points z<sub>1</sub> and z<sub>2</sub> are on the same side of the plane by comparing the signs of ( z<sub>1</sub>.n - c ) and ( z<sub>2</sub>.n - c ). It's also very convenient if we parametrically define a line like this: y<sub>0</sub> + ( y<sub>1</sub> - y<sub>0</sub> ) t. Now, we can set up an equation to see where that line intersects the plane. But, this time, there's only one parameter (instead of the d-1 parameters there were before).
  • [ y<sub>0</sub> + ( y<sub>1</sub> - y<sub>0</sub> ) t ] .n = c
  • ( y<sub>0</sub>.n ) + [ ( y<sub>1</sub> - y<sub>0</sub> ).n ] t = c
  • t = <sup>[ c - ( y<sub>0</sub>.n ) ]</sup>/<sub>[ ( y<sub>1</sub> - y<sub>0</sub> ).n ]</sub>
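The three steps above can be sketched in Python (names are mine; it assumes the line isn't parallel to the plane, i.e. the denominator is nonzero):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def intersect_t(y0, y1, n, c):
    # Solve [y0 + (y1 - y0) t] . n = c for t:
    #   t = (c - y0.n) / ((y1 - y0).n)
    direction = [b - a for a, b in zip(y0, y1)]
    return (c - dot(y0, n)) / dot(direction, n)

# The plane z.n = 1 with n = [0, 0, 1] is "third coordinate equals 1".
t = intersect_t([0.0, 0.0, 0.0], [1.0, 1.0, 2.0], [0.0, 0.0, 1.0], 1.0)
print(t)  # 0.5 -- halfway along the segment, where the third coordinate hits 1
```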


Now, the only tricky part is trying to find this 'n'. It turns out that it's easy to do using the Hodge dual... as I described in the other thread that I linked to in an earlier post on this thread.
pat

Postby pat » Thu Nov 11, 2004 7:30 pm

Paul wrote:The shape of a nD-multivector in n-space is irrelevant since an nD-multivector has no extent in (n+1)-space... correct?


I know at the time you wrote this, you hadn't seen my latest reply in this thread. But, I'm going to try to stay succinct and on-track in this reply. :)

I don't know what you mean by an nD-multivector. I think that you mean a k-vector in a space where the 1-vectors are n-dimensional. I think it'd be clearer to me if you said k-vector instead of multivector. Because, technically, this is a multivector (from a space where the 1-vectors are 2-dimensional): 1 + 2e<sub>1</sub> + 3e<sub>2</sub> + 4e<sub>12</sub>. But, that doesn't represent the area of anything, really. The "4e<sub>12</sub>" part represents the area swept out by two 1-vectors (but not the two that also appear in that multivector).

So, a k-vector represents the k-dimensional volume swept out by k 1-vectors in d-dimensional space (where d is greater or equal to k). But, a multivector in that same space probably encodes more than that... it's got 1-vectors and 2-vectors and 3-vectors and k-vectors and scalars all together.
pat

Postby pat » Thu Nov 11, 2004 7:54 pm

Paul wrote:However, I'm still thinking that we need to know where the position of the nD-multivector's normal is in relation to (n+1)-space... in order to determine things like whether or not a particular vector points to a point within the nD-multivector's subspace... is this correct? Or, am I making another goof...?


Changing terminology to what I think you mean... if we have a (d-1)-vector in a space where the 1-vectors are d-dimensional, it defines an (d-1)-dimensional volume swept out by d-1 vectors. Suppose we had those vectors: v<sub>1</sub>, v<sub>2</sub>, v<sub>3</sub>, ..., v<sub>d-1</sub>. Can we find a vector n that is normal to all of them? Can we find a vector n such that v<sub>j</sub>.n = 0 for j on the range 1 through d-1?

Here's where the wedge product comes in. Take the wedge product of all of those vectors: v<sub>1</sub> ∧ v<sub>2</sub> ∧ v<sub>3</sub> ∧ ... ∧ v<sub>d-1</sub>. Let's call this n<sup>*</sup>. Now, because of the way the wedge product works, any time we take one of our v<sub>j</sub> and wedge it with n<sup>*</sup>, we get zero. And, if we have some linear combination of our v<sub>j</sub>: V = ∑ a<sub>j</sub>v<sub>j</sub> (this is just like our parametric equation for a plane above where v<sub>j</sub> = ( x<sub>j</sub> - x<sub>d</sub> ) ), and we want to wedge that with n<sup>*</sup>, the wedge distributes so: V ∧ n<sup>*</sup> = ∑ a<sub>j</sub> ( v<sub>j</sub> ∧ n<sup>*</sup> ) = 0.

This n<sup>*</sup> is the Hodge dual of the 1-vector n that we're looking for. v.n is precisely the coefficient of the d-vector result of: v ∧ n<sup>*</sup>.

Now, if we know n<sup>*</sup> and we want to find n, we can just go through each unit vector e<sub>j</sub> and calculate: e<sub>j</sub> ∧ n<sup>*</sup>. This will tell us the coefficient of e<sub>j</sub> in the vector n (except we might have the wrong sign). How might we have the wrong sign? Well, take for example, e<sub>2</sub> ∧ e<sub>13</sub>. Since the wedge product is giving us the coefficient of e<sub>123</sub> then we get a sign change because: e<sub>2</sub> ∧ e<sub>13</sub> = e<sub>213</sub> = - e<sub>123</sub>. Fortunately, this is an even-odd thing. The coefficient for e<sub>j</sub> in n is then: (-1)<sup>j+1</sup>( e<sub>j</sub> ∧ n<sup>*</sup> ).
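For concreteness, here's a sketch (my own code, not from pat's earlier thread) of computing n from d-1 vectors. Each coordinate of n is a signed (d-1)×(d-1) determinant: that minor is exactly the e<sub>j</sub> ∧ n<sup>*</sup> coefficient with the even-odd sign folded in (j is 0-based below, so the sign is (-1)<sup>j</sup> rather than (-1)<sup>j+1</sup>):

```python
def det(m):
    # Determinant by cofactor expansion along the first row
    # (fine for the small matrices used here).
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def normal(vectors):
    # Normal to d-1 vectors in d-space: coordinate j is the signed
    # minor obtained by deleting column j from the stacked vectors
    # (the Hodge-dual recipe described above).
    d = len(vectors[0])
    assert len(vectors) == d - 1
    return [(-1) ** j * det([row[:j] + row[j + 1:] for row in vectors])
            for j in range(d)]

# In 3-D this reproduces the familiar cross product:
print(normal([[1, 2, 3], [4, 5, 6]]))  # [-3, 6, -3]
```

As a sanity check, the result really is perpendicular to both inputs: [-3, 6, -3] . [1, 2, 3] = 0 and [-3, 6, -3] . [4, 5, 6] = 0.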
pat
Tetronian
 
Posts: 563
Joined: Tue Dec 02, 2003 5:30 pm
Location: Minneapolis, MN

Postby Paul » Thu Nov 11, 2004 9:21 pm

Hello Pat,

Much of what I say probably still doesn't make much sense, but I believe I was thinking of what the website linked below calls a 'blade',... which is not, if I understand correctly, a general multivector.

http://www.iancgbell.clara.net/maths/geoalg1.htm#A4

It also sounds like I might have been thinking of what this website calls a 'pure multivector' in regard to this:

If I recall correctly, the wedge product of two multi-vectors, A<sub>r</sub> and B<sub>s</sub>, of dimension r and s respectively, produces a multi-vector C<sub>r+s</sub> of dimension r+s.


Sorry to be so confused... :oops:

Anyway... in order to compute the wedge product, don't we break each blade (or multivector, maybe? ... that'll probably be more confusing?) into its component 1-vectors... for that n-space? (Yes... I guess I was also confused about particular k-vectors belonging to n-space, k <= n)

Then... do a bunch of stuff that seems pretty confusing... :?:
Paul
Trionian
 
Posts: 74
Joined: Sat Sep 04, 2004 10:56 pm

Postby pat » Fri Nov 12, 2004 1:18 am

Paul wrote:Much of what I say probably still doesn't make much sense, but I believe I was thinking of what the website linked below calls a 'blade',... which is not, if I understand correctly a general multivector.


Okay.... I see now... but, their multivectors and my multivectors are the same. And, when I wrote things like [ a, b, c ]<sub>2</sub>, I meant that a, b, and c were the coefficients of the blades: e<sub>12</sub>, e<sub>13</sub> and e<sub>23</sub> in my multivector.

Further, a k-vector is just a multivector where the coefficient of a blade has to be zero unless there were exactly k basis vectors going into that blade.

And, yes, when I say "k-vector" I'm referring to what they call a "pure multivector" or a "k-pure multivector". I don't like the "impure k-vector" term. I'm not sure when one would ever want to use that. Hmmm....

And, yes, to do wedge products of blades, we do effectively break down the blades into their component vectors. But, it's not actually all that confusing. We can do most of it with our notation. For example:

  • e<sub>2</sub> ∧ e<sub>13</sub>
  • = e<sub>2</sub> ∧ ( e<sub>1</sub> ∧ e<sub>3</sub>)
  • = ( e<sub>2</sub> ∧ e<sub>1</sub> ) ∧ e<sub>3</sub>
  • = e<sub>2</sub> ∧ e<sub>1</sub> ∧ e<sub>3</sub>
  • = e<sub>213</sub>


And, if we used the identity e<sub>2</sub>∧e<sub>1</sub> = - e<sub>1</sub>∧e<sub>2</sub> in the middle step, then when we got down to the end, we'd have - e<sub>123</sub>. Pretty much, to do the wedge product of two blades, we can concatenate the subscripts. Then, we can rearrange them to get them sorted. But, the caveat is that we can only swap adjacent subscripts at each step. And, each swap makes for a change in sign. And, furthermore, if you end up with identical subscripts, the answer is zero.

So... with the subscript manipulation, we can see that (and I'm in a hurry, so I'm just going to write the subscripts without all of the e's):

135 ∧ 24678 = 13524678 = -13254678 = 12354678 = -12345678

So, we can easily wedge together a 3-blade and a 5-blade to get an 8-blade.
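The subscript-shuffling rule is mechanical enough to write down as code. Here's a little Python sketch (the function name <code>wedge_blades</code> is mine): concatenate the two subscript lists, bubble-sort them, flip the sign once per adjacent swap, and return zero if any subscript repeats.

```python
# Sketch of the subscript rule above: wedge two basis blades by concatenating
# their index tuples, sorting with a sign flip per adjacent swap, and
# returning 0 if any subscript repeats.

def wedge_blades(a, b):
    """a, b: tuples of basis-vector subscripts, e.g. (1, 3, 5) and (2, 4, 6, 7, 8).
    Returns (sign, sorted subscript tuple), or (0, ()) if the product vanishes."""
    idx = list(a) + list(b)
    if len(set(idx)) != len(idx):
        return 0, ()  # repeated subscript => zero
    sign = 1
    # bubble sort; each adjacent swap changes the sign
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

print(wedge_blades((1, 3, 5), (2, 4, 6, 7, 8)))  # (-1, (1, 2, 3, 4, 5, 6, 7, 8))
print(wedge_blades((2,), (1, 3)))                # (-1, (1, 2, 3)) i.e. e2 ^ e13 = -e123
print(wedge_blades((1, 2), (2, 3)))              # (0, ())
```

The first call reproduces the 135 ∧ 24678 = -12345678 computation; the second is the e<sub>2</sub> ∧ e<sub>13</sub> = -e<sub>123</sub> example from earlier.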

Now, the only other things are to know that the wedge distributes across addition. So: u ∧ ( v + w ) = ( u ∧ v ) + ( u ∧ w ) and if a and b are constants and u and v are blades, then ( au ) ∧ ( bv ) = (ab)( u ∧ v ).
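Putting the blade rule and distributivity together gives the wedge of two completely arbitrary multivectors. Here's a self-contained Python sketch (the dict representation and names are my own choices, and it assumes an orthonormal basis as above): a multivector is a dict mapping sorted subscript tuples to coefficients, and the product just distributes the blade rule across every pair of terms.

```python
# Sketch: wedge of arbitrary multivectors, represented as dicts mapping
# sorted subscript tuples to coefficients, e.g. {(1, 2): 3.0} for 3*e12.
# Assumes an orthonormal basis, so blades multiply by subscript shuffling.

def blade_sign(idx):
    # Parity of the permutation that sorts idx; 0 if any subscript repeats.
    if len(set(idx)) != len(idx):
        return 0
    inversions = sum(1 for i in range(len(idx))
                       for j in range(i + 1, len(idx)) if idx[i] > idx[j])
    return -1 if inversions % 2 else 1

def wedge(A, B):
    """Distribute the wedge across all pairs of terms of A and B."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            s = blade_sign(a + b)
            if s:
                key = tuple(sorted(a + b))
                out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

# (2 e1 + 3 e2) ^ e2 = 2 e12  -- the e2 ^ e2 term vanishes
print(wedge({(1,): 2, (2,): 3}, {(2,): 1}))  # {(1, 2): 2}
```

Every term-by-term product uses the (ab)( u ∧ v ) scalar rule, and the dict accumulation is exactly the distribution across addition.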

But, actually, I suppose I'm taking a big shortcut that the page you referenced doesn't. I'm assuming that e<sub>j</sub> ∧ e<sub>k</sub> = 1 * e<sub>jk</sub> for j not equal to k. If you're not using an orthonormal basis, then this could get funky... and you won't be able to just manipulate the subscripts.

