It's occasionally stated on this site that a system of quadratic equations is as difficult to solve as a high-degree polynomial equation (or system of equations). Here's another aspect of that.
Given a linear transformation L on an n-dimensional space, we define eigenvectors and eigenvalues by the equation
L(x) = λx,
where x≠0 is a vector and λ is a scalar. As a vector equation, it has n components (i.e. it can be written as a system of n scalar equations). It is equivalent to
(L - λI)x = 0,
where I is the identity transformation. Then we usually eliminate x by saying that the transformation L - λI is not invertible:
det(L - λI) = 0.
This is a scalar equation: an nth-degree polynomial equation in λ. After solving it for an eigenvalue λ, we can easily solve the linear system of equations (L - λI)x = 0 to get a corresponding eigenvector x.
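To make the two routes easy to compare, here's a minimal sketch of this classical route in SymPy; the 2×2 matrix is a made-up example, nothing special about it:

```python
import sympy as sp

# A made-up 2x2 example matrix for L
L = sp.Matrix([[2, 1],
               [1, 2]])
lam = sp.symbols('lam')

# Characteristic equation det(L - lam*I) = 0: a degree-2 polynomial in lam
char_poly = (L - lam * sp.eye(2)).det()
eigenvalues = sp.solve(sp.Eq(char_poly, 0), lam)   # [1, 3]

# For each eigenvalue, eigenvectors come from the linear system (L - lam*I)x = 0
for ev in eigenvalues:
    for vec in (L - ev * sp.eye(2)).nullspace():
        print(ev, vec.T)
```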
But it's possible to eliminate λ first instead of x.
Two vectors x and y can be multiplied with the wedge product. The result x∧y is a bivector, representing the plane spanned by the two vectors, unless it is 0, in which case the two vectors are linearly dependent (they lie on one line). Given that x ≠ 0 (so it spans a unique line), the equation x∧y = 0 says that y lies on the same line as x. But the equation y = λx (for some scalar λ) says exactly the same thing: y lies on the same line as x. (Note that in both cases y = 0 is allowed.)
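In coordinates, x∧y has the components x_i y_j - x_j y_i for i < j, so x∧y = 0 is shorthand for the vanishing of all n(n-1)/2 of these 2×2 minors. A minimal sketch in plain Python, assuming nothing beyond that formula:

```python
from itertools import combinations

def wedge(x, y):
    """The n(n-1)/2 components x_i*y_j - x_j*y_i (i < j) of the bivector x wedge y."""
    return [x[i] * y[j] - x[j] * y[i]
            for i, j in combinations(range(len(x)), 2)]

x = [1, 2, 5]
print(wedge(x, [3, 6, 15]))  # [0, 0, 0]: y = 3x lies on the same line as x
print(wedge(x, [3, 6, 14]))  # [0, -1, -2]: not collinear with x
```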
So the eigenvector equation L(x) = λx is equivalent to
x∧L(x) = 0.
As a bivector equation, it has n(n-1)/2 components (i.e. it can be written as a system of n(n-1)/2 scalar equations), each of which is a homogeneous quadratic in the components of x. After solving for an eigenvector x, we can easily get the eigenvalue λ by comparing L(x) to x: take a non-zero component of x and divide the corresponding component of L(x) by it.
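Here's the same made-up 2×2 example solved by this second route, again in SymPy. Fixing x1 = 1 is just a convenient normalization to pick one point on each eigenline (it would miss eigenvectors with x1 = 0, which a full treatment would check separately):

```python
import sympy as sp

L = sp.Matrix([[2, 1],
               [1, 2]])          # same made-up example as above
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
Lx = L * x

# In 2D, x wedge L(x) has a single component: a homogeneous quadratic in x1, x2
wedge_eq = sp.expand(x1 * Lx[1] - x2 * Lx[0])   # x1**2 - x2**2

# Normalize x1 = 1 and solve the quadratic for x2
for s in sp.solve(wedge_eq.subs(x1, 1), x2):    # [-1, 1]
    vec = sp.Matrix([1, s])
    lam = (L * vec)[0] / vec[0]  # recover lambda: divide corresponding components
    print(vec.T, lam)
```

The eigenlines and eigenvalues agree with the characteristic-polynomial computation above, but here the polynomial root-finding happens inside the quadratic system for x rather than in a separate scalar equation for λ.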