quickfur wrote:I still don't understand your example. It's like saying that since 3x > 2x, therefore 3x > 2x + 2x. I don't understand what you were trying to do.
What's not understandable about
x<sup>r</sup> > k x<sup>s</sup>, or
E<sup>r</sup> > k E<sup>s</sup> for a growing expression E,
for reals r > s and arbitrary real k?
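As a quick numeric sanity check (a sketch, not part of the argument above): for r > s, x<sup>r</sup> eventually exceeds k·x<sup>s</sup> no matter how large k is, since x<sup>r</sup>/x<sup>s</sup> = x<sup>r-s</sup> grows without bound. The hypothetical helper below just finds the crossover point:

```python
# Eventual domination: for r > s and k > 0, x**r > k * x**s once x is
# large enough, because x**r / x**s = x**(r - s) grows without bound.

def crossover(r, s, k):
    """Smallest integer x >= 1 with x**r > k * x**s (assumes r > s, k > 0)."""
    x = 1
    while x**r <= k * x**s:
        x += 1
    return x

# With r = 3, s = 2 the inequality x**3 > k * x**2 holds exactly when x > k.
print(crossover(3, 2, 1000))   # → 1001
```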
If you, for example, regard the functions built recursively by starting with id and then, for each f, g already obtained, taking also f<sup>g</sup>, then it is indeed valid that
id<sup>f<sub>1</sub>...f<sub>m</sub></sup> > id<sup>g<sub>1</sub>...g<sub>n</sub></sup>
if the maximum of the f<sub>i</sub> is greater than the maximum of the g<sub>j</sub> (in Hardy order); in particular, if f > g, then
id<sup>f</sup> > id<sup>g<sup>n</sup></sup> for each n.
If, however, we also allow inversion, i.e. build recursively the set of functions starting with id and, for each f, g obtained, taking also f<sup>g</sup>, f∘g and f<sup>-1</sup>, then this rule is no longer valid.
But this is what I understand by lexicographic order:
If we have a sum (or product or set or multiset)
f<sub>1</sub>+...+f<sub>m</sub> and g<sub>1</sub>+...+g<sub>n</sub>
Then we start by comparing the highest elements of both sides; if they are equal, we discard them and continue, until the highest element of one side is less/greater than that of the other, or until one side has no elements left (if both sides run out at the same time, then the two sides were equal to begin with).
This is the standard lexicographic order on sets or multisets. And this order holds for polynomials (and is effective with natural-number coefficients); for example x<sup>2</sup> > x + x + x.
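The comparison rule described above (strip equal maxima and recurse) can be sketched as code. Here a polynomial is represented, purely for illustration, as a multiset of exponents, so x<sup>2</sup> is [2] and x + x + x is [1, 1, 1]:

```python
def lex_compare(a, b):
    """Lexicographic comparison of two multisets of numbers:
    repeatedly compare (and discard) the largest elements of each side.
    Returns 1 if a > b, -1 if a < b, 0 if equal."""
    a, b = sorted(a, reverse=True), sorted(b, reverse=True)
    for x, y in zip(a, b):
        if x != y:
            return 1 if x > y else -1
    # All compared elements were equal: the side with elements left wins.
    return (len(a) > len(b)) - (len(a) < len(b))

# x**2 versus x + x + x, encoded as multisets of exponents:
print(lex_compare([2], [1, 1, 1]))   # → 1, i.e. x**2 > x + x + x
```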
Hmm. But even if you introduce a complementation to magnitude numbers such that mag(-f) = complement(mag(f)), you still have a problem: mag(f + (-f)) -> max(mag(f), mag(-f)) = mag(f) which is false because f + (-f) = 0.
Also, I think no matter what you do, using O-comparison will always give you this trouble: mag(x<sup>2</sup> + 2x) = 2 = mag(-x<sup>2</sup>), but mag((x<sup>2</sup>+2x) + (-x<sup>2</sup>)) = 1, whereas mag(x<sup>2</sup> + (-x<sup>2</sup>)) = 0. So for two functions f and g, f+g can have any magnitude less than or equal to max(mag(f), mag(g)), because the + hides those extra terms behind the O-equivalence.
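The cancellation problem can be made concrete by taking mag to be the polynomial degree (an assumption for illustration; polynomials here are dicts mapping exponent to coefficient):

```python
def add(p, q):
    """Add two polynomials given as {exponent: coefficient} dicts,
    dropping exponents whose coefficients cancel to zero."""
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c != 0}

def mag(p):
    """Degree of a polynomial; None for the zero polynomial."""
    return max(p) if p else None

p = {2: 1, 1: 2}    # x**2 + 2x
q = {2: -1}         # -x**2
print(mag(p), mag(q))        # both 2
print(mag(add(p, q)))        # 1: the leading terms cancel
print(mag(add({2: 1}, q)))   # None: x**2 + (-x**2) = 0
```

So the magnitude of a sum really can drop anywhere below max(mag(f), mag(g)), depending on which terms cancel.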
I thought that, instead of the faulty max rule, we agreed on representing function addition by ln(e<sup>f</sup> e<sup>g</sup>)?
It seems that it depends on the "breadth" ....
"Breadth"?
Yes, if p has more monomials than q, then it seems that q*p > p*q. Any counterexamples?
No, I was just wondering what you meant by "breadth".
Yes, and I answered that breadth could mean the number of monomials in a polynomial, as opposed to the degree, which I would call depth.
In general, M*N > N*M if M is super-polynomial, and N*M > M*N if M is logarithmic. I'm not sure what the behaviour is when M is polynomial.
But for what N? I guess N polynomial.
It seems to also work for non-polynomial N, except that in some cases you get equality instead of strict inequality. But I may have missed some cases.
Yes, but surely not N > M?
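The claimed composition asymmetry can be spot-checked numerically (a sketch, reading * as composition, with sample functions chosen here for illustration). With M = e<sup>x</sup> super-polynomial and N = x<sup>2</sup>, M∘N = e<sup>x²</sup> while N∘M = e<sup>2x</sup>; with M = ln x logarithmic, N∘M = (ln x)² eventually beats M∘N = 2 ln x:

```python
import math

# Compare compositions at a large argument; where exponentials would
# overflow, compare log-values instead.
x = 50.0

# M super-polynomial: M = exp, N = square.
# log(M(N(x))) = x**2, log(N(M(x))) = 2*x, so M∘N > N∘M.
print(x**2 > 2 * x)                      # True

# M logarithmic: M = log, N = square.
# M(N(x)) = 2*log(x), N(M(x)) = log(x)**2, so N∘M > M∘N.
print(math.log(x)**2 > 2 * math.log(x))  # True
```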
e<sup>x</sup> seems to be the "balance" between faster and slower derivatives. Anything growing slower than e<sup>x</sup> has a derivative that grows slower than itself; anything growing faster than e<sup>x</sup> has a derivative that grows faster than itself.
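This balance point can be checked numerically: f′(x)/f(x) is the derivative of ln f, so a central difference on log-values (used here to avoid overflow; the sample functions are my own choices) shows the rate sitting below, at, or above 1:

```python
import math

def log_growth_rate(logf, x, h=1e-6):
    """Approximate f'(x)/f(x) as the central difference of ln f at x."""
    return (logf(x + h) - logf(x - h)) / (2 * h)

x = 100.0
print(log_growth_rate(lambda t: 3 * math.log(t), x))  # x**3: rate 3/x = 0.03 < 1
print(log_growth_rate(lambda t: t, x))                # e**x: rate exactly 1
print(log_growth_rate(lambda t: t * t, x))            # e**(x**2): rate 2x = 200 > 1
```

A rate below 1 means the derivative is (eventually) smaller than the function itself, and a rate above 1 means it is larger, matching the claim that e<sup>x</sup> is the balance.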
Hm.