tetration and higher operations


Re: tetration and higher operations

Postby bo198214 » Tue Nov 28, 2006 7:50 pm

quickfur wrote:I still don't understand your example. It's like saying that since 3x > 2x, therefore 3x > 2x + 2x. I don't understand what you were trying to do.

What's not understandable about
x<sup>r</sup> > k x<sup>s</sup>, or
E*r > k*E*s,
for reals r > s and an arbitrary real k?

If, for example, you regard the functions built recursively by starting with id and then, for each f, g already obtained, also taking f<sup>g</sup>, then it is indeed valid that
id<sup>f<sub>1</sub>...f<sub>m</sub></sup> > id<sup>g<sub>1</sub>...g<sub>n</sub></sup>
if the maximum of the f<sub>i</sub> is greater than the maximum of the g<sub>i</sub> (in the Hardy order); in particular, if f > g, then
id<sup>f</sup> > id<sup>g<sup>n</sup></sup> for each n.

If, however, we also allow inverse functions, i.e. recursively build the set of functions starting with id and, for each f, g obtained, also take f<sup>g</sup>, f∘g, and f<sup>-1</sup>, then this rule is no longer valid.

But this is what I understand by lexicographic order:
If we have two sums (or products, sets, or multisets)
f<sub>1</sub>+...+f<sub>m</sub> and g<sub>1</sub>+...+g<sub>n</sub>,
then we start by comparing the highest elements of both sides; if they are equal, we discard them and continue, until the highest element of one side is less/greater than that of the other, or until one side has no more elements (if both sides run out together, then the two sides were equal to begin with).

This is the standard lexicographic order on sets or multisets. And this order holds for polynomials (and is effective with natural-number coefficients), for example x<sup>2</sup> > x + x + x.
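
For concreteness, here is a minimal Python sketch of this comparison procedure, with each monomial represented only by its magnitude (here, its exponent); the helper name and encoding are mine, not part of the formalism above:

[code]
# Lexicographic comparison of two multisets of magnitudes: repeatedly
# discard the common maximum, then compare whatever remains.
from collections import Counter

def lex_compare(fs, gs):
    fs, gs = Counter(fs), Counter(gs)
    while fs and gs:
        mf, mg = max(fs), max(gs)
        if mf != mg:
            return -1 if mf < mg else 1
        # equal maxima: throw one copy away on each side and continue
        fs -= Counter([mf])
        gs -= Counter([mg])
    if not fs and not gs:
        return 0                    # both sides exhausted: equal
    return 1 if fs else -1          # the side with elements left is larger

# x^2 > x + x + x: exponent multisets {2} versus {1, 1, 1}
print(lex_compare([2], [1, 1, 1]))  # 1
[/code]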

Hmm. But even if you introduce a complementation on magnitude numbers such that mag(-f) = complement(mag(f)), you still have a problem: mag(f + (-f)) -> max(mag(f), mag(-f)) = mag(f), which is false because f + (-f) = 0.

Also, I think no matter what you do, using O-comparison will always give you this trouble: mag(x<sup>2</sup> + 2x) = 2 = mag(-x<sup>2</sup>), but mag((x<sup>2</sup>+2x) + (-x<sup>2</sup>)) = 1, whereas mag(x<sup>2</sup> + (-x<sup>2</sup>)) = 0. So for two functions f and g, (f+g) can have any magnitude less than or equal to max(mag(f), mag(g)), because the + hides those extra terms behind the O-equivalence.

I thought that, instead of the faulty max rule, we had agreed to represent function addition by ln(e<sup>f</sup> e<sup>g</sup>)?
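
To make the quoted failure of the max rule concrete, and to see why the ln(e<sup>f</sup> e<sup>g</sup>) representation avoids it, here is a small sympy check, treating mag as polynomial degree (an illustration only, not the full magnitude formalism):

[code]
from sympy import symbols, degree, exp, log, powsimp

x = symbols('x', positive=True)
f, g = x**2 + 2*x, -x**2

print(degree(f, x), degree(g, x))  # 2 2  -> the max rule predicts mag 2
print(degree(f + g, x))            # 1    -> the actual magnitude is lower
print(degree(x**2 - x**2, x))      # -oo  -> sympy's degree of the zero polynomial

# ln(e^f * e^g) recovers f + g exactly, hiding nothing behind O-equivalence:
print(log(powsimp(exp(f) * exp(g))))  # 2*x
[/code]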

It seems that it depends on the "breadth" ....

"Breadth"?

Yes, if p has more monomials than q, it seems that q*p > p*q. Any counterexamples?

No, I was just wondering what you meant by "breadth".

Yes, and I answered that breadth could mean the number of monomials in a polynomial, as opposed to the degree, which I would call depth.

In general, M*N > N*M if M is super-polynomial, and N*M > M*N if M is logarithmic. I'm not sure what the behaviour is when M is polynomial.

But for what N? I guess N polynomial.

It seems to also work for non-polynomial N, except that in some cases you get equality instead of strict inequality. But I may have missed some cases.

Yes, but surely not for N > M?

e<sup>x</sup> seems to be the "balance" between faster and slower derivatives. Anything growing slower than e<sup>x</sup> has a derivative that grows slower than itself; anything growing faster than e<sup>x</sup> has a derivative that grows faster than itself.


Hm.
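
A quick sympy check of the quoted "balance" claim, comparing the growth of f' against f through the limit of f'(x)/f(x) as x -> infinity (the sample functions are my own picks):

[code]
# limit of f'(x)/f(x) as x -> oo: 0 means the derivative grows slower
# than f, 1 is the e^x balance point, oo means it grows faster.
from sympy import symbols, diff, limit, exp, oo

x = symbols('x', positive=True)
for f in (x**2, exp(x), exp(x**2)):
    print(f, limit(diff(f, x) / f, x, oo))
# x**2      -> 0   (derivative grows slower than the function)
# exp(x)    -> 1   (derivative equals the function)
# exp(x**2) -> oo  (derivative grows faster than the function)
[/code]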

Re: tetration and higher operations

Postby quickfur » Wed Nov 29, 2006 2:53 am

bo198214 wrote:
quickfur wrote:I still don't understand your example. It's like saying that since 3x > 2x, therefore 3x > 2x + 2x. I don't understand what you were trying to do.

What's not understandable about
x<sup>r</sup> > k x<sup>s</sup>, or
E*r > k*E*s,
for reals r > s and an arbitrary real k?

That's what I got so far.

If, for example, you regard the functions built recursively by starting with id and then, for each f, g already obtained, also taking f<sup>g</sup>, then it is indeed valid that
id<sup>f<sub>1</sub>...f<sub>m</sub></sup> > id<sup>g<sub>1</sub>...g<sub>n</sub></sup>
if the maximum of the f<sub>i</sub> is greater than the maximum of the g<sub>i</sub> (in the Hardy order); in particular, if f > g, then
id<sup>f</sup> > id<sup>g<sup>n</sup></sup> for each n.

If, however, we also allow inverse functions, i.e. recursively build the set of functions starting with id and, for each f, g obtained, also take f<sup>g</sup>, f∘g, and f<sup>-1</sup>, then this rule is no longer valid.

Oh really?

But this is what I understand by lexicographic order:
If we have two sums (or products, sets, or multisets)
f<sub>1</sub>+...+f<sub>m</sub> and g<sub>1</sub>+...+g<sub>n</sub>,
then we start by comparing the highest elements of both sides; if they are equal, we discard them and continue, until the highest element of one side is less/greater than that of the other, or until one side has no more elements (if both sides run out together, then the two sides were equal to begin with).

Yes, but then I don't understand the part in your original example where you went from f > E*L*1 to f > E*L*1 + E*L*1. Suppose, say, f ~ 1.5*E*L*1; then the derivation "f > E*L*1 implies f > E*L*1 + E*L*1" is invalid.

Or did I misread something?

[...]
Hmm. But even if you introduce a complementation on magnitude numbers such that mag(-f) = complement(mag(f)), you still have a problem: mag(f + (-f)) -> max(mag(f), mag(-f)) = mag(f), which is false because f + (-f) = 0.

Also, I think no matter what you do, using O-comparison will always give you this trouble: mag(x<sup>2</sup> + 2x) = 2 = mag(-x<sup>2</sup>), but mag((x<sup>2</sup>+2x) + (-x<sup>2</sup>)) = 1, whereas mag(x<sup>2</sup> + (-x<sup>2</sup>)) = 0. So for two functions f and g, (f+g) can have any magnitude less than or equal to max(mag(f), mag(g)), because the + hides those extra terms behind the O-equivalence.

I thought that, instead of the faulty max rule, we had agreed to represent function addition by ln(e<sup>f</sup> e<sup>g</sup>)?

We did?

It seems that it depends on the "breadth" ....

"Breadth"?

Yes, if p has more monomials than q, it seems that q*p > p*q. Any counterexamples?

No, I was just wondering what you meant by "breadth".

Yes, and I answered that breadth could mean the number of monomials in a polynomial, as opposed to the degree, which I would call depth.

Ah, yes, if you consider them as expression trees.

In general, M*N > N*M if M is super-polynomial, and N*M > M*N if M is logarithmic. I'm not sure what the behaviour is when M is polynomial.

But for what N? I guess N polynomial.

It seems to also work for non-polynomial N, except that in some cases you get equality instead of strict inequality. But I may have missed some cases.

Yes, but surely not for N > M?

Hmm, you're right. Maybe it only works for M > N (I didn't actually work out all the cases to check :oops:).

Re: tetration and higher operations

Postby bo198214 » Wed Nov 29, 2006 11:31 am

quickfur wrote:
If, for example, you regard the functions built recursively by starting with id and then, for each f, g already obtained, also taking f<sup>g</sup>, then it is indeed valid that
id<sup>f<sub>1</sub>...f<sub>m</sub></sup> > id<sup>g<sub>1</sub>...g<sub>n</sub></sup>
if the maximum of the f<sub>i</sub> is greater than the maximum of the g<sub>i</sub> (in the Hardy order); in particular, if f > g, then
id<sup>f</sup> > id<sup>g<sup>n</sup></sup> for each n.

If, however, we also allow inverse functions, i.e. recursively build the set of functions starting with id and, for each f, g obtained, also take f<sup>g</sup>, f∘g, and f<sup>-1</sup>, then this rule is no longer valid.

Oh really?

Yes, that's why I insist so much on clarifying the lexicographic principle: when it is applicable and when it is not.

Yes, but then I don't understand the part in your original example where you went from f > E*L*1 to f > E*L*1 + E*L*1. Suppose, say, f ~ 1.5*E*L*1; then the derivation "f > E*L*1 implies f > E*L*1 + E*L*1" is invalid.

The rule I applied was: F > G, therefore E*F > k*E*G. It is valid for constant F, G (and for superconstants, i.e. terms containing only +, *, E, and constants). I did not use: F > G, therefore F > k*G. Though it looks as if F > G gave L*F > L*G, and hence E*L*F > k*E*L*G, and hence F > k*G; but we know that for constants r > s, L*r ~ L*s. I think we can close this subtopic now anyway.
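
A rough sympy sanity check of this rule (here E = exp, L = log, * is composition; the picks of F, G, and k are my own):

[code]
from sympy import symbols, exp, log, limit, logcombine, oo

x = symbols('x', positive=True)

# For these superconstant picks F - G -> oo, so exp(F)/(k*exp(G)) -> oo,
# i.e. E*F > k*E*G for any constant k.
F, G, k = x**2, x, 10**6
print(limit(exp(F) / (k * exp(G)), x, oo))  # oo

# Contrast: for constants r > s the logs differ only by a constant,
# so L*r ~ L*s.
r, s = 3, 2
print(logcombine(log(r*x) - log(s*x)))      # log(3/2), a constant gap
[/code]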

I thought that, instead of the faulty max rule, we had agreed to represent function addition by ln(e<sup>f</sup> e<sup>g</sup>)?

We did?

It was your turn to answer, not to ask! Or is that the polite form of "we didn't agree"?

In general, M*N > N*M if M is super-polynomial, and N*M > M*N if M is logarithmic. I'm not sure what the behaviour is when M is polynomial.

But for what N? I guess N polynomial.

It seems to also work for non-polynomial N, except that in some cases you get equality instead of strict inequality. But I may have missed some cases.

Yes, but surely not for N > M?

Hmm, you're right. Maybe it only works for M > N (I didn't actually work out all the cases to check :oops:).

It is a symmetric assertion: if you say that for superpolynomial M we have M*N > N*M, then you also say that for superpolynomial N we have N*M > M*N, which is a contradiction if you don't restrict N to something other than superpolynomials. If you allowed equality, then (without restriction on N) you would get something like: if M and N are superpolynomials, then M*N ~ N*M, which is probably also false.

Re: tetration and higher operations

Postby quickfur » Fri Dec 01, 2006 5:03 pm

bo198214 wrote:
quickfur wrote:
If, for example, you regard the functions built recursively by starting with id and then, for each f, g already obtained, also taking f<sup>g</sup>, then it is indeed valid that
id<sup>f<sub>1</sub>...f<sub>m</sub></sup> > id<sup>g<sub>1</sub>...g<sub>n</sub></sup>
if the maximum of the f<sub>i</sub> is greater than the maximum of the g<sub>i</sub> (in the Hardy order); in particular, if f > g, then
id<sup>f</sup> > id<sup>g<sup>n</sup></sup> for each n.

If, however, we also allow inverse functions, i.e. recursively build the set of functions starting with id and, for each f, g obtained, also take f<sup>g</sup>, f∘g, and f<sup>-1</sup>, then this rule is no longer valid.

Oh really?

Yes, that's why I insist so much on clarifying the lexicographic principle: when it is applicable and when it is not.

Hmm. I'm not sure how we got to function inverses; maybe it was something I said without qualification, but I have always considered only inverses expressible algebraically. I thought most of the rules should generalize to other inverses as well, but apparently I was wrong. In any case, I still don't understand what the problem is with the lexicographic comparison, unless your whole point was that allowing inverses breaks it, in which case I apologize for completely missing your point.

[...]
I thought that, instead of the faulty max rule, we had agreed to represent function addition by ln(e<sup>f</sup> e<sup>g</sup>)?

We did?

It was your turn to answer, not to ask! Or is that the polite form of "we didn't agree"?

All I remembered was that you suggested representing addition by rephrasing it as a logarithmic composition. I'm not aware that we actually agreed on it. I'm not saying that I disagree, just that I thought it was only your suggestion, and I hadn't really thought about it that much since.

In any case, I think you're right that trying to use O-equivalence is leading to too many complicated rules and special cases. Maybe we should start over with Hardy's definition and see how far we can get.

[...]It is a symmetric assertion: if you say that for superpolynomial M we have M*N > N*M, then you also say that for superpolynomial N we have N*M > M*N, which is a contradiction if you don't restrict N to something other than superpolynomials. If you allowed equality, then (without restriction on N) you would get something like: if M and N are superpolynomials, then M*N ~ N*M, which is probably also false.

OK, I wasn't thinking when I wrote that. :) You're right: if N > M, then you can't possibly say that M*N > N*M just because M is superpolynomial. So maybe that should be rephrased as: if M >= N, then M*N >= N*M when M is superpolynomial. (Is the superpolynomial requirement necessary here? It seems to also work for some polynomial/logarithmic cases.)
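
Not part of the thread, just a rough sympy probe of this rephrased conjecture (here * is composition, so M*N means M(N(x)); the compare helper and the sample functions are my own choices):

[code]
# Compare M∘N against N∘M by the limit of their ratio as x -> oo:
# oo suggests M∘N > N∘M, 0 suggests the reverse, 1 suggests equal growth.
from sympy import symbols, exp, log, limit, oo

x = symbols('x', positive=True)

def compare(M, N):
    return limit(M.subs(x, N) / N.subs(x, M), x, oo)

print(compare(exp(x), x**2))  # oo : superpolynomial M, M > N -> M∘N > N∘M
print(compare(log(x), x**2))  # 0  : logarithmic M            -> N∘M > M∘N
print(compare(x**3, x**2))    # 1  : both polynomial          -> equal growth
[/code]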

Re: tetration and higher operations

Postby bo198214 » Sat Dec 02, 2006 10:16 pm

quickfur wrote:Hmm. I'm not sure how we got to function inverses

Because of me.

I still don't understand what the problem is with the lexicographic comparison,

Perhaps you simply restate it officially and then I agree with you ;) It was merely a slight interpretation question, as I see it.

I'm not saying that I disagree, just that I thought it was only your suggestion, and I hadn't really thought about it that much since.

So do you agree, then?
From my perspective it is simply more practicable to avoid the faulty max rule while still being able to express function addition in the magnitude numbers. Though I don't know whether you can reach the same results without the max rule (for example, for polynomials).

In any case, I think you're right that trying to use O-equivalence is leading to too many complicated rules and special cases. Maybe we should start over with Hardy's definition and see how far we can get.


I just set up a forum with LaTeX support! I would love to have you start the first thread there ... :) I mean, our thread is quite non-tetraspace anyway, but my forum is explicitly dedicated to unusual mathematics. I also invite houserichichi and everyone interested ([blink]advertising[/blink]).

Postby PWrong » Sat Dec 16, 2006 3:12 pm

This isn't exactly a measure of how fast functions grow, but I've found a kind of measure of the "height" of a function as it looks when you write it down.

The idea is that you repeatedly take the log and then the derivative of the function, until you get a function whose limit is zero as x -> infinity. That is, you work out d/dx ln[f(x)], or f'(x)/f(x), whichever is easier. The number of times you do this is the order of the function. For example:

f(x) = x^2
d/dx ln[x^2] = f'(x)/f(x) = 2x/x^2 = 2/x
This tends to zero, so the order is 1.

f(x) = e^(e^x)
d/dx ln[e^(e^x)] = e^x
d/dx ln[e^x] = 1
d/dx ln[1] = 0
The zero function tends to zero, so the order is 3.

Functions like 1/x have order 0, while any polynomial has order 1. Any rational function has order either 0 or 1. e^x has order 2.
e^(e^(...^x)) with n e's has order n+1.

e^(x^2) and x^x both have order 2.
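
A minimal sympy sketch of this procedure, under the assumption that each step's limit is computable (the max_steps cap is my own safeguard, not part of the definition):

[code]
# Repeatedly replace f by d/dx ln[f(x)] = f'(x)/f(x) until the result
# tends to 0 as x -> oo; the number of steps taken is the "order".
from sympy import symbols, diff, limit, simplify, exp, oo

x = symbols('x', positive=True)

def order(f, max_steps=10):
    n = 0
    while limit(f, x, oo) != 0 and n < max_steps:
        f = simplify(diff(f, x) / f)
        n += 1
    return n

print(order(1/x))          # 0
print(order(x**2))         # 1
print(order(exp(x)))       # 2
print(order(exp(exp(x))))  # 3
print(order(exp(x**2)))    # 2
[/code]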

Suppose you write exponents in the following way:

p(x)^q(x) is written as p(x)^q if q(x) is just a constant, but as p(x)<sup>q(x)</sup> if q(x) is an actual function.

Then, if the overall function is monotonic, my conjecture is that the order of the function is equal to the "height" of the tower.

Re:

Postby quickfur » Fri Aug 15, 2008 11:44 pm

PWrong wrote:This isn't exactly a measure of how fast functions grow, but I've found a kind of measure of the "height" of a function as it looks when you write it down.

The idea is that you repeatedly take the log and then the derivative of the function, until you get a function whose limit is zero as x -> infinity. That is, you work out d/dx ln[f(x)], or f'(x)/f(x), whichever is easier. The number of times you do this is the order of the function. [...]

I'm not sure if this works for all functions... what about tetrational functions (i.e., f(x+1) = b^f(x))? Can you guarantee that repeatedly taking the log and derivative will eventually reduce it to a diminishing function? Note that functions that grow faster than e^x will in general have a derivative that grows faster than the function itself. So if the function's derivative is greater than e^f(x), then repeatedly taking derivatives and logs will never yield a function that shrinks to 0.
