tetration and higher operations


tetration and higher operations

Postby bo198214 » Wed Nov 15, 2006 7:48 pm

This is a continuation of the thread The exponential power series. It should also be mentioned that there already exists the thread Tetration and other large number sequences.

quickfur wrote:Now, this is an area that I'm deeply interested in.

Hehey, we are beating the 3, whether as the number of dimensions or as the number of operations!

For a long time, I've been interested in generalizing the exponentiation function to tetration.
Ok, here I should add some notes.

1. Recently Galidakis showed that there is a continuous (even arbitrarily often differentiable) extension of the higher operations defined by the original Ackermann function. The problem with the extension is that it is quite arbitrary; a uniqueness criterion is lacking.

2. I don't see a reasonable connection, regarding a continuous extension, between the extension of x^(x^...) and the extension of e^(e^....x). The first one offers nearly no clue how to define, for example, half a tetration, while the latter does have a clue, i.e. f(f(x)) = e<sup>x</sup>.

3. If, of course, we define f(x):=x<sup>x</sup>, then we could try to find the analytic iterates (as introduced in my previous post), because f is analytic and has a fixed point at 1. Look at:
f(f(x)) = (x<sup>x</sup>)<sup>x<sup>x</sup></sup> = x<sup>x<sup>x</sup>·x</sup>
f(f(f(x))) = x<sup>x<sup>x<sup>x</sup>·x</sup>·x<sup>x</sup>·x</sup>
We see that the tower is n+1 high for n iterations of f. Iteration of f can also be done by iteration of g(x):=xe<sup>x</sup> (which is the inverse of the famous Lambert W function), if we notice that g = ln o f o exp and that's why g<sup>n</sup> = ln o f<sup>n</sup> o exp. g even has the desired standard fixed point g(0)=0. The analytic iterate should be unique, though I haven't yet worked out whether the previously given analytic iteration formula converges for g at 0. If not, then at least for each iterate there is a unique analytic function defined on x>0 that approximates the analytic iteration power series at 0.
The iteration of f was indeed considered already by Frappier in "Iteration of a kind of exponentials", 1990, though it seems he showed neither uniqueness nor analyticity of the iterates. So this work remains to be done by someone.
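
To see the conjugation in action, here is a minimal numerical sketch (Python; the helper names are illustrative, not from any particular library) checking g<sup>n</sup> = ln o f<sup>n</sup> o exp for the first few n:

import math

def f(x): return x ** x           # f(x) = x^x
def g(x): return x * math.exp(x)  # g(x) = x*e^x, the inverse of Lambert W

def iterate(h, n, x):             # h applied n times to x
    for _ in range(n):
        x = h(x)
    return x

x = 0.3
for n in (1, 2, 3):
    print(iterate(g, n, x), math.log(iterate(f, n, math.exp(x))))
    # both columns agree up to rounding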

4. e<sup>x</sup> has a family of real analytic iterates (as shown by Kneser as early as 1949); unfortunately these are not unique either, because the development is around a complex fixed point. The quite reputable mathematician Szekeres devoted much time to finding a "best" iterate, but I would say still unsuccessfully (see his "Abel's equations and regular growth ....", 1998, and notice that he was born in 1912!).

Why don't we have such uniqueness problems for the lower operations, multiplication and exponentiation?

The short answer is: because (nm)x = n(mx) and x<sup>nm</sup> = (x<sup>n</sup>)<sup>m</sup>. By this law we have a clue how to define (1/n)x or x<sup>1/n</sup>. The law should then also remain valid on the extension to the rational numbers: x = x<sup>(1/n)n</sup> = (x<sup>1/n</sup>)<sup>n</sup>, and hence it is clear that x<sup>1/n</sup> must be the inverse function of x<sup>n</sup>. Similarly for multiplication, though that case is less interesting.

Unfortunately this law is no longer valid for tetration, i.e. usually x^^(nm) != (x^^n)^^m, because the parenthesization is different (which matters for x<sup>y</sup>, in contrast to multiplication and addition, which are associative). To get out of this dilemma I invented a new number domain, the "arborescent numbers", in which this law is valid for all higher operations. And I am delighted by the prospect that one day all the higher operations will have a unique continuous extension there. The current state of my research can be found here.
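
A tiny check in Python (assuming the usual right-associated towers for ^^) shows the failure already at x=2, n=m=2:

def tetrate(x, n):        # x^^n as a right-associated power tower
    r = x
    for _ in range(n - 1):
        r = x ** r
    return r

print(tetrate(2, 4))               # 2^^(2*2) = 65536
print(tetrate(tetrate(2, 2), 2))   # (2^^2)^^2 = 4^4 = 256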

Postby PWrong » Thu Nov 16, 2006 1:11 pm

The current state of my research can be found here.

Amazing. I think I can follow it, but I'll have to spend a few weeks studying it. That should keep me busy for the holidays.

To get out of this dilemma I invented a new number domain, the "arborescent numbers", in which this law is valid for all higher operations.

Have you actually found this number domain, or just postulated its existence and looked at its properties? i.e. how do you represent these numbers? It's a great idea either way.

Postby bo198214 » Thu Nov 16, 2006 6:44 pm

PWrong wrote:Amazing.

Thanks :) *Welcomes any criticism*

Have you actually found this number domain, or just postulated its existence and looked at its properties? i.e. how do you represent these numbers?

I construct (additively non-associative) counterparts of the natural, fractional and (positive) real numbers (which are constructed in that succession). Unfortunately I could proceed rigorously only up to the counterpart of the fractional numbers (the fractional trees). These are already quite difficult, but at least I could find an algorithm (and indeed programmed it) that tests two such numbers for equality. The numbers are of the form [a<sub>1</sub>,...,a<sub>m</sub>]<sup>-1</sup>[b<sub>1</sub>,...,b<sub>n</sub>], where a<sub>1</sub>,...,a<sub>m</sub>,b<sub>1</sub>,...,b<sub>n</sub> are recursively again such numbers and the brackets denote multisets, i.e. sequences where order doesn't matter. I did not intend this "fraction" kind of structure (i.e. a<sup>-1</sup>b; note that multiplication is associative but not commutative here); it emerged from the algebraic rules. There is even a (difficult) kind of canceling in these fractions.

The counterpart of the positive real numbers (which however is essential for defining the continuous higher operations, because for example on the fractional numbers you cannot even define square roots) is quite immature (and that's why it is not included in the given document), but it is in a strange way connected with the (analytic) iteration of real functions (and maybe here tetration and iteration join again). That's why I am currently so interested in the topic of analytic iteration ;)

Re: tetration and higher operations

Postby quickfur » Mon Nov 20, 2006 1:07 am

bo198214 wrote:This is a continuation of the thread The exponential power series. It should also be mentioned that there already exists the thread Tetration and other large number sequences.

quickfur wrote:Now, this is an area that I'm deeply interested in.

Hehey, we are beating the 3, whether as the number of dimensions or as the number of operations!

Heh. Never thought of it that way. :)

For a long time, I've been interested in generalizing the exponentiation function to tetration.
Ok, here I should add some notes.

1. Recently Galidakis showed that there is a continuous (even arbitrarily often differentiable) extension of the higher operations defined by the original Ackermann function. The problem with the extension is that it is quite arbitrary; a uniqueness criterion is lacking.

Interesting. Fortunately, at least within the scope of my original considerations, uniqueness is not a big problem, because the relative behaviour of functions at x->infinity is governed by the large-scale behaviour rather than by the details (which are what determine uniqueness). As far as I'm concerned, I could just as well consider functions from the natural numbers to the real numbers and my analyses would still yield similar results. (More on this later.)

2. I don't see a reasonable connection, regarding a continuous extension, between the extension of x^(x^...) and the extension of e^(e^....x). The first one offers nearly no clue how to define, for example, half a tetration, while the latter does have a clue, i.e. f(f(x)) = e<sup>x</sup>.

Yes, the breakdown of associativity causes a lot of conventional properties to be absent.

Sometimes I wonder, though, if our expectation of such conventional behaviour could be the reason we haven't even the vaguest idea how to visualize, say, a non-constructible set implied by the axiom of choice. (For example, the nature of the infinite convolutions in a Banach-Tarski paradoxical decomposition.)

3. If, of course, we define f(x):=x<sup>x</sup>, then we could try to find the analytic iterates (as introduced in my previous post), because f is analytic and has a fixed point at 1. Look at:
f(f(x)) = (x<sup>x</sup>)<sup>x<sup>x</sup></sup> = x<sup>x<sup>x</sup>·x</sup>
f(f(f(x))) = x<sup>x<sup>x<sup>x</sup>·x</sup>·x<sup>x</sup>·x</sup>
We see that the tower is n+1 high for n iterations of f. Iteration of f can also be done by iteration of g(x):=xe<sup>x</sup> (which is the inverse of the famous Lambert W function), if we notice that g = ln o f o exp and that's why g<sup>n</sup> = ln o f<sup>n</sup> o exp. g even has the desired standard fixed point g(0)=0. The analytic iterate should be unique, though I haven't yet worked out whether the previously given analytic iteration formula converges for g at 0. If not, then at least for each iterate there is a unique analytic function defined on x>0 that approximates the analytic iteration power series at 0.
The iteration of f was indeed considered already by Frappier in "Iteration of a kind of exponentials", 1990, though it seems he showed neither uniqueness nor analyticity of the iterates. So this work remains to be done by someone.

Hmm. You're right, a continuous extension of tetration would be nontrivial. Hmm. In my case, though, all I need is one such extension with the property that its values at natural numbers coincide with the tetration of naturals.

4. e<sup>x</sup> has a family of real analytic iterates (as shown by Kneser as early as 1949); unfortunately these are not unique either, because the development is around a complex fixed point. The quite reputable mathematician Szekeres devoted much time to finding a "best" iterate, but I would say still unsuccessfully (see his "Abel's equations and regular growth ....", 1998, and notice that he was born in 1912!).

Interesting.

Why don't we have such uniqueness problems for the lower operations, multiplication and exponentiation?

The short answer is: because (nm)x = n(mx) and x<sup>nm</sup> = (x<sup>n</sup>)<sup>m</sup>. By this law we have a clue how to define (1/n)x or x<sup>1/n</sup>. The law should then also remain valid on the extension to the rational numbers: x = x<sup>(1/n)n</sup> = (x<sup>1/n</sup>)<sup>n</sup>, and hence it is clear that x<sup>1/n</sup> must be the inverse function of x<sup>n</sup>. Similarly for multiplication, though that case is less interesting.

Unfortunately this law is no longer valid for tetration, i.e. usually x^^(nm) != (x^^n)^^m, because the parenthesization is different (which matters for x<sup>y</sup>, in contrast to multiplication and addition, which are associative).

Yeah, non-associative functions are really, really hard to handle. I wish somebody would explore this more and relate it in terms that can be understood without requiring a PhD in math. ;)

To get out of this dilemma I invented a new number domain, the "arborescent numbers", in which this law is valid for all higher operations. And I am delighted by the prospect that one day all the higher operations will have a unique continuous extension there. The current state of my research can be found here.

Very interesting. I haven't taken a closer look at this yet, but this is certainly a very interesting way to approach the problem. :)

Anyway, since we're sharing research here, maybe I can take the liberty to talk about how I independently "discovered" transfinite and infinitesimal real numbers long before I knew about the research surrounding hyperreals.

The motivation behind all of this came from computer science classes talking about the complexity of algorithms: O(n), O(log n), O(n<sup>2</sup>), etc. Obviously, these are closely related to the corresponding real functions x, ln x, x<sup>2</sup>, etc. I learned that O(log n) grows slower than O(sqrt(n)), and immediately the question arises: at what point does x<sup>1/n</sup> become slower than ln x? Then I learned that O(2<sup>n</sup>) is much worse than O(n<sup>3</sup>) or O(n<sup>4</sup>), and again the question arises: at what point does x<sup>n</sup> surpass 2<sup>x</sup>?

Of course, the answer is that ln x is sub-polynomial, in that it is asymptotically slower than x<sup>s</sup> for all s>0 (where s is any positive real number), so you could take x<sup>0.0000001</sup> and it would still exceed ln x eventually. Correspondingly, 2<sup>x</sup> (or any exponential, for that matter) is super-polynomial and is asymptotically faster than x<sup>s</sup> for all real s, so even 1.000000001<sup>x</sup> would eventually outgrow x<sup>100000000</sup>.
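
A small numeric sketch (Python; s chosen arbitrarily small) makes the first claim tangible by searching for the point past which x<sup>s</sup> finally stays above ln x:

import math

s = 0.01
x = 100.0                # here ln x = 4.6 still beats x**s = 1.05
while x ** s <= math.log(x):
    x *= 2               # double until the power wins for good
print(x, x ** s, math.log(x))   # the power only wins around x ~ 1e281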

Now, this is just the beginning of the fun. It can be shown that all polynomials of degree d behave asymptotically like x<sup>d</sup>, and this can be generalized to real d. Because of this, I chose to represent the "magnitude" of the polynomial as the real number d. Accordingly, (generalized) polynomials with negative degree would have negative magnitudes, which is interpreted as the rate at which they shrink to 0 as x->infinity. In the same vein, e<sup>-x</sup> has a super-polynomial negative magnitude, since it shrinks to 0 much faster than any inverse power x<sup>-d</sup>. Dually, this means that 1/ln x is a slower-shrinking function than any inverse power of x.

What gets more interesting is what happens with the magnitude when you multiply functions. Multiplying x<sup>r</sup> by x<sup>s</sup> yields x<sup>r+s</sup>, which has degree (r+s). This result can be proven to work for all generalized polynomials as well. Furthermore, for generalized polynomials, function composition corresponds with the multiplication of the degrees: for polynomials p(x) and q(x) with degrees m and n, p(q(x)) behaves like a polynomial with degree m*n. So we may say that the magnitudes of polynomial functions correspond with the real numbers under addition and multiplication.
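
Both degree rules are easy to spot-check symbolically; a short sympy sketch (the sample polynomials are arbitrary):

from sympy import symbols, degree, expand

x = symbols('x')
p = 3*x**2 + x + 1   # degree 2
q = x**3 - 5*x       # degree 3
print(degree(expand(p*q), x))           # 5 = 2 + 3: multiplying adds degrees
print(degree(expand(p.subs(x, q)), x))  # 6 = 2 * 3: composing multiplies them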

Bringing exponentials and logarithms into the picture, then, we may regard the magnitude of ln x, which I denote as lambda (I'll use L here to get around phpBB's limitation), as an "infinitesimal", since L is less than r for all real numbers r>0. (L lies between 0 and all positive real numbers.) Similarly, the magnitude of e<sup>x</sup>, E (or xi), is an "infinite number". L and E are inverses, since the composition of the logarithm and the exponential is the constant function, L*E = 0. (0 denotes the magnitude of the polynomial with degree 0, i.e., the constant function).

Now, the "multiplication" of magnitudes is not a single-valued function for non-polynomial magnitudes, so the correspondence should not be misconstrued to be an isomorphism. Also, it is non-commutative, since function composition is non-commutative. Regardless, some interesting things can be observed: ln(ln x) corresponds with L*L, and it should be easy to see that L*L is less than L (i.e., ln(ln x) grows slower than ln x), and, in general, L<sup>n</sup> is less than L<sup>m</sup> if n > m. Also, it is relatively easily proven that (ln x)<sup>n</sup> is still sub-polynomial, although greater than ln x. Hence, there is a whole hierarchy of sub-polynomial magnitudes, all of which lie between 0 and all positive real numbers. It can be easily proven that x ln x, which corresponds with (1+L), lies between 1 and all reals greater than 1. Similar results can be obtained for the magnitudes related to L. These must therefore correspond with infinitesimals.

The exponentials are a lot more interesting. It is easily seen that exponentiating a polynomial causes its non-degree terms to become asymptotically significant: x<sup>2</sup> and x<sup>2</sup>+x behave the same asymptotically, but e<sup>x<sup>2</sup></sup> is asymptotically less than e<sup>x<sup>2</sup>+x</sup>. (In this sense, the "multiplication" of magnitudes is multivalued when exponentials are involved.) The exponential e<sup>x</sup> is not at all the "smallest" infinite magnitude, since a composition such as e<sup>(ln x)^2</sup> is smaller than e<sup>x<sup>r</sup></sup> for all positive r, yet it is super-polynomial.

Now, the L's are by no means the only infinitesimals... the inverse of tetration, which corresponds to the iterated log of algorithm analysis, yields a sub-polynomial magnitude that is "infinitely" smaller than any magnitude derived from L. Since Ackermann has shown us that there is an infinite hierarchy of iterated operations, there must correspondingly be an infinite hierarchy of inverses, which yield an infinite hierarchy of infinitesimal magnitudes. In fact, the diagonal Ackermann function A'(n) = A(n,n) is a "super operation" whose magnitude must be greater than the magnitudes of all the iterated operations. The inverse of A'(n) would yield a magnitude so small it is less than all the infinitesimal magnitudes corresponding to the iterated operations.

This can be continued indefinitely: one could iterate A'(n) to generate greater functions, and then diagonalize them to generate A''(n), a super-Ackermann function, then iterate *that*, ad infinitum. These would generate their corresponding inverses: functions each converging to 0 infinitely more slowly than the last.
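
For concreteness, here is the diagonal A'(n) = A(n,n) in a short memoized sketch (Python); already A(4,4) is far beyond anything representable:

import sys
from functools import lru_cache

sys.setrecursionlimit(1000000)

@lru_cache(maxsize=None)
def A(m, n):                      # the two-argument Ackermann function
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print([A(n, n) for n in range(4)])   # [1, 3, 7, 61]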

I highly suspect that the magnitudes generated this way would result in a set significantly larger than the set of real numbers. But one can never be too sure about anything when dealing with uncountable cardinals without actually working through the proofs, so who knows. ;)

Re: tetration and higher operations

Postby bo198214 » Tue Nov 21, 2006 2:00 pm

quickfur wrote:Yeah, non-associative functions are really, really hard to handle. I wish somebody would explore this more and relate it in terms that can be understood without requiring a PhD in math. ;)

I think, on the contrary, that the concept of non-associative operations is quite natural. If you have a binary operation and an expression in that operation, then you would quite naturally express it as a binary tree (i.e. the parse tree a compiler would create).
For example
associative:
4 *' x = x * x * x * x

non-associative:
((1,1),(1,1)) *' x = (x * x) * (x * x)
(1,(1,(1,1))) *' x = x * (x * (x * x))


Here *' denotes the successor operation of the operation *. For example, multiplication is the successor operation of addition, and taking powers is the successor operation of multiplication. So if we use binary trees instead of natural numbers, we can also handle non-associative operations. The natural numbers are the associative binary trees. And the idea of trees/expressions is the most general one can think of, both in computer science (for example, in Lisp, lists are also regarded as special trees) and in universal algebra.

And so we can define arbitrarily high operations on the binary trees, in a way that is unambiguous regarding parenthesization.
The addition of the binary trees a and b is simply (a,b). And from this start we can define multiplication as the successor of addition, and so on. Formally, we define the successor operation recursively as:
1 *' x = x
(a,b) *' x = (a *' x) * (b *' x)

And then we have the hierarchy a %<sub>0</sub> x = (a,x) and a %<sub>n+1</sub> x = a %<sub>n</sub>' x.

It's also easy to see that if we assign to the binary tree x its "associative tree", i.e. the number |x| of 1's in it, then we have:
|a %<sub>0</sub> b| = |a| + |b|
|a %<sub>1</sub> x| = |a||x|
|a %<sub>2</sub> x| = |x|<sup>|a|</sup>

and there is no operation ^^ on the natural numbers such that:
|a %<sub>3</sub> x| = |x| ^^ |a|
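
These definitions translate directly into code. A sketch in Python (trees as nested tuples, with 1 as the leaf) that also checks the |.| correspondences for the first three levels:

def op(n, a, x):                       # a %_n x
    if n == 0:
        return (a, x)                  # a %_0 x = (a, x)
    if a == 1:
        return x                       # 1 *' x = x
    l, r = a
    return op(n - 1, op(n, l, x), op(n, r, x))   # (l,r) *' x = (l *' x) * (r *' x)

def size(t):                           # |t|, the number of 1's in the tree
    return 1 if t == 1 else size(t[0]) + size(t[1])

a, x = (1, (1, 1)), ((1, 1), 1)        # |a| = |x| = 3
print(size(op(0, a, x)), size(op(1, a, x)), size(op(2, a, x)))   # 6, 9, 27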

Interestingly the multiplication xy:=x %<sub>1</sub> y (which corresponds to substituting each 1 in x by y) is again associative (though not commutative). And we have our beloved:

(ab) %<sub>n</sub> x = a %<sub>n</sub> ( b %<sub>n</sub> x)

for every higher operation (n>1). This is exactly the law whose failure was the stumbling block for a unique continuous extension of tetration.

Of course these binary trees are still discrete and not "continuous"; it becomes interesting how to define a division, and later on convergence etc., on them. But as an introduction this shall suffice.

Anyway, since we're sharing research here, maybe I can take the liberty to talk about how I independently "discovered" transfinite and infinitesimal real numbers long before I knew about the research surrounding hyperreals.


What gets more interesting is what happens with the magnitude when you multiply functions. Multiplying x<sup>r</sup> by x<sup>s</sup> yields x<sup>r+s</sup>, which has degree (r+s). This result can be proven to work for all generalized polynomials as well. Furthermore, for generalized polynomials, function composition corresponds with the multiplication of the degrees: for polynomials p(x) and q(x) with degrees m and n, p(q(x)) behaves like a polynomial with degree m*n. So we may say that the magnitudes of polynomial functions correspond with the real numbers under addition and multiplication.

Bringing exponentials and logarithms into the picture, then, we may regard the magnitude of ln x, which I denote as lambda (I'll use L here to get around phpBB's limitation), as an "infinitesimal", since L is less than r for all real numbers r>0. (L lies between 0 and all positive real numbers.) Similarly, the magnitude of e<sup>x</sup>, E (or xi), is an "infinite number". L and E are inverses, since the composition of the logarithm and the exponential is the constant function, L*E = 0. (0 denotes the magnitude of the polynomial with degree 0, i.e., the constant function).

Slight mistake: L*E = 1. It's the identity function, not the constant function.
Yes, and here we also see that the multiplication (which corresponds to composition) is associative but not commutative (except for the polynomials). On the other hand we always have distributivity from the left, i.e. for any pointwise defined operation + on functions, we have:
(f+g) o h = (f o h) + (g o h)
And indeed I consider such structures and call them coppices: a coppice is an algebraic structure with a constant 1 and operations addition (+), multiplication (*) and inversion (~), such that (1,*,~) forms a group and the multiplication is distributive over the addition.
For example, the function space where the addition is f<sup>g</sup>, the multiplication is function composition, and the inverse is the function inverse, is such a structure.
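
In that example coppice the left-distributive law reads (f<sup>g</sup>) o h = (f o h)<sup>g o h</sup>; a quick sympy spot-check with arbitrarily chosen f, g, h:

from sympy import symbols, exp, simplify

x = symbols('x', positive=True)
f, g, h = x + 1, 2*x, exp(x)

lhs = (f**g).subs(x, h)              # (f "+" g) o h
rhs = f.subs(x, h) ** g.subs(x, h)   # (f o h) "+" (g o h)
print(simplify(lhs - rhs))           # 0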

Hardy, however, showed that all functions that are composed of +,-,*,/,exp,ln can be linearly ordered regarding their behaviour at infinity. I.e. for two such unequal functions f,g we have either f(x)>g(x) for all x>x<sub>0</sub> (for some x<sub>0</sub>), or vice versa. (Note that if we had included sin(x) this would not be the case.)

This can be continued indefinitely: one could iterate A'(n) to generate greater functions, and then diagonalize them to generate A''(n), a super-Ackermann function, then iterate *that*, ad infinitum. These would generate their corresponding inverses: functions each converging to 0 infinitely more slowly than the last.
Yes, this idea came to my mind a few days ago. If we have any (analytic) function f, then we would have the unique analytic iterates f<sup>s</sup>(x), and hence could construct a "higher" function F(x)=f<sup>x</sup>(x). (Though it is not that simple; moreover we would have to demand f'(0)=1, and the new function F(x) would not always be analytic at 0 again, etc., etc.)

I highly suspect that the magnitudes generated this way would result in a set significantly larger than the set of real numbers. But one can never be too sure about anything when dealing with uncountable cardinals without actually working through the proofs, so who knows. ;)

Hm, if the functions remain analytic, then we can have no more magnitudes than there are analytic functions. Each analytic function can be described by all its derivatives at one point, so we would have at most R<sup>N</sup> analytic functions, which has the same cardinality as R?

Re: tetration and higher operations

Postby quickfur » Tue Nov 21, 2006 5:09 pm

bo198214 wrote:
quickfur wrote:Yeah, non-associative functions are really, really hard to handle. I wish somebody would explore this more and relate it in terms that can be understood without requiring a PhD in math. ;)

I think, on the contrary, that the concept of non-associative operations is quite natural. If you have a binary operation and an expression in that operation, then you would quite naturally express it as a binary tree (i.e. the parse tree a compiler would create).
[...]
Of course these binary trees are still discrete and not "continuous"; it becomes interesting how to define a division, and later on convergence etc., on them. But as an introduction this shall suffice.

Ahh, I understand now. Very nice idea!

And yes, I've wondered about "continuous trees" before... I'm not sure how to define them, though; they'd be really hard to visualize! They're likely to be fractal-like objects without the self-similarity. Seems reminiscent of the kind of non-constructible sets the Axiom of Choice produces. :) Except that these would be constructible if we could define them properly. The behaviour of these things may even turn out to be a good way of visualizing a truly non-constructible set.

[...]What gets more interesting is what happens with the magnitude when you multiply functions. Multiplying x<sup>r</sup> by x<sup>s</sup> yields x<sup>r+s</sup>, which has degree (r+s). This result can be proven to work for all generalized polynomials as well. Furthermore, for generalized polynomials, function composition corresponds with the multiplication of the degrees: for polynomials p(x) and q(x) with degrees m and n, p(q(x)) behaves like a polynomial with degree m*n. So we may say that the magnitudes of polynomial functions correspond with the real numbers under addition and multiplication.

Bringing exponentials and logarithms into the picture, then, we may regard the magnitude of ln x, which I denote as lambda (I'll use L here to get around phpBB's limitation), as an "infinitesimal", since L is less than r for all real numbers r>0. (L lies between 0 and all positive real numbers.) Similarly, the magnitude of e<sup>x</sup>, E (or xi), is an "infinite number". L and E are inverses, since the composition of the logarithm and the exponential is the constant function, L*E = 0. (0 denotes the magnitude of the polynomial with degree 0, i.e., the constant function).

Slight mistake: L*E = 1. It's the identity function, not the constant function.

Oh, right. Oops. :)

Yes, and here we also see that the multiplication (which corresponds to composition) is associative but not commutative (except for the polynomials). On the other hand we always have distributivity from the left, i.e. for any pointwise defined operation + on functions, we have:
(f+g) o h = (f o h) + (g o h)
And indeed I consider such structures and call them coppices: a coppice is an algebraic structure with a constant 1 and operations addition (+), multiplication (*) and inversion (~), such that (1,*,~) forms a group and the multiplication is distributive over the addition.
For example, the function space where the addition is f<sup>g</sup>, the multiplication is function composition, and the inverse is the function inverse, is such a structure.

Interesting.

Hardy, however, showed that all functions that are composed of +,-,*,/,exp,ln can be linearly ordered regarding their behaviour at infinity. I.e. for two such unequal functions f,g we have either f(x)>g(x) for all x>x<sub>0</sub> (for some x<sub>0</sub>), or vice versa. (Note that if we had included sin(x) this would not be the case.)

Yes, which is what I've discovered as well. My "magnitude numbers" are linearly ordered, with the polynomials sitting between the infinitesimals and the transfinites. In fact, it can easily be seen that the polynomials may be regarded as a sort of special case of the exponentials: e<sup>r ln x</sup> = x<sup>r</sup>, which may be thought of as e<sup>x</sup> composed with r ln x, one of the L magnitudes. The behaviour of e<sup>kx</sup> may be thought of as a "coset" of the polynomials of degree k under exponential composition.

In fact, I think any function that becomes monotonic after a finite point x=K has a magnitude that can be linearly ordered with respect to the polynomials, logarithms, and exponentials.

I highly suspect that the magnitudes generated this way would result in a set significantly larger than the set of real numbers. But one can never be too sure about anything when dealing with uncountable cardinals without actually working through the proofs, so who knows. ;)

Hm, if the functions remain analytic, then we can have no more magnitudes than there are analytic functions. Each analytic function can be described by all its derivatives at one point, so we would have at most R<sup>N</sup> analytic functions, which has the same cardinality as R?

Hmm, you're right. I think what this shows us is that sets of cardinality c can be rearranged in a lot more bizarre ways than has ever occurred to us. :)
I've been trying for a long time to find a way of visualizing a set of cardinality 2<sup>c</sup>, but so far it completely eludes me.

Re: tetration and higher operations

Postby bo198214 » Wed Nov 22, 2006 11:07 am

quickfur wrote:And yes, I've wondered about "continuous trees" before... I'm not sure how to define them, though; they'd be really hard to visualize! They're likely to be fractal-like objects without the self-similarity.

My approach is first to extend the trees by a division (these are indeed fractal-like objects without self-similarity). And then look further for topologies and convergence ;)

Seems reminiscent of the kind of non-constructible sets the Axiom of Choice produces. :) Except that these would be constructible if we could define them properly. The behaviour of these things may even turn out to be a good way of visualizing a truly non-constructible set.

I am not quite sure whether it is possible to visualize something for which it is impossible to give a definition. It's similar to visualizing a non-computable function or the like.

In fact, I think any function that becomes monotonic after a finite point x=K has a magnitude that can be linearly ordered with respect to the polynomials, logarithms, and exponentials.

No no, don't let yourself be deceived. For example, the function x+sin(x) is strictly increasing, but is not comparable with x. It's not trivial to show the linear orderability of the Hardy functions, and in fact I don't even know whether there exists an algorithm that decides, for two expressions in +,-,*,/,exp,ln, which one is finally greater than the other, or merely whether they are equal (but surely there is research about this somewhere).

I've been trying for a long time to find a way of visualizing a set of cardinality 2<sup>c</sup>, but so far it completely eludes me.

Isn't this the set of all functions R->{0,1}? Perhaps you should explain a bit more about your non-constructible sets and visualization.

Re: tetration and higher operations

Postby quickfur » Wed Nov 22, 2006 6:10 pm

bo198214 wrote:
quickfur wrote:And yes, I've wondered about "continuous trees" before... I'm not sure how to define them, though; they'd be really hard to visualize! They're likely to be fractal-like objects without the self-similarity.

My approach is first to extend the trees by a division (these are indeed fractal-like objects without self-similarity). And then look further for topologies and convergence ;)

Well, good luck. :)

Seems reminiscent of the kind of non-constructible sets the Axiom of Choice produces. :) Except that these would be constructible if we could define them properly. The behaviour of these things may even turn out to be a good way of visualizing a truly non-constructible set.

I am not quite sure whether it is possible to visualize something for which it is impossible to give a definition. It's similar to visualizing a non-computable function or the like.

The Busy Beaver numbers are noncomputable (in the sense that you cannot compute them with an algorithm), but they have been manually "computed" (in the sense of humans proving certain things about them in a way no algorithm can), so I think it is possible to visualize such things, even if incompletely.

In fact, I think any function that becomes monotonic after a finite point x=K has a magnitude that can be linearly ordered with respect to the polynomials, logarithms, and exponentials.

No no, don't let yourself be deceived. For example, the function x+sin(x) is strictly increasing, but is not comparable with x.

Hmm, you are right. I did try to address this before, but it's messy and doesn't extend well to other similar cases. The main observation is that while x+sin(x) is non-comparable with x, it does share the property that everything that surpasses x will also surpass x+sin(x), and everything that x surpasses will also be surpassed by x+sin(x), and thus you can define an equivalence class that contains both. Unfortunately, this trick only works for certain functions like the trigonometric ones; there are others for which it doesn't work.

It's not trivial to show the linear orderability of the Hardy functions, and in fact I don't even know whether there exists an algorithm that decides, for two expressions in +,-,*,/,exp,ln, which one is finally greater than the other, or merely whether they are equal (but surely there is research about this somewhere).

Oh?? I'm pretty sure I have an algorithm that covers most (if not all) of the cases... which is the whole motivation behind my research into "magnitude numbers" in the first place!

For polynomials, it's pretty obvious how to order two given functions; with polynomials mixed with logarithms, you just treat the logarithms as infinitesimals (and there's a well-defined way to compare different logarithmic expressions themselves). With exponentials it gets a bit tricky, but it's still doable, at least as far as I've found. I may have missed a few cases, but those can probably be easily added to the list of cases to check for.

I've been trying for a long time to find a way of visualizing a set of cardinality 2<sup>c</sup>, but so far it completely eludes me.

Isn't this the set of all functions R->{0,1}? Perhaps you should explain a bit more about your non-constructible sets and visualization.

Yes, that's the set. But trying to visualize the members of that set is non-trivial... since, cardinality-wise, there are infinitely more non-analytic sets than otherwise, and those things can be extremely convoluted and fractal-like, or worse.

Re: tetration and higher operations

Postby bo198214 » Wed Nov 22, 2006 7:37 pm

quickfur wrote:The Busy Beaver numbers are noncomputable (in the sense that you cannot compute them with an algorithm), but they have been manually "computed" (in the sense of humans proving certain things about them in a way no algorithm can), so I think it is possible to visualize such things, even if incompletely.

OK, noncomputability does not mean non-definability. How is this for non-constructible sets; can you give an example?
Do you know, by the way, of an easy (to formulate, not like the Gödelization of formulas ...) number-theoretic question that is not decidable?

Hmm, you are right. I did try to address this before, but it's messy and doesn't extend well to other similar cases. The main observation is that while x+sin(x) is non-comparable with x, it does share the property that everything that surpasses x will also surpass x+sin(x)

No: x+0.25 surpasses x but not x+sin(x). If, of course, you only regard f>g when g/f -> 0 for x->oo, then that's another question. (We have two "surpasses" ideas in circulation. Hardy's is f>g if there exists x<sub>0</sub> such that f(x)>g(x) for each x>x<sub>0</sub>; the other is g/f->0 for x->oo. In the latter, two functions are regarded as equivalent if there is a constant C and an x<sub>0</sub> such that C>|ln(f(x))-ln(g(x))| for each x>x<sub>0</sub>, I think.)
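
The contrast between the two notions is easy to see mechanically; a small sympy sketch:

from sympy import symbols, sin, limit, oo

x = symbols('x')
print(limit((x + sin(x)) / x, x, oo))   # 1: equivalent under the ratio idea
# Under Hardy's pointwise idea the comparison of x + 0.25 with x + sin(x)
# never settles: their difference 0.25 - sin(x) changes sign for
# arbitrarily large x.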


It's not trivial to show the linear orderability of the Hardy functions, and in fact I don't even know whether there exists an algorithm that decides, for two expressions in +,-,*,/,exp,ln, which one is finally greater than the other, or merely whether they are equal (but surely there is research about this somewhere).

Oh?? I'm pretty sure I have an algorithm that covers most (if not all) of the cases... which is the whole motivation behind my research into "magnitude numbers" in the first place!

Hm, perhaps again because the intermixing of both "surpasses" ideas.
But consider for example f(x)=e^(ln(x)+1)+e^(ln(x)+3) and g(x)=2e^(ln(x)+2). Which is finally greater?

and those things can be extremely convoluted and fractal-like, or worse.
Poor quickfur ;) :P

Re: tetration and higher operations

Postby quickfur » Wed Nov 22, 2006 9:11 pm

bo198214 wrote:
quickfur wrote:The Busy Beaver numbers are noncomputable (in the sense that you cannot compute them with an algorithm), but they have been manually "computed" (in the sense of humans proving certain things about them in a way no algorithm can), so I think it is possible to visualize such things, even if incompletely.

OK, noncomputability does not mean non-definability. How is this for non-constructible sets; can you give an example?
Do you know, by the way, of an easy (to formulate, not like the Gödelization of formulas ...) number-theoretic question that is not decidable?

Hmm. Maybe I'm mixing up my definitions here. Nevermind. :P

Hmm, you are right. I did try to address this before, but it's messy and doesn't extend well to other similar cases. The main observation is that while x+sin(x) is non-comparable with x, it does share the property that everything that surpasses x will also surpass x+sin(x)

No: x+0.25 surpasses x but not x+sin(x). If, of course, you only regard f>g when g/f -> 0 for x->oo, then that's another question. (We have two "surpasses" ideas in circulation. Hardy's is f>g if there exists x<sub>0</sub> such that f(x)>g(x) for each x>x<sub>0</sub>; the other is g/f->0 for x->oo. In the latter, two functions are regarded as equivalent if there is a constant C and an x<sub>0</sub> such that C>|ln(f(x))-ln(g(x))| for each x>x<sub>0</sub>, I think.)

Ohhh, I think that's where we got mixed up. I'm using the big-O definition, so the criterion is f > g if lim |f(x)/g(x)| -> infinity. I can see why things would become extremely hard to handle if you're using the other criterion.

It's not trivial to show the linear orderability of the Hardy functions, and in fact I don't even know whether there exists an algorithm that decides, for two expressions in +,-,*,/,exp,ln, which one is finally greater than the other, or merely whether they are equal (but surely there is research about this somewhere).

Oh?? I'm pretty sure I have an algorithm that covers most (if not all) of the cases... which is the whole motivation behind my research into "magnitude numbers" in the first place!

Hm, perhaps again because the intermixing of both "surpasses" ideas.

Yeah, I think that's the mixup here.

But consider for example f(x)=e^(ln(x)+1)+e^(ln(x)+3) and g(x)=2e^(ln(x)+2). Which is finally greater?

Well, in this particular case, f(x) reduces to (e+e<sup>3</sup>)x and g(x) reduces to 2e<sup>2</sup>x, so the comparison is trivial. :) But I see your point: using Hardy's definition it becomes much harder to compare these things.

and those things can be extremely convoluted and fractal-like, or worse.
Poor quickfur ;) :P

The basic problem is that we mortal beings really have no idea of just how huge the cardinality of the continuum is. Our limited perception at most approximates the rational numbers, and deep down, we still think of the difference between Q and R simply as Q having some scattered holes in between set members and R filling them in individually. In actuality, R is so huge that the number of holes is actually |R|: there are infinitely more holes than numbers! It is instructive to note that the cardinality of the set of cosets of Q in R under addition (i.e., sets of the form r+Q = {r+q | q in Q} for a fixed r in R) is the cardinality of R itself. That is to say, Q is so sparse compared to R that it takes |R| copies of Q to fill in all the holes (i.e., Q is less than a drop in an ocean).

Because of this, it's very hard for us to grasp what a truly random function on R is---which is what you'd get if you randomly picked an element from 2<sup>R</sup>. It's so random that its graph might as well be an amorphous mist of scattered points. (After all, the number of non-continuous functions is uncountably larger than the number of continuous functions. You'd almost never find a continuous function if you picked one randomly from 2<sup>R</sup>.)

Re: tetration and higher operations

Postby bo198214 » Wed Nov 22, 2006 9:51 pm

quickfur wrote:
bo198214 wrote:OK, noncomputability does not mean non-definability. How is this for non-constructible sets; can you give an example?
Do you know, by the way, of an easy (to formulate, not like the Gödelization of formulas ...) number-theoretic question that is not decidable?

Hmm. Maybe I'm mixing up my definitions here. Nevermind. :P

No no, you are completely right; it was my fault to identify constructibility with definability. But do you know of such a number-theoretic question anyway? And an example of a definable but non-constructible set?

But consider for example f(x)=e^(ln(x)+1)+e^(ln(x)+3) and g(x)=2e^(ln(x)+2). Which is finally greater?

Well, in this particular case, f(x) reduces to (e+e<sup>3</sup>)x and g(x) reduces to 2e<sup>2</sup>x, so the comparison is trivial. :) But I see your point: using Hardy's definition it becomes much harder to compare these things.


Oh damn, I swapped exp and ln! What I meant was
f(x)=ln(e<sup>x</sup>+1) + ln(e<sup>x</sup>+3)
g(x)=2ln(e<sup>x</sup>+2)
But that's probably still not enough to confuse you ;) so I'll change it into
f(x)=ln(e<sup>x</sup>+1)ln(e<sup>x</sup>+3)
g(x)=(ln(e<sup>x</sup>+2))<sup>2</sup>

The basic problem is that we mortal beings really have no idea of just how huge the cardinality of the continuum is.

Hm, I am anyway a fan of countable things. If I remember rightly, there are countable models of set theory (instead of sets, just consider algorithms that decide whether an element is in them).
For example, consider the real numbers as algorithms producing a decimal (or binary) expansion (constructive analysis). Then you have only countably many real numbers (the number of possible algorithms/strings).
But the problem is not resolved thereby, only translated!
Because you cannot decide whether two such algorithms are equal. And hence you cannot enumerate representatives of the real numbers (which is somewhat the same as counting the real numbers in the standard model).
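
Such a representation is easy to sketch in Python (the chosen precision interface, rational approximations to within 1/10**n, is just one possible convention):

from fractions import Fraction

def sqrt2(n):
    """Rational approximation of sqrt(2) to within 1/10**n, by Newton iteration."""
    eps = Fraction(1, 10 ** n)
    x = Fraction(3, 2)
    while abs(x * x - 2) > eps:
        x = (x + 2 / x) / 2
    return x

print(float(sqrt2(15)))   # 1.4142135623730951
# Deciding whether two such programs denote the same real is, in general,
# impossible, which is why representatives of the reals cannot be
# effectively enumerated.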

Re: tetration and higher operations

Postby quickfur » Wed Nov 22, 2006 10:37 pm

bo198214 wrote:
quickfur wrote:
bo198214 wrote:OK, noncomputability does not mean non-definability. How is this for non-constructible sets; can you give an example?
Do you know, by the way, of an easy (to formulate, not like the Gödelization of formulas ...) number-theoretic question that is not decidable?

Hmm. Maybe I'm mixing up my definitions here. Nevermind. :P

No no, you are completely right; it was my fault to identify constructibility with definability. But do you know of such a number-theoretic question anyway? And an example of a definable but non-constructible set?

Hmm. I'm not familiar enough with number theory to come up with such a set off the top of my head.

But consider for example f(x)=e^(ln(x)+1)+e^(ln(x)+3) and g(x)=2e^(ln(x)+2). Which is finally greater?

Well, in this particular case, f(x) reduces to (e+e<sup>3</sup>)x and g(x) reduces to 2e<sup>2</sup>x, so the comparison is trivial. :) But I see your point: using Hardy's definition it becomes much harder to compare these things.


Oh damn, I swapped exp and ln! What I meant was
f(x)=ln(e<sup>x</sup>+1) + ln(e<sup>x</sup>+3)
g(x)=2ln(e<sup>x</sup>+2)
But that's probably still not enough to confuse you ;) so I'll change it into
f(x)=ln(e<sup>x</sup>+1)ln(e<sup>x</sup>+3)
g(x)=(ln(e<sup>x</sup>+2))<sup>2</sup>

Hehe... I'm not sure if you're looking for Hardy's definition of comparison, or the f/g definition. But using the f/g definition, f(x)/g(x) will converge to a constant at x=oo, according to my guess. If you can prove me wrong, I'll be happy to know. ;) According to my definition, I consider non-zero constant ratios to belong to the same equivalence class, such that 2x and x are equivalent, even though value-wise they are not the same. (Remember that I'm generalizing from big-O notation, so constants and coefficients aren't important.) Now if you're asking if the ratio of f/g is equal to, greater, or less than 1, then that's a different kettle of fish. :)

The basic problem is that we mortal beings really have no idea of just how huge the cardinality of the continuum is.

Hm, I am anyway a fan of countable things. If I remember rightly, there are countable models of set theory (instead of sets, just consider algorithms that decide whether an element is in them).

Wouldn't this just reduce your set theoretic universe to computable sets?

For example, consider the real numbers as algorithms producing a decimal (or binary) expansion (constructive analysis). Then you have only countably many real numbers (the number of possible algorithms/strings).

This is an interesting approach. So even transcendentals like e and pi are still included, yet your set is still countable? Very clever idea.

But the problem is not resolved thereby, only translated!
Because you cannot decide whether two such algorithms are equal. And hence you cannot enumerate representatives of the real numbers (which is somewhat the same as counting the real numbers in the standard model).

I suspect this problem is equivalent to the halting problem, right?

Re: tetration and higher operations

Postby bo198214 » Thu Nov 23, 2006 4:29 pm

quickfur wrote:Hmm. I'm not familiar enough with number theory to come up with such a set off the top of my head.

I thought you would know, because Gödel constructed (to prove his incompleteness theorem) a number-theoretic assertion that can be proved neither wrong nor right. But any ordinary number-theoretic question would puzzle me if it were unprovable. Since you are the knower of logic and set theory, I thought some such examples would have been provided somewhere.

f(x)=ln(e<sup>x</sup>+1)ln(e<sup>x</sup>+3)
g(x)=(ln(e<sup>x</sup>+2))<sup>2</sup>

Hehe... I'm not sure if you're looking for Hardy's definition of comparison, or the f/g definition. But using the f/g definition, f(x)/g(x) will converge to a constant at x=oo, according to my guess. If you can prove me wrong, I'll be happy to know. ;) According to my definition, I consider non-zero constant ratios to belong to the same equivalence class, such that 2x and x are equivalent, even though value-wise they are not the same. (Remember that I'm generalizing from big-O notation, so constants and coefficients aren't important.) Now if you're asking if the ratio of f/g is equal to, greater, or less than 1, then that's a different kettle of fish. :)

Well, I would guess that ln(e^x+c)/x -> 1. Because obviously ln(e^x+c) > x, but it is bounded above by ln(e^x+e^x) = ln(2e^x) = ln(2)+x, and (ln(2)+x)/x -> 1. So f/g -> 1.
But my intention in asking you was that you take your algorithm out of the knapsack, apply it to the expressions, and *in that way* decide which function is greater!
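
A quick numeric check in Python of the limit for the two expressions in question:

import math

def f(x): return math.log(math.exp(x) + 1) * math.log(math.exp(x) + 3)
def g(x): return math.log(math.exp(x) + 2) ** 2

for x in (5.0, 20.0, 50.0):
    print(x, f(x) / g(x))   # approaches 1 from below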

If I remember rightly, there are countable models of set theory (instead of sets, just consider algorithms that decide whether an element is in them).

Wouldn't this just reduce your set theoretic universe to computable sets?

Look for "countable" in Wikipedia Model theory. Am also no expert in model theory.

But the problem is not resolved thereby, only translated!
Because you cannot decide whether two such algorithms are equal. And hence you cannot enumerate representatives of the real numbers (which is somewhat the same as counting the real numbers in the standard model).

I suspect this problem is equivalent to the halting problem, right?

Hm, not exactly, I think. I am not completely sure whether you can enumerate all algorithms that halt for every input. Something in that direction.

Re: tetration and higher operations

Postby quickfur » Thu Nov 23, 2006 4:52 pm

bo198214 wrote:
quickfur wrote:
f(x)=ln(e<sup>x</sup>+1)ln(e<sup>x</sup>+3)
g(x)=(ln(e<sup>x</sup>+2))<sup>2</sup>

Hehe... I'm not sure if you're looking for Hardy's definition of comparison, or the f/g definition. But using the f/g definition, f(x)/g(x) will converge to a constant at x=oo, according to my guess. If you can prove me wrong, I'll be happy to know. ;) According to my definition, I consider non-zero constant ratios to belong to the same equivalence class, such that 2x and x are equivalent, even though value-wise they are not the same. (Remember that I'm generalizing from big-O notation, so constants and coefficients aren't important.) Now if you're asking if the ratio of f/g is equal to, greater, or less than 1, then that's a different kettle of fish. :)

Well, I would guess that ln(e^x+c)/x -> 1. Because obviously ln(e^x+c) > x, but it is bounded above by ln(e^x+e^x) = ln(2e^x) = ln(2)+x, and (ln(2)+x)/x -> 1. So f/g -> 1.
But my intention in asking you was that you take your algorithm out of the knapsack, apply it to the expressions, and *in that way* decide which function is greater!

Well, I'm interested in the behaviour of growth rates rather than specific function values, so by my definition, f and g are equivalent.

More interesting things can be said about functions like e<sup>(ln x)<sup>2</sup></sup>, though, which lie "between" the polynomials and the exponentials. Or, for that matter, e<sup>(ln x)(ln ln x)</sup>, or any of the functions involving (finite) iterates of the logarithm.

But the problem is not resolved thereby, only translated!
Because you cannot decide whether two such algorithms are equal. And hence you cannot enumerate representatives of the real numbers (which is somewhat the same as counting the real numbers in the standard model).

I suspect this problem is equivalent to the halting problem, right?

Hm, not exactly, I think. I am not completely sure whether you can enumerate all algorithms that halt for every input. Something in that direction.

The halting problem is undecidable. I was just wondering if you can reduce the comparison of two real number algorithms to the halting problem.

Postby PWrong » Fri Nov 24, 2006 7:45 am

Well, I'm interested in the behaviour of growth rates rather than specific function values, so by my definition, f and g are equivalent.


I wonder if there is a norm || || on the space of continuous functions that usually doesn't give infinity, such that ||f(x)|| > ||g(x)|| iff f grows faster than g. Is that the sort of thing you're looking for?

Postby quickfur » Fri Nov 24, 2006 5:31 pm

PWrong wrote:
Well, I'm interested in the behaviour of growth rates rather than specific function values, so by my definition, f and g are equivalent.


I wonder if there is a norm || || on the space of continuous functions that usually doesn't give infinity, such that ||f(x)|| > ||g(x)|| iff f grows faster than g. Is that the sort of thing you're looking for?

Yes. Your norms are what I call the "magnitude numbers". But I found that this norm cannot be adequately represented by the real numbers once you include super/sub-polynomial functions such as exponentials and logarithms. Instead, the respective norms of the latter must behave like infinite and infinitesimal numbers.

Re: tetration and higher operations

Postby bo198214 » Sat Nov 25, 2006 4:25 pm

quickfur wrote:
bo198214 wrote:Well, I would guess that ln(e^x+c)/x -> 1. Because obviously ln(e^x+c) > x, but it is bounded above by ln(e^x+e^x) = ln(2e^x) = ln(2)+x, and (ln(2)+x)/x -> 1. So f/g -> 1.
But my intention in asking you was that you take your algorithm out of the knapsack, apply it to the expressions, and *in that way* decide which function is greater!

Well, I'm interested in the behaviour of growth rates rather than specific function values, so by my definition, f and g are equivalent.

I think I was talking about growth rates. f and g have the same growth rate in the O-sense. And you couldn't yet supply an algorithm that decides, for two expressions, which one grows faster or whether they grow O-equally. The example was not intended for human reasoning but for the application of an algorithm. Knowing that they are O-equivalent was intended only for verifying the algorithm.

Apart from the "O-sense" there are of course many more possibilities for comparing the growth of two functions. As just one example (in addition to the Hardy definition), you could define g > f as g-f -> oo, where two functions have the same growth if |g-f| < c. This is equivalent to e<sup>g</sup>/e<sup>f</sup> -> oo. Analogously I could define g > f in another way as ln(g)/ln(f) -> oo, even yielding a whole comparison hierarchy ln(...(ln(g))...)/ln(...(ln(f))...) -> oo. But they all have in common that their g > f implies Hardy's g > f, while more and more functions become identified. For example, under ln(g)/ln(f) -> oo, already g=x<sup>2</sup> and f=x<sup>3</sup> are identified, because the ratio of the logarithms tends to 2/3 (or 3/2), not to oo. And under ln(ln(g))/ln(ln(f)) -> oo the functions g=x<sup>x<sup>2</sup></sup> and f=x<sup>x<sup>3</sup></sup> become identified, and maybe in yet another growth-rate definition x and e<sup>x</sup> become identified. So I would regard Hardy's comparison as quite non-arbitrary, in contrast to the other growth-rate definitions I know of.

The halting problem is undecidable. I was just wondering if you can reduce the comparison of two real number algorithms to the halting problem.
It doesn't look so to me. If I could decide the halting problem, why would I then be able to decide the equality of two given algorithms? Or merely whether one algorithm halts for every input? That's an order more difficult, and that's why I guess that the algorithms which halt for every input are not even enumerable (which is a weaker demand than being decidable). But I admit I was once a bit more versed in recursion theory than my current humble guesses suggest ;)

PWrong wrote:I wonder if there is a norm || || on the space of continuous functions that usually doesn't give infinity, such that ||f(x)|| > ||g(x)|| iff f grows faster than g. Is that the sort of thing you're looking for?

If you take only continuum-many functions (for example only analytic functions) which admit a linear order, such a map to the reals would exist. Though it would not behave well. At least it cannot satisfy ||fog|| = ||f|| ||g||, because then ||fog|| = ||gof||, and if we choose f(x)=x<sup>2</sup> and g(x)=e<sup>x</sup>, then gof=e<sup>x<sup>2</sup></sup> is surely greater than fog=e<sup>2x</sup>.
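
A one-limit sympy check of this asymmetry:

from sympy import symbols, exp, limit, oo

x = symbols('x')
f, g = x**2, exp(x)
print(limit(f.subs(x, g) / g.subs(x, f), x, oo))   # e^(2x)/e^(x^2) -> 0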

Re: tetration and higher operations

Postby quickfur » Sat Nov 25, 2006 5:17 pm

bo198214 wrote:
quickfur wrote:
bo198214 wrote:Well, I would guess that ln(e^x+c)/x -> 1. Because obviously ln(e^x+c) > x, but it is bounded above by ln(e^x+e^x) = ln(2e^x) = ln(2)+x, and (ln(2)+x)/x -> 1. So f/g -> 1.
But my intention in asking you was that you take your algorithm out of the knapsack, apply it to the expressions, and *in that way* decide which function is greater!

Well, I'm interested in the behaviour of growth rates rather than specific function values, so by my definition, f and g are equivalent.

I think I was talking about growth rates. f and g have the same growth rate in the O-sense. And you couldn't yet supply an algorithm that decides for two expressions which one grows faster or whether they grow O-equally. The example was not intended for human reasoning but for application of an algorithm. Knowing that they are O-equivalent was intended only for verifying the algorithm.
[...]

I didn't know you were asking for the specifics of the algorithm. I don't think I've actually sat down and formulated my method as a precise algorithm, but here are some of the key ingredients, at least as they apply to your examples:
- addition basically maps to maximization: mag(a+b) = max(mag(a),mag(b)). So mag(e<sup>x</sup> + C) = mag(e<sup>x</sup>) = E
- ln x is the inverse of e<sup>x</sup>, so mag(ln (e<sup>x</sup> + C)) = L*(max(E,0)) = L*E = 1.
- Now, (ln f(x))<sup>2</sup> = x<sup>2</sup> o ln f(x), and composition of sub-exponential functions maps to magnitude multiplication, so mag((ln(e<sup>x</sup>+C))<sup>2</sup>) = 2 * mag(ln(e<sup>x</sup>+C)) = 2*(L*max(E,0)) = 2*(L*E) = 2*1 = 2.
- Similarly, multiplication maps to addition of magnitude numbers, so mag(ln(e<sup>x</sup>+C)*ln(e<sup>x</sup>+D)) = L*(max(E,0)) + L*(max(E,0)) = L*E + L*E = 1 + 1 = 2.
- Therefore the two functions are O-equal.

Basically, the method of comparison is to assign magnitude numbers to the leaf nodes of the expression tree of the function (or if exponential composition is present, the smallest subtrees containing the compositions), and then recursively reduce it by known mappings of function operations to magnitude operations. This is very easy in the case of sub-exponential functions, but it gets a lot more hairy when exponential compositions are involved, because they do not map 1-to-1 to magnitude multiplication. Nevertheless, it is possible to evaluate most of these compositions.
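
As a sanity check of the verdict above, here is a quick numerical sketch in Python (the constants C, D are arbitrary), confirming that the two expressions are O-equal, i.e. their ratio tends to 1:

Code:
    import math

    C, D = 1.0, 3.0  # arbitrary positive constants
    for x in [10.0, 50.0, 200.0]:
        f = math.log(math.exp(x) + C) ** 2
        g = math.log(math.exp(x) + C) * math.log(math.exp(x) + D)
        print(x, f / g)   # tends to 1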

The halting problem is undecidable. I was just wondering if you can reduce the comparison of two real number algorithms to the halting problem.
It doesn't look like it to me. If I could decide the halting problem, why would I then be able to decide the equality of two given algorithms? Or merely whether one algorithm halts for each input? That's an order more difficult, and that's why I guess that the algorithms which halt for each input are not even enumerable (which is a weaker demand than being decidable). But I admit I was once a bit more versed in recursion theory than my current humble guesses suggest ;)

The connection need not be straightforward. For example, all NP-complete problems can be reduced to each other, although on the surface they are very different! Similarly, many problems can be reduced to the halting problem, even if on the surface they are very different. I was just asking if you knew of any connection between the halting problem and the algorithm comparison problem.

Re: tetration and higher operations

Postby bo198214 » Sun Nov 26, 2006 12:03 am

quickfur wrote:but here are some of the key ingredients, at least as they apply to your examples:
- addition basically maps to maximization: mag(a+b) = max(mag(a),mag(b)). So mag(e<sup>x</sup> + C) = mag(e<sup>x</sup>) = E
- ln x is the inverse of e<sup>x</sup>, so mag(ln (e<sup>x</sup> + C)) = L*(max(E,0)) = L*E = 1.
- Now, (ln f(x))<sup>2</sup> = x<sup>2</sup> o ln f(x), and composition of sub-exponential functions maps to magnitude multiplication, so mag((ln(e<sup>x</sup>+C))<sup>2</sup>) = 2 * mag(ln(e<sup>x</sup>+C)) = 2*(L*max(E,0)) = 2*(L*E) = 2*1 = 2.
- Similarly, multiplication maps to addition of magnitude numbers, so mag(ln(e<sup>x</sup>+C)*ln(e<sup>x</sup>+D)) = L*(max(E,0)) + L*(max(E,0)) = L*E + L*E = 1 + 1 = 2.
- Therefore the two functions are O-equal.

Yes, I expected something like that :) Though you applied the composition rule, which you claimed to be valid only for polynomials. As I already explained in my previous post, this rule causes havoc:
mag(id<sup>2</sup> o exp) = 2 * E = E * 2 = mag(exp o id<sup>2</sup>)
which is definitely false, because even in the O-comparison e<sup>x<sup>2</sup></sup> > e<sup>2x</sup>.
Or even worse:
2*E = mag(id<sup>2</sup> o exp) = mag(exp o (2id)) = E * (0+1) = E * 1

And how do you handle subtraction?
What growth rate does f(x) = x - e<sup>x</sup> have?
1 = mag(f + exp) = max(mag(f),mag(exp)) = max(mag(f),E)
It would follow that E = 1!

So either you have to make the magnitude numbers a very complicated algebraic structure, or ... you need a better algorithm, or both. I mean, there must be some simplification over reasoning by hand about what grows faster. Otherwise those magnitude numbers are nothing but other symbols for the same problem.

This is very easy in the case of sub-exponential functions, but it gets a lot more hairy when exponential compositions are involved, because they do not map 1-to-1 to magnitude multiplication. Nevertheless, it is possible to evaluate most of these compositions.

Exponential composition *was* involved in the example! So when is it allowed and when not?

The connection need not be straightforward. For example, all NP-complete problems can be reduced to each other, although on the surface they are very different! Similarly, many problems can be reduced to the halting problem, even if on the surface they are very different.

Yes, of course I know that.
I just explained my intuition that it cannot be reduced to the halting problem, just as the real numbers cannot be enumerated by the natural numbers. I was tending to assert that they occupy different positions in a hierarchy, see the Kleene hierarchy and the Turing degrees. Comparing two algorithms on every input is one more "for each" (or, for the negation, one more "exists") quantifier.

While the NP problems, for example, all have the structure "Does there exist one of 2<sup>N</sup> things with property P?" (P with polynomial complexity).

Re: tetration and higher operations

Postby quickfur » Sun Nov 26, 2006 5:48 am

bo198214 wrote:
quickfur wrote:but here are some of the key ingredients, at least as they apply to your examples:
- addition basically maps to maximization: mag(a+b) = max(mag(a),mag(b)). So mag(e<sup>x</sup> + C) = mag(e<sup>x</sup>) = E
- ln x is the inverse of e<sup>x</sup>, so mag(ln (e<sup>x</sup> + C)) = L*(max(E,0)) = L*E = 1.
- Now, (ln f(x))<sup>2</sup> = x<sup>2</sup> o ln f(x), and composition of sub-exponential functions maps to magnitude multiplication, so mag((ln(e<sup>x</sup>+C))<sup>2</sup>) = 2 * mag(ln(e<sup>x</sup>+C)) = 2*(L*max(E,0)) = 2*(L*E) = 2*1 = 2.
- Similarly, multiplication maps to addition of magnitude numbers, so mag(ln(e<sup>x</sup>+C)*ln(e<sup>x</sup>+D)) = L*(max(E,0)) + L*(max(E,0)) = L*E + L*E = 1 + 1 = 2.
- Therefore the two functions are O-equal.

Yes, I expected something like that :) Though you applied the composition rule, which you claimed to be valid only for polynomials. As I already explained in my previous post, this rule causes havoc:
mag(id<sup>2</sup> o exp) = 2 * E = E * 2 = mag(exp o id<sup>2</sup>)
which is definitely false, because even in the O-comparison e<sup>x<sup>2</sup></sup> > e<sup>2x</sup>.

You forgot that magnitude multiplication is non-commutative. 2*E is NOT equal to E*2. In general, E*k is greater than l*E for every polynomial magnitude l, if k > 1.

Or even worse:
2*E = mag(id<sup>2</sup> o exp) = mag(exp o (2id)) = E * (0+1) = E * 1

Well, this was a case where I said you can't simplify the exponent argument first. :) You have to split sums in the exponent into products: e<sup>2x<sup>2</sup>+x+1</sup> = e<sup>2x<sup>2</sup></sup>*e<sup>x</sup>*e<sup>1</sup>. Then you split the coefficients into polynomial composition: (x<sup>2</sup> o e<sup>x<sup>2</sup></sup>)*(x<sup>1</sup> o e<sup>x</sup>)*e<sup>1</sup>, which gives you the magnitude 2*E*2 + 1*E + 0 = 2*E*2 + E. (Note that you cannot simplify 2*E*2, because multiplication is non-commutative; otherwise you get a contradiction.)
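
A minimal sketch of that bookkeeping in Python (the helper name is just for illustration; it only handles a pure polynomial exponent given as (coefficient, power) pairs):

Code:
    def exponent_to_magnitude(poly):
        """e^(k1*x^r1 + k2*x^r2 + ...) -> 'k1*E*r1 + k2*E*r2 + ...'
        using e^(k*x^r) = x^k o e^(x^r), sorted by descending power."""
        terms = sorted(poly, key=lambda t: -t[1])
        return " + ".join(f"{k}*E*{r}" for k, r in terms)

    # 2x^2 + x + 1 as (coefficient, power) pairs:
    print(exponent_to_magnitude([(2, 2), (1, 1), (1, 0)]))
    # -> 2*E*2 + 1*E*1 + 1*E*0   (E*0 = mag(e^1) = 0, so the last term drops out)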

And how do you handle subtraction?
What growth rate does f(x) = x - e<sup>x</sup> have?
1 = mag(f + exp) = max(mag(f),mag(exp)) = max(mag(f),E)
It would follow that E = 1!

Um, the growth rate of x - e<sup>x</sup> is NOT 1 !!! The limit diverges to -infinity, which means that the magnitude is actually E. So there is no contradiction here.

So either you have to make the magnitude numbers a very complicated algebraic structure, or ... you need a better algorithm, or both. I mean, there must be some simplification over reasoning by hand about what grows faster. Otherwise those magnitude numbers are nothing but other symbols for the same problem.

Heh, maybe they are just symbols for the same problem. :) But the reason I studied them is not to simplify calculation, but to explore what kind of structure is created by different growth rates of functions.

This is very easy in the case of sub-exponential functions, but it gets a lot more hairy when exponential compositions are involved, because they do not map 1-to-1 to magnitude multiplication. Nevertheless, it is possible to evaluate most of these compositions.

Exponential composition *was* involved in the example! So when is it allowed and when not?

Exponential composition has to be handled with care. First you go to the deepest nested exponent term, which in general has the form e<sup>p(x)</sup>, where p(x) is some polynomial. (Logarithmic terms can appear in p(x), but for now we'll assume p(x) is polynomial.) Now, because multiplication with the exponential magnitude has the property of "magnifying" the operations (+,*,o), you cannot simply assign a magnitude number to p(x). So you have to rewrite it in the form e<sup>kx<sup>r</sup></sup>*e<sup>lx<sup>s</sup></sup>*e<sup>mx<sup>t</sup></sup>*... . Then you turn the coefficients k, l, m into power functions: e<sup>kx<sup>r</sup></sup> = x<sup>k</sup> o e<sup>x<sup>r</sup></sup>. Then you assign the magnitudes, which gives you a sort of "polynomial" in terms of E: k*E*r + l*E*s + m*E*t + ... etc.

Again, multiplication is non-commutative, so you cannot simplify these terms: k*E*r is not equal to E*kr or kr*E. But what you can do is sort the terms in descending order of the right-hand coefficients. Assuming r > s > t, the k*E*r term dominates the other terms. So if you have another function with a predominant term k'*E*r' and r>r', then you know that the second function is less than the first, because k*E*r > k'*E*r' for all k' when r > r'.

How does this simplify things? If both functions have magnitudes that expand to this form in terms of E, you only need to compare the predominant terms. Only if the predominant terms are equal do you look carefully at the smaller terms. In a sense, it's like a lexicographical sort.

With logarithms, more interesting things can happen, all based on the observation that the logarithm "de-magnifies" the operations, so that L*k = L. (But again, multiplication does not commute, so k*L != L*k. In fact, k*L > L if k>1.) So if the expansion gives you an expression like a*L*b*L*c*L*d, it can be reduced immediately to a*L<sup>3</sup>. And of course, if you get a sub-expression of the form L*k*E, it reduces to 1, because L*k=L and L*E = 1.

Now, in general, in the exponential "polynomial" you will get things that look like 5*E*(2+L) + 3*E*(1-2*L), etc. The parenthesized expressions act as a sort of "index" into the exponential hierarchy E*M (where M is some magnitude number); with an index of L, you get the polynomial magnitudes. Which means that for every index, you get a structure sort of like a "coset" of the polynomials. The indices are themselves magnitude numbers, and so are sorted the same way. Comparing these things is just a matter of lexicographical sorting, where you sort a*E*M in descending order by M, and then you compare coefficients lexicographically. Since M is linearly ordered, this comparison is well-defined, and quite simple in the general case where the predominant terms are unequal.

In a sense, M can be thought of as the "magnitude degree" (analogous to polynomial degree) of the magnitude, so if you get two magnitudes of unequal degree, you just compare degrees to get your answer. Only if the degrees are equal do you need to compare the coefficients on the first term. And only if those are also equal do you need to compare the smaller terms.
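
Here is a minimal sketch of that lexicographic comparison in Python, for the easy case where all the indices are plain reals (terms k*E*r with k, r positive; names illustrative):

Code:
    def compare(f_terms, g_terms):
        """f_terms, g_terms: lists of (k, r) meaning k*E*r, k, r positive reals.
        Returns 1, -1 or 0, comparing by degree r first, then coefficient k."""
        fs = sorted(f_terms, key=lambda t: (-t[1], -t[0]))
        gs = sorted(g_terms, key=lambda t: (-t[1], -t[0]))
        for (kf, rf), (kg, rg) in zip(fs, gs):
            if rf != rg:
                return 1 if rf > rg else -1  # higher degree wins, whatever k is
            if kf != kg:
                return 1 if kf > kg else -1  # equal degree: coefficients decide
        return (len(fs) > len(gs)) - (len(fs) < len(gs))  # leftover terms win

    # e^(x^3 + 5x^2) vs e^(100x^2 + x): degree 3 beats degree 2, so expect 1
    print(compare([(1, 3), (5, 2)], [(100, 2), (1, 1)]))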

The connection need not be straightforward. For example, all NP-complete problems can be reduced to each other, although on the surface they are very different! Similarly, many problems can be reduced to the halting problem, even if on the surface they are very different.

Yes, of course I know that.
I just explained my intuition that it cannot be reduced to the halting problem, just as the real numbers cannot be enumerated by the natural numbers. I was tending to assert that they occupy different positions in a hierarchy, see the Kleene hierarchy and the Turing degrees. Comparing two algorithms on every input is one more "for each" (or, for the negation, one more "exists") quantifier.

Ah, OK. I admit I know less of this stuff than I should, being a CS graduate. :)

While the NP problems, for example, all have the structure "Does there exist one of 2<sup>N</sup> things with property P?" (P with polynomial complexity).

Yes, I wasn't expecting NP problems to be related to the halting problem. :) I don't know about the Kleene hierarchy (maybe I didn't study well when I was still in school), so I don't know the different types of undecidable problems too well.

Re: tetration and higher operations

Postby bo198214 » Sun Nov 26, 2006 12:17 pm

quickfur wrote:
And how do you handle subtraction?
What growth rate does f(x) = x - e<sup>x</sup> have?
1 = mag(f + exp) = max(mag(f),mag(exp)) = max(mag(f),E)
It would follow that E = 1!

Um, the growth rate of x - e<sup>x</sup> is NOT 1 !!! The limit diverges to -infinity, which means that the magnitude is actually E. So there is no contradiction here.

No, no, the reasoning was: if we apply the rule mag(f+g)=max(mag(f),mag(g)), we can get into trouble. If 1=max(mag(f),E) and we know E>1, it is a contradiction.

But the reason I studied them is not to simplify calculation, but to explore what kind of structure is created by different growth rates of functions.

Ok, that's another thing :) And yes, indeed it's a complicated structure. For example, you mentioned the method of lexicographic comparison of sums of magnitudes. It is not always valid once you admit inverse functions (and I am not sure whether one can already construct a similar example with exp and ln alone); the following example popped up during my research in the coppice of power-iterated functions:

f(x):= x<sup>2</sup>
g := (x -> x<sup>x</sup>) <sup>-1</sup> o (id<sup>id<sup>2</sup></sup>)

What grows then faster, f or g?
Note btw that (x->x<sup>x</sup>)<sup>-1</sup> is expressible with the Lambert W function by ln(x)/W(ln(x)).

First we see that g > id, because
obviously id<sup>id<sup>2</sup></sup> > id<sup>id</sup> and then
(x->x<sup>x</sup>)<sup>-1</sup> o id<sup>id<sup>2</sup></sup> > id

If we now applied the lexicographic principle to f and g, we would conclude that g > id<sup>2</sup> = f.
But on the contrary f > g, as the following equivalences show:
id<sup>2</sup> > g
id<sup>2</sup> > (x-> x<sup>x</sup>)<sup>-1</sup> o id<sup>id<sup>2</sup></sup>
id<sup>id</sup> o id<sup>2</sup> > id<sup>id<sup>2</sup></sup>
id<sup>2id<sup>2</sup></sup> > id<sup>id<sup>2</sup></sup>

So g is a function that grows between x and x<sup>2</sup>. With magnitude numbers
2 > (E*(1+L))<sup>-1</sup>*E*(2+L) > 1
2 > (1+L)<sup>-1</sup>*E<sup>-1</sup>*E*(2+L) > 1
2 > (1+L)<sup>-1</sup>*(2+L) > 1

Which can also be seen directly by arithmetic on the magnitude numbers, via the equivalences:
(1+L)<sup>-1</sup>*(2+L) > 1
2+L > 1+L
1 > 0

2 > (1+L)<sup>-1</sup>*(2+L)
(1+L)*2 > 2+L
2+L*2 > 2+L
L*2 > L

Of course, in the O-sense we would have L*2 ~ L, and then 2 ~ (1+L)<sup>-1</sup>*(2+L). Or generally (a+L)<sup>-1</sup>*(b+L) ~ b/a, hm ...
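
For what it's worth, a numerical spot check (a Python sketch using scipy's Lambert W, assuming scipy is available): solving u<sup>u</sup> = x<sup>x<sup>2</sup></sup> means u ln u = x<sup>2</sup> ln x, so u = y/W(y) with y = x<sup>2</sup> ln x, and indeed g lands strictly between x and x<sup>2</sup>:

Code:
    import math
    from scipy.special import lambertw

    for x in [10.0, 100.0, 1000.0]:
        y = x**2 * math.log(x)
        g = y / lambertw(y).real   # the inverse of u*ln(u), applied to y
        print(f"x={x:>6}  g(x)={g:.4g}  x^2={x**2:.4g}  g/x^2={g/x**2:.3f}")
    # x < g(x) < x^2 throughout; the ratio g/x^2 appears to creep toward 1/2.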

So the magnitude numbers are an algebraic structure with the operations +, *, <sup>-1</sup> and a (linear?) order <, which contains the positive reals as a substructure and at least the additional element E, which is greater than all reals.
If we look at the valid rules, we see that it only barely fails to be a skew field.
f+g = g+f
f+(g+h)=(f+g)+h
1*f = f*1 = f
f*f<sup>-1</sup> = f<sup>-1</sup>*f = 1
f*(g*h)=(f*g)*h
(f+g)*h = f*h + g*h
if f < g then h*f < h*g, f*h < g*h, f+h < g+h


And from the algebraic standpoint I would strongly suggest not O-identifying functions. Otherwise you always have to fiddle with explaining
why 2*E = mag((e<sup>x</sup>)<sup>2</sup>) = mag(e<sup>2x</sup>) = E * mag(2x) = E * 1 = E does *not* hold, which would also destroy the group structure, because otherwise 2=1 would follow.

For example in the above computation, we have
L*2 ~ L
but not
E*L*2 ~ E*L
which would be equivalent to 2 ~ 1.

So we have
(a+L)<sup>-1</sup>*(b+L) ~ b/a
but not
E*(a+L)<sup>-1</sup>*(b+L) ~ E*(b/a) ? Rather E*(b/a) > E*(a+L)<sup>-1</sup>*(b+L) ? (for b>a)

More generally, for most pairs of functions with f > g in the Hardy order, we can put enough E's in front such that E*...*E*f > E*...*E*g in the O-order.

So I come back to my original example. We realized that
ln(1+exp(x))ln(3+exp(x)) is O-equal to ln(2+exp(x))ln(2+exp(x))
But what if we put some E's in front? Are they then still O-equal? If not, which one is greater?

Or how would you compare E*(L+1)*(L+3) with E*(L+2)*(L+2)?

Or E*(1+L)<sup>-1</sup>*(4+L) with E*(2+L)<sup>-1</sup>*(3+L)?


I admit I know less of this stuff than I should, being a CS graduate. :)

Probably only if you had taken a course in recursion theory.

Yes, I wasn't expecting NP problems to be related to the halting problem. :)


The interesting thing about P!=NP is that it looks to me like the question of whether there is a cardinality between the naturals and the reals.
I mean, in complexity theory we also have a hierarchy, and in particular
P < 2<sup>P</sup> (which I discovered while trying to prove P!=NP *lol*) and P <= NP <= 2<sup>P</sup>.
So I am really tempted to say that P!=NP is undecidable!

Re: tetration and higher operations

Postby quickfur » Sun Nov 26, 2006 5:26 pm

bo198214 wrote:
quickfur wrote:
And how do you handle subtraction?
What growth rate does f(x) = x - e<sup>x</sup> have?
1 = mag(f + exp) = max(mag(f),mag(exp)) = max(mag(f),E)
It would follow that E = 1!

Um, the growth rate of x - e<sup>x</sup> is NOT 1 !!! The limit diverges to -infinity, which means that the magnitude is actually E. So there is no contradiction here.

No, no, the reasoning was: if we apply the rule mag(f+g)=max(mag(f),mag(g)), we can get into trouble. If 1=max(mag(f),E) and we know E>1, it is a contradiction.

You are making an incorrect assumption about the maximization of magnitudes. 1=max(mag(f),E) is never true, because E>1. The thing about negative functions like -e<sup>x</sup> is that if you really want to treat positive and negative terms differently, then you need to introduce a new kind of complementation for magnitudes, so that mag(-e<sup>x</sup>) = complement(mag(e<sup>x</sup>)). This makes things even more complicated... so I decided to ignore the negative sign and treat both as E. Of course, this may not be the best way of doing things, but I haven't actually touched this stuff for years now. :oops:

But the reason I studied them is not to simplify calculation, but to explore what kind of structure is created by different growth rates of functions.

Ok, that's another thing :) And yes, indeed it's a complicated structure. For example, you mentioned the method of lexicographic comparison of sums of magnitudes. It is not always valid once you admit inverse functions (and I am not sure whether one can already construct a similar example with exp and ln alone); the following example popped up during my research in the coppice of power-iterated functions:

f(x):= x<sup>2</sup>
g := (x -> x<sup>x</sup>) <sup>-1</sup> o (id<sup>id<sup>2</sup></sup>)

Inverse functions have the property that mag(f) = 1/mag(f<sup>-1</sup>). At least this is true for polynomial functions and e<sup>x</sup>. I don't remember studying more complex function inverses.

What grows then faster, f or g?

Interesting question. I get mag(g) = (2+L)/(1+L), and of course mag(f) = 2. Since 2 > (2+k)/(1+k) for positive real numbers k, I assume that 2 > (2+L)/(1+L), since L is a positive infinitesimal. So my guess is that f grows faster. Am I right? :)

Note btw that (x->x<sup>x</sup>)<sup>-1</sup> is expressible with the Lambert W function by ln(x)/W(ln(x)). First we see that g > id, because
obviously id<sup>id<sup>2</sup></sup> > id<sup>id</sup> and then
(x->x<sup>x</sup>)<sup>-1</sup> o id<sup>id<sup>2</sup></sup> > id

If we now applied the lexicographic principle to f and g, we would conclude that g > id<sup>2</sup> = f.

Wait, how are you applying the lexicographic principle? When I talked about the lexicographic principle, it was in the context of magnitude expressions of the form a*E*M + b*E*N + c*E*O, etc. Lexicographic comparison only works in that context.

But on the contrary f > g, as the following equivalences show:
id<sup>2</sup> > g
id<sup>2</sup> > (x-> x<sup>x</sup>)<sup>-1</sup> o id<sup>id<sup>2</sup></sup>
id<sup>id</sup> o id<sup>2</sup> > id<sup>id<sup>2</sup></sup>
id<sup>2id<sup>2</sup></sup> > id<sup>id<sup>2</sup></sup>

So g is a function that grows between x and x<sup>2</sup>. With magnitude numbers
2 > (E*(1+L))<sup>-1</sup>*E*(2+L) > 1
2 > (1+L)<sup>-1</sup>*E<sup>-1</sup>*E*(2+L) > 1
2 > (1+L)<sup>-1</sup>*(2+L) > 1

Which can also be seen directly by arithmetic on the magnitude numbers, via the equivalences:
(1+L)<sup>-1</sup>*(2+L) > 1
2+L > 1+L
1 > 0

2 > (1+L)<sup>-1</sup>*(2+L)
(1+L)*2 > 2+L
2+L*2 > 2+L
L*2 > L

Of course, in the O-sense we would have L*2 ~ L, and then 2 ~ (1+L)<sup>-1</sup>*(2+L). Or generally (a+L)<sup>-1</sup>*(b+L) ~ b/a, hm ...

That is correct. In general, L<sup>m</sup> < L<sup>n</sup> if m>n (deeper iterated logs grow more slowly). The set {L<sup>n</sup>} forms the basis of a kind of infinitesimal hierarchy of magnitude numbers, all of which are between 0 and all positive reals.

So the magnitude numbers are an algebraic structure with the operations +, *, <sup>-1</sup> and a (linear?) order <, which contains the positive reals as a substructure and at least the additional element E, which is greater than all reals.
If we look at the valid rules, we see that it only barely fails to be a skew field.

That is true.

Regarding the linearity of <, it is only true if you include polynomials and exponentials/logarithms (or maybe tetration/iterated log, and higher operations). If you include periodic functions, then it is no longer linear. I have tried to preserve linearity for compositions with some periodic functions, but it is messy, and doesn't extend well to cover most of the cases.

f+g = g+f
f+(g+h)=(f+g)+h
1*f = f*1 = f
f*f<sup>-1</sup> = f<sup>-1</sup>*f = 1
f*(g*h)=(f*g)*h
(f+g)*h = f*h + g*h
if f < g then h*f < h*g, f*h < g*h, f+h < g+h


And from the algebraic standpoint I would strongly suggest not O-identifying functions. Otherwise you always have to fiddle with explaining
why 2*E = mag((e<sup>x</sup>)<sup>2</sup>) = mag(e<sup>2x</sup>) = E * mag(2x) = E * 1 = E does *not* hold, which would also destroy the group structure, because otherwise 2=1 would follow.

Well, this is precisely the area where left-multiplication by E is multivalued. More on this below.

For example in the above computation, we have
L*2 ~ L
but not
E*L*2 ~ E*L
which would be equivalent to 2 ~ 1.

Correct. So L*k ~ L, and k is insignificant in this case. But E has a "magnification" property, where insignificant operations get "magnified", so that E*L < E*L*2 even though L ~ L*2.

It is interesting to note that L has the opposite effect: L*2 ~ L even though 1 < 2 (L "demagnifies" the 2).

Having said all this, though, I wondered after deriving it all whether I really should have kept all the terms intact and defined an equivalence over them without throwing any terms away, because even for polynomials, left-composing with e<sup>x</sup> causes the non-degree terms to make a difference. In the end, it looks like the magnitude number notation is really only hiding the details behind symbols without really simplifying anything.

So we have
(a+L)<sup>-1</sup>*(b+L) ~ b/a

This is not true. In general, (b+L)/(a+L) < b/a.

but not
E*(a+L)<sup>-1</sup>*(b+L) ~ E*(b/a) ? Rather E*(b/a) > E*(a+L)<sup>-1</sup>*(b+L) ? (for a < b)

(Note: I switched the sense of comparison above because phpBB is being stupid about dealing with angle brackets.)

In this case it's OK, but you're right that there are cases such as mag(x<sup>2</sup>+x) = mag(x<sup>2</sup>), but mag(e<sup>x<sup>2</sup>+x</sup>) != mag(e<sup>x<sup>2</sup></sup>).

More generally, for most pairs of functions with g < f in the Hardy order, we can put enough E's in front such that E*...*E*g < E*...*E*f in the O-order.

(Note: switching the sense of comparison again because phpBB is being stupid.)

You are right, I have noticed this also. This was after I had derived most of the algebraic rules. :) In a sense, O-identification is simply disregarding details below a certain threshold, and E has the property that it "magnifies" these details above the threshold. So perhaps using Hardy's criterion is more consistent. :)

Although, now that I looked at this again, with my current scheme, many cases can already be sorted out, and maybe with a better refinement it can generalize to something that can handle Hardy equivalence.

So I come back to my original example. We realized that
ln(1+exp(x))ln(3+exp(x)) is O-equal to ln(2+exp(x))ln(2+exp(x))
But what if we put some E's in front? Are they then still O-equal? If not, which one is greater?

Or how would you compare E*(L+1)*(L+3) with E*(L+2)*(L+2)?

Good question. I'm pretty sure they should still be O-equal, although I may be wrong.

Or E*(1+L)<sup>-1</sup>*(4+L) with E*(2+L)<sup>-1</sup>*(3+L)?

In this case, definitely E*(1+L)<sup>-1</sup>*(4+L) is greater.

I admit I know less of this stuff than I should, being a CS graduate. :)

Probably only if you had taken a course in recursion theory.

Well, computability theory is a required course, so I've definitely seen this stuff before, but I don't remember very much of it.

Yes, I wasn't expecting NP problems to be related to the halting problem. :)


The interesting thing about P!=NP is that it looks to me like the question of whether there is a cardinality between the naturals and the reals.
I mean, in complexity theory we also have a hierarchy, and in particular
P < 2<sup>P</sup> (which I discovered while trying to prove P!=NP *lol*) and P <= NP <= 2<sup>P</sup>.
So I am really tempted to say that P!=NP is undecidable!

That's an interesting hypothesis. Maybe that's why nobody has managed to prove it either way yet. :)

Re: tetration and higher operations

Postby bo198214 » Sun Nov 26, 2006 9:31 pm

quickfur wrote:You are making an incorrect assumption about the maximization of magnitudes.

I merely applied your rule mag(f+g)=max(mag(f),mag(g)). Is it wrong?

so I decided to ignore the negative sign and treat both as E. Of course, this may not be the best way of doing things, but I haven't actually touched this stuff for years now. :oops:

My suggestion would anyway be not to treat addition as primitive. If you want addition, then simply express it via multiplication: f+g=ln(e<sup>f</sup> e<sup>g</sup>), so mag(f+g)=L*(E*mag(f)+E*mag(g)); that should work flawlessly in any situation. Especially mag(-f)=L*(-(E*mag(f))) ...

What grows then faster, f or g?

Interesting question. I get mag(g) = (2+L)/(1+L), and of course mag(f) = 2. Since 2 > (2+k)/(1+k) for positive real numbers k, I assume that 2 > (2+L)/(1+L), since L is a positive infinitesimal. So my guess is that f grows faster. Am I right? :)

At least it's equal to what I proved some lines below ;). But be careful with (2+L)/(1+L): the correct way to write it is (1+L)<sup>-1</sup>*(2+L); multiplication is not commutative.

First we see that g > id, because
obviously id<sup>id<sup>2</sup></sup> > id<sup>id</sup> and then
(x->x<sup>x</sup>)<sup>-1</sup> o id<sup>id<sup>2</sup></sup> > id

If we now applied the lexicographic principle to f and g, we would conclude that g > id<sup>2</sup> = f.

Wait, how are you applying the lexicographic principle? When I talked about the lexicographic principle, it was in the context of magnitude expressions of the form a*E*M + b*E*N + c*E*O, etc. Lexicographic comparison only works in that context.


I compare g and id<sup>2</sup> by
mag(g) ? 1 + 1
E*L*mag(g) ? E*L*1 + E*L*1

and we know that L*mag(g) > L*1, because mag(g) > 1. By the lexicographic rule, then, E*L*mag(g) > E*L*1 + E*L*1.
So the lexicographic rule leads to a contradiction. (Except when M,N,O are polynomial magnitudes, or at least of a certain structure.)

So we have
(a+L)<sup>-1</sup>*(b+L) ~ b/a

This is not true. In general, (b+L)/(a+L) < b/a.

*raises eyebrow* We have the following equivalences:

b/a ~ (a+L)<sup>-1</sup>*(b+L)
(a+L)*(b/a) ~ b+L
a*(b/a)+L*(b/a) ~ b+L
b + L*(b/a) ~ b+L which is true because L*(b/a) ~ L

In a sense, O-identification is simply disregarding details below a certain threshold, and E has the property that it "magnifies" these details above the threshold. So perhaps using Hardy's criterion is more consistent. :)

It allows in the first place algebraic rules like the group structure of composition. I was hoping that we could define "<" by algebraic rules (which would not even imply decidability). And I don't know yet whether that is possible. Unfortunately, the simplification of omitting insignificant terms destroys the algebraic structure.

So I come back to my original example. We realized that
ln(1+exp(x))ln(3+exp(x)) is O-equal to ln(2+exp(x))ln(2+exp(x))
But what if we put some E's in front? Are they then still O-equal? If not, which one is greater?

Or how would you compare E*(L+1)*(L+3) with E*(L+2)*(L+2)?

Good question. I'm pretty sure they should still be O-equal, although I may be wrong.

Even the last one? It is not equivalent to the previous one. I moulded it after your "more interesting" expression e<sup>ln(ln(x))ln(x)</sup>, which has magnitude E*(L*L+L) = E*(L+1)*L.

Look, if they are not equal (and they are not equal), I would simply apply E* on the left until they are no longer O-equal. So if there is indeed a linear order on these expressions (and Hardy showed that), one of the two *must* become greater than the other. The evil thing is just that we have no algebraic clue how to derive it. But saying they are O-equal is just sticking one's head in the sand like the ostrich *lol*.

In this case I would guess, by analogy with the real numbers, that
(L+2)*(L+2) is bigger (products with equal sum of factors are biggest when the difference of the factors is smallest), but I cannot prove it with the established algebraic rules (and I guess it is not provable by those rules). So either we invent some new rules that would decide that problem, or the algebraic approach is nice but not very far-reaching.
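
A numerical spot check supports this guess (a Python sketch using mpmath for precision; the difference is tiny, so ordinary floats are not to be trusted here):

Code:
    from mpmath import mp, mpf, exp, log

    mp.dps = 60  # the difference appears to decay roughly like -x*e^(-2x)
    for x in [5, 10, 20]:
        ex = exp(mpf(x))
        diff = log(1 + ex) * log(3 + ex) - log(2 + ex) ** 2
        print(x, diff)   # negative throughout, so ln(2+e^x)^2 is finally bigger

Note that the difference also seems to tend to 0, so a single E* in front would apparently still leave the two O-equal; it looks like it takes more than one.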

Or E*(1+L)<sup>-1</sup>*(4+L) with E*(2+L)<sup>-1</sup>*(3+L)?

In this case, definitely E*(1+L)<sup>-1</sup>*(4+L) is greater.

I mean, I agree with you that if we increase the numerator and decrease the denominator, the damn thing should become bigger. So this was my fault *ggg* (giving you too easy exercises :P)

(And I reformulate it as E*(1+L)<sup>-1</sup>*(2+L) > E*(2+L)<sup>-1</sup>*(3+L)? Answer with proof please ;) )

Re: tetration and higher operations

Postby quickfur » Mon Nov 27, 2006 12:21 am

bo198214 wrote:
quickfur wrote:You are making an incorrect assumption about the maximization of magnitudes.

I merely applied your rule mag(f+g)=max(mag(f),mag(g)). Is it wrong?

The rule is not wrong, but I think you made a wrong assumption that max(mag(x),mag(-e<sup>x</sup>))=mag(x).

The problem here is that the O-notation was developed for analysing algorithmic complexity, which is always positive-valued. Functions like x-e<sup>x</sup> are not accounted for, so one has to decide: do I want to distinguish between the magnitudes of e<sup>x</sup> and -e<sup>x</sup>, or do I want to regard them as instances of the exponential magnitude K*e<sup>x</sup>? If you choose to distinguish them, then it is not true that mag(f+g) = max(mag(f),mag(g)). Otherwise, the maximization rule works, but then mag(-e<sup>x</sup>) = mag(e<sup>x</sup>) > mag(x).

so I decided to ignore the negative sign and treat both as E. Of course, this may not be the best way of doing things, but I haven't actually touched this stuff for years now. :oops:

My suggestion would anyway be not to treat addition as primitive. If you want addition, then simply express it via multiplication: f+g=ln(e<sup>f</sup> e<sup>g</sup>), so mag(f+g)=L*(E*mag(f)+E*mag(g)); that should work flawlessly in any situation. Especially mag(-f)=L*(-(E*mag(f))) ...

Hmm, you are right. Whether or not you consider mag(e<sup>x</sup>) = mag(-e<sup>x</sup>), you have to deal with the case e<sup>-e<sup>x</sup></sup>: is that to be understood as E*E? Obviously, we cannot do that, since that leads to a contradiction. So the O-equivalence analysis breaks down once you include E in the algebra.

What grows then faster, f or g?

Interesting question. I get mag(g) = (2+L)/(1+L), and of course mag(f) = 2. Since 2 > (2+k)/(1+k) for positive real numbers k, I assume that 2 > (2+L)/(1+L), since L is a positive infinitesimal. So my guess is that f grows faster. Am I right? :)

At least it's equal to what I proved some lines below ;). But be careful with (2+L)/(1+L): the correct way to write it is (1+L)<sup>-1</sup>*(2+L); multiplication is not commutative.

Hehe, you're right. I think I myself am careless in this regard. Noncommutative multiplication has the unfortunate side effect that there are also two distinct divisions.

First we see that g > id, because
obviously id<sup>id<sup>2</sup></sup> > id<sup>id</sup> and then
(x->x<sup>x</sup>)<sup>-1</sup> o id<sup>id<sup>2</sup></sup> > id

If we now applied the lexicographic principle to f and g, we would conclude that g > id<sup>2</sup> = f.

Wait, how are you applying the lexicographic principle? When I talked about the lexicographic principle, it was in the context of magnitude expressions of the form a*E*M + b*E*N + c*E*O, etc. Lexicographic comparison only works in that context.


I compare g and id<sup>2</sup> by
mag(g) ? 1 + 1
E*L*mag(g) ? E*L*1 + E*L*1

and we know that L*mag(g) > L*1, because mag(g) > 1. By the lexicographic rule, then, E*L*mag(g) > E*L*1 + E*L*1.

Wait, you are not applying the comparison correctly. You have L*mag(g) > L*1, and therefore E*L*mag(g) > E*L*1. But this says nothing about E*L*1 + E*L*1. Before applying lexicographic comparison, obviously you have to collapse E*L*1 + E*L*1 = 2*E*L*1, in which case there is no contradiction.

[...]
So we have
(a+L)<sup>-1</sup>*(b+L) ~ b/a

This is not true. In general, (b+L)/(a+L) < b/a.

*raises eyebrow* We have the following equivalences:

b/a ~ (a+L)<sup>-1</sup>*(b+L)
(a+L)*(b/a) ~ b+L
a*(b/a)+L*(b/a) ~ b+L
b + L*(b/a) ~ b+L which is true because L*(b/a) ~ L

Ahhh, I see it now. The derivation from (a+L)*(b/a) ~ b+L to b/a ~ (a+L)<sup>-1</sup>*(b+L) is the problematic step. The intermediate step is (a+L)<sup>-1</sup>*(a+L)*(b/a) ~ (a+L)<sup>-1</sup>*(b+L). I am not sure about the behaviour of inverses involving exponential/logarithmic terms, and it appears that in this case that is what leads to a contradiction. For example, even though L*k ~ L, we cannot say E*L*k = E*L, because e<sup>ln x<sup>3</sup></sup> = x<sup>3</sup> o e<sup>ln x</sup> = x<sup>3</sup>. The "lensing" effect L*k ~ L seems to be an artifact of O-identification, which breaks down when you apply E to it (or analogously, the inverse of a magnitude involving L).

In a sense, O-identification is simply disregarding details below a certain threshold, and E has the property that it "magnifies" these details above the threshold. So perhaps using Hardy's criterion is more consistent. :)

It allows in the first place algebraic rules like the group structure of composition. I was hoping that we could define "<" by algebraic rules (which would not even imply decidability). And I don't know yet whether that is possible. Unfortunately, the simplification of omitting insignificant terms destroys the algebraic structure.

You are right, disregarding insignificant terms destroys the algebraic structure.

All of this discussion gives me a new perspective on an idea I already had: I had realized that using O-identification often causes trouble, because an infinite magnitude like E can cause omitted terms to become significant. Also, I noticed that the lexicographic comparisons I was talking about behave a lot like polynomials: there is a "degree" and a most-significant term, and lesser terms which are usually safe to omit. So it seems that we are looking at a polynomial-like structure at the exponential level. And, as you have noted, you can always add E's to the expression until the O-ordering becomes equivalent to the Hardy comparison. So it seems that there are different levels of comparison: the O-comparison gives you a crude ordering where you distinguish between polynomial degrees, E, and L, but if O-comparison says O(f)=O(g), then you have to refine your comparison and compare O(e<sup>f</sup>) and O(e<sup>g</sup>). This is essentially equivalent to comparing the original functions term by term (since left-multiplication by E causes + to have the significance of *, and * to have the significance of composition, etc.). This can be repeated indefinitely until the functions are either distinguished or shown equal. To use an example that we have discussed: mag(ln x<sup>k</sup>) = mag(k ln x) = L ~ mag(ln x), but if you exponentiate either side, the distinction between them becomes clear: mag(e<sup>ln x<sup>k</sup></sup>) = mag(e<sup>k ln x</sup>) = mag(x<sup>k</sup>)*mag(e<sup>ln x</sup>) = k, but mag(e<sup>ln x</sup>) = 1, so ln x<sup>k</sup> should be regarded as greater than ln x. Similarly, ln(x+1) can be seen to be greater than ln x even though they are O-equal.

In other words, O-comparison is at a coarser level of granularity, where you see polynomials as reals and exponentials as infinite numbers and logarithms as infinitesimals. But to get a true answer, you have to apply finer levels of comparison. Left-composition with e<sup>x</sup> works if you only go as far as exponentials and logarithms; but once you get into tetrations and iterated logs, this is not enough to decide equivalence (e.g., consider the difference between ilog(x+1) and ilog(x): they will be O-equal for all finite exponents).

In the end, it seems that O-equivalence is a good way to quickly compare two functions that are sufficiently different that O-comparison will tell you which one is greater. But for functions that are very close, you will need another method to decide between them.
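
A tiny numerical illustration of this "refine by exponentiating" idea, on the ln x<sup>k</sup> vs ln x example above (a Python sketch):

Code:
    import math

    # Level 0: ln(x^3)/ln(x) stays at the constant 3 (O-equal).
    # Level 1: e^(ln x^3)/e^(ln x) = x^3/x = x^2 diverges, separating them.
    for x in [1e2, 1e4, 1e6]:
        level0 = math.log(x**3) / math.log(x)
        level1 = math.exp(math.log(x**3)) / math.exp(math.log(x))
        print(f"x={x:.0e}  level0={level0:.2f}  level1={level1:.3g}")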

So I come back to my original example. We realized that
ln(1+exp(x))ln(3+exp(x)) is O-equal to ln(2+exp(x))ln(2+exp(x))
But what if we put some E's in front? Are they then still O-equal? If not, which one is greater?

Or how would you compare E*(L+1)*(L+3) with E*(L+2)*(L+2)?

Good question. I'm pretty sure they should still be O-equal, although I may be wrong.

Even the last one? It is not equivalent to the previous one. I moulded it after your "more interesting" expression e<sup>ln(ln(x))ln(x)</sup>, which has magnitude E*(L*L+L) = E*(L+1)*L.

Look, if they are not equal (and they are not equal), I would simply apply E* on the left until they are no longer O-equal. So if there is indeed a linear order on these expressions (and Hardy showed that), one of the two *must* become greater than the other. The evil thing is just that we have no algebraic clue how to derive it. But saying they are O-equal is just sticking one's head in the sand like the ostrich *lol*.

You are right, so if they are not O-equal, then we can decide between them easily. But if they are O-equal, then we need to use a finer comparison to distinguish between them.

In this case I would guess, by analogy with the real numbers, that
(L+2)*(L+2) is bigger (products with equal sum of factors are biggest when the difference of the factors is smallest), but I cannot prove it with the established algebraic rules (and I guess it is not provable by those rules). So either we invent some new rules that would decide that problem, or the algebraic approach is nice but not very far-reaching.

Hmm. I guess the O-comparison at least gives you the ballpark of where a function lies, but it doesn't distinguish between two functions in the same ballpark. :\

Or E*(1+L)<sup>-1</sup>*(4+L) with E*(2+L)<sup>-1</sup>*(3+L)?

In this case, definitely E*(1+L)<sup>-1</sup>*(4+L) is greater.

I mean, I agree with you that if we increase the numerator and decrease the denominator, the damn thing should become bigger. So this was my fault *ggg* (giving you too easy exercises :P)

(And I reformulate it as E*(1+L)<sup>-1</sup>*(2+L) > E*(2+L)<sup>-1</sup>*(3+L)? Answer with proof please ;) )

Hehe... you realize that, in real numbers, (2+k)/(1+k) > (3+k)/(2+k) when k is small enough, right? So if k is infinitesimal like L, my guess is that E*(1+L)<sup>-1</sup>*(2+L) is greater. I think I'm right. So, even though O-comparison has problems (as we see in the above discussion), you can still get a pretty accurate picture of the real answers in many cases.

Now, a much harder question would be, which one is greater: mag(f)=E*(1+L)<sup>-1</sup>*(2+L), or mag(g)=E*(2+L)*(1+L)<sup>-1</sup>? My guess is that f is greater, but I have no way to prove this.

Re: tetration and higher operations

Postby bo198214 » Mon Nov 27, 2006 1:18 pm

quickfur wrote:
bo198214 wrote:I merely applied your rule mag(f+g)=max(mag(f),mag(g)). Is it wrong?

The rule is not wrong, but I think you made a wrong assumption that max(mag(x),mag(-e<sup>x</sup>))=mag(x).

Look at the original posting:
f(x):= x - e<sup>x</sup> hence f+exp = id, then
1 = mag(f + exp) = max(mag(f),mag(exp)) = max(mag(f),E)
I merely applied your addition rule in the middle and it then leads to a contradiction, no additional assumptions are made.

The problem here is that the O-notation was developed for analysing algorithmic complexity, which is always positive-valued.

And I would propose that we indeed only regard (finally) strictly increasing functions, which is automatically the case if we don't allow subtraction and division. But inverses of functions (again strictly increasing) are allowed and are then always possible.

Whether or not you consider mag(e<sup>x</sup>) = mag(-e<sup>x</sup>), you have to deal with the case e<sup>-e<sup>x</sup></sup>: is that to be understood as E*E? Obviously, we cannot do that, since that leads to a contradiction. So the O-equivalence analysis breaks down once you include E in the algebra.

That would be circumvented by what I said above (no division).

Noncommutative multiplication has the unfortunate side effect that there are also two distinct divisions.

Oh there is nothing mysterious about non-commutative multiplication. It is just a group structure. Instead of speaking of division the natural way is speaking of inverses, and there is only one inversion function in a group (and x<sup>-1</sup>x=xx<sup>-1</sup>=1).
If we additionally would drop the requirement of associativity but still want to be able of kind of inversion, then we got a quasigroup or loop (with identity). *There* one have to distinguish between left and right division.

Wait, you are not applying the comparison correctly. You have L*mag(g) > L*1, and therefore E*L*mag(g) > E*L*1. But this says nothing about E*L*1 + E*L*1. Before applying lexicographic comparison, obviously you have to collapse E*L*1 + E*L*1 = 2*E*L*1, in which case there is no contradiction.

Wasn't your lexicographic rule that if we compare
E*M<sub>1</sub> + ... + E*M<sub>m</sub> ? E*N<sub>1</sub> + ... + E*N<sub>n</sub>
we simply need to compare the greatest M<sub>k</sub> with the greatest N<sub>l</sub>? And if one is greater, then the whole corresponding expression on that side is greater? (Which is indeed true if the M<sub>i</sub>, N<sub>i</sub> are the magnitudes of polynomials, i.e. reals.)

Ahhh, I see it now. The derivation from (a+L)*(b/a) ~ b+L to b/a ~ (a+L)<sup>-1</sup>*(b+L) is the problematic step.

Ok, I see now. But if we have (at least in the Hardy order)
2 > (1+L)<sup>-1</sup>*(2+L)
(1+L)*2 > 2+L
2+L*2 > 2+L
and we also have that
(1+L)<sup>-1</sup>*(2+L) > 1

Then I am a bit puzzled: at which point r is (1+L)<sup>-1</sup>*(2+L) ~ r?

Is there a flaw in my thinking? If the order is linear and dense and a magnitude M lies between two reals, then M ~ r for some real number r. I mean, each real number has to be comparable with M, so we can consider the set G of all reals greater than M and the set L of all reals lower than M. Then the infimum of G must equal the supremum of L, and for this number r we have M ~ r.

There must even be a sequence of finer and finer measures r<sub>n</sub> if we prefix by E* n times:
E<sup>n</sup>*(1+L)<sup>-1</sup>*(2+L) ~ E<sup>n</sup>*r<sub>n</sub>
because the magnitudes of the form E<sup>n</sup>*r (r arbitrary, n fixed) are also linearly and densely ordered, and
E<sup>n</sup>*2 > E<sup>n</sup>*(1+L)<sup>-1</sup>*(2+L) > E<sup>n</sup>*1

So what is this mysterious number r for (1+L)<sup>-1</sup>*(2+L), or more generally for (a+L)<sup>-1</sup>*(b+L) with b>a? And what does the sequence r<sub>n</sub> look like?
I think (if I made no error) this is a very challenging question. I am already excited :D

But for functions that are very close, you will need another method to decide between them.

The Hardy order does this. (At least if we only regard expressions built from +,-,*,/,exp,ln.) If of two functions neither is greater than the other, then they are finally equal (so it is a kind of finest possible order, and it allows the algebraic structure).


(And I reformulate it as E*(1+L)<sup>-1</sup>*(2+L) > E*(2+L)<sup>-1</sup>*(3+L)? Answer with proof please ;) )

Hehe... you realize that, in real numbers, (2+k)/(1+k) > (3+k)/(2+k) when k is small enough, right? So if k is infinitesimal like L, my guess is that E*(1+L)<sup>-1</sup>*(2+L) is greater.

That would be my guess too. The question for me is what additional algebraic rules we need so that we can infer it by computation. Likewise in the next question.

Now, a much harder question would be, which one is greater: mag(f)=E*(1+L)<sup>-1</sup>*(2+L), or mag(g)=E*(2+L)*(1+L)<sup>-1</sup>? My guess is that f is greater, but I have no way to prove this.

Indeed a very interesting question. I will ask Maple about it and then make another post.

Re: tetration and higher operations

Postby quickfur » Mon Nov 27, 2006 4:32 pm

bo198214 wrote:
quickfur wrote:
bo198214 wrote:I merely applied your rule mag(f+g)=max(mag(f),mag(g)). Is it wrong?

The rule is not wrong, but I think you made a wrong assumption that max(mag(x),mag(-e<sup>x</sup>))=mag(x).

Look at the original posting:
f(x):= x - e<sup>x</sup> hence f+exp = id, then
1 = mag(f + exp) = max(mag(f),mag(exp)) = max(mag(f),E)
I merely applied your addition rule in the middle and it then leads to a contradiction, no additional assumptions are made.

The maximization is not the maximization of the function, but the maximization of the magnitude. I consider mag(e<sup>x</sup>) to be equal to mag(-e<sup>x</sup>), so max(mag(x),mag(-e<sup>x</sup>)) = mag(-e<sup>x</sup>). There is no contradiction here.

The problem here is that the O-notation was developed for analysing algorithmic complexity, which is always positive-valued.

And I would propose that we indeed only regard (finally) strictly increasing functions, which is automatically the case if we don't allow subtraction and division. But inverses of functions (again strictly increasing) are allowed and are then always possible.

Actually, functions that converge to 0 are also worth studying. Negative magnitudes are those that converge to 0. For example, a magnitude of -1 corresponds with 1/x, and a magnitude of -2 corresponds with 1/x<sup>2</sup>, which collapses to 0 faster than 1/x. So you get the dual kind of structure, where if two magnitudes M and N are negative, then the more negative magnitude is the one that converges to 0 faster. Addition of magnitudes is consistent between positive and negative magnitudes.

Note that negative magnitudes correspond with reciprocation, so to deal with negative functions, you have to either consider them equal to their positive counterparts (like I do), or else you introduce a "hypernegativity" to the magnitude numbers, a new kind of complementation, where complement(M) is less than N for all non-complements N (including negative magnitudes).

Whether or not you consider mag(e<sup>x</sup>) = mag(-e<sup>x</sup>), you have to deal with the case e<sup>-e<sup>x</sup></sup>: is that to be understood as E*E? Obviously, we cannot do that, since that leads to a contradiction. So the O-equivalence analysis breaks down once you include E in the algebra.

That would be circumvented by what I said above (no division).
I don't see a problem with division, really. They are just negative magnitudes.

[...]
Wait, you are not applying the comparison correctly. You have L*mag(g) > L*1, and therefore E*L*mag(g) > E*L*1. But this says nothing about E*L*1 + E*L*1. Before applying lexicographic comparison, obviously you have to collapse E*L*1 + E*L*1 = 2*E*L*1, in which case there is no contradiction.

Wasn't your lexicographic rule that if we compare
E*M<sub>1</sub> + ... + E*M<sub>m</sub> ? E*N<sub>1</sub> + ... + E*N<sub>n</sub>
we simply need to compare the greatest M<sub>k</sub> with the greatest N<sub>l</sub>? And if one is greater, then the whole corresponding expression on that side is greater? (Which is indeed true if the M<sub>i</sub>, N<sub>i</sub> are the magnitudes of polynomials, i.e. reals.)

It is still true, but the comparison is invalid if you arbitrarily halve it! If you get E*L*1 + E*L*1, you must consider it as 2*E*L*1, otherwise how can you consistently do a lexicographic comparison?? This breaks on all lexicographic comparisons, not just mine!

Ahhh, I see it now. The derivation from (a+L)*(b/a) ~ b+L to b/a ~ (a+L)<sup>-1</sup>*(b+L) is the problematic step.

Ok, I see now. But if we have (at least in the Hardy order)
2 > (1+L)<sup>-1</sup>*(2+L)
(1+L)*2 > 2+L
2+L*2 > 2+L
and we also have that
(1+L)<sup>-1</sup>*(2+L) > 1

Then I am a bit puzzled: at which point r is (1+L)<sup>-1</sup>*(2+L) ~ r?

Is there a flaw in my thinking? If the order is linear and dense and a magnitude M lies between two reals, then M ~ r for some real number r.

This is not true for the magnitude numbers. Otherwise, you could say that O(x ln x)=O(x), which is wrong.

I mean, each real number has to be comparable with M, so we can consider the set G of all reals greater than M and the set L of all reals lower than M. Then the infimum of G must equal the supremum of L, and for this number r we have M ~ r.

And how do you define infimum/supremum over the set of magnitudes? If you use the real number definition, then r != M, because M is a non-real magnitude number. If you define infimum/supremum over the magnitude numbers, then inf(G) and sup(L) are not well-defined, because there is an infinitude of magnitudes between G and L.

There must even be a sequence of finer and finer measures r<sub>n</sub> if we prefix by E* n times:
E<sup>n</sup>*(1+L)<sup>-1</sup>*(2+L) ~ E<sup>n</sup>*r<sub>n</sub>
because the magnitudes of the form E<sup>n</sup>*r (r arbitrary, n fixed) are also linearly and densely ordered, and
E<sup>n</sup>*2 > E<sup>n</sup>*(1+L)<sup>-1</sup>*(2+L) > E<sup>n</sup>*1

So what is this mysterious number r for (1+L)<sup>-1</sup>*(2+L), or more generally for (a+L)<sup>-1</sup>*(b+L) with b>a? And what does the sequence r<sub>n</sub> look like?
I think (if I made no error) this is a very challenging question. I am already excited :D

Wait, are you trying to find a real number for r??? That's impossible, because you are dealing with magnitude numbers, not real numbers. There is no guarantee that r is real. And for a magnitude number r, you already have the expression that gives r<sub>n</sub>: (1+L)<sup>-1</sup>*(2+L). I don't see what the problem is here.

But for functions that are very close, you will need another method to decide between them.

The Hardy order does this. (At least if we only regard expressions built from +,-,*,/,exp,ln.) If of two functions neither is greater than the other, then they are finally equal (so it is a kind of finest possible order, and it allows the algebraic structure).

Yes, but we are trying to find an algebraic method to do the Hardy comparison, no?

(And I reformulate it as E*(1+L)<sup>-1</sup>*(2+L) > E*(2+L)<sup>-1</sup>*(3+L)? Answer with proof please ;) )

Hehe... you realize that, in real numbers, (2+k)/(1+k) > (3+k)/(2+k) when k is small enough, right? So if k is infinitesimal like L, my guess is that E*(1+L)<sup>-1</sup>*(2+L) is greater.

That would be my guess too. The question for me is what additional algebraic rules we need so that we can infer it by computation. Likewise in the next question.

If you know that r > L > 0 for all positive real r, then it is relatively easy to derive this rule: (2 + 0.0001) / (1 + 0.0001) > (3 + 0.0001) / (2 + 0.0001) --- you can verify this with a calculator, but since 0.0001 > L, you have (2+L)/(1+L) > (3+L)/(2+L).

Re: tetration and higher operations

Postby bo198214 » Mon Nov 27, 2006 8:26 pm

quickfur wrote:
bo198214 wrote:
quickfur wrote:
bo198214 wrote:I merely applied your rule mag(f+g)=max(mag(f),mag(g)). Is it wrong?

The rule is not wrong, but I think you made a wrong assumption that max(mag(x),mag(-e<sup>x</sup>))=mag(x).

Look at the original posting:
f(x):= x - e<sup>x</sup> hence f+exp = id, then
1 = mag(f + exp) = max(mag(f),mag(exp)) = max(mag(f),E)
I merely applied your addition rule in the middle and it then leads to a contradiction, no additional assumptions are made.

The maximization is not the maximization of the function, but the maximization of the magnitude. I consider mag(e<sup>x</sup>) to be equal to mag(-e<sup>x</sup>), so max(mag(x),mag(-e<sup>x</sup>)) = mag(-e<sup>x</sup>). There is no contradiction here.

What is not clear about my derivation? I never regarded mag(-e<sup>x</sup>), nor did I apply a maximum to functions.

I don't see a problem with division, really. They are just negative magnitudes.

Oh, it was just for my convenience, to make the multiplication of magnitude numbers a group. If we allow non-increasing functions and multiplications of them, we can get into trouble when inverting them. The most prominent example would be x<sup>1/x</sup>, which is not bijective. On the other hand, it seems that every function (with mag(f) != 0) is finally increasing or decreasing, so one should be able to apply the (final) inverse to it.

It is still true, but the comparison is invalid if you arbitrarily halve it! If you get E*L*1 + E*L*1, you must consider it as 2*E*L*1, otherwise how can you consistently do a lexicographic comparison?? This breaks on all lexicographic comparisons, not just mine!

Ok, I don't want to argue with you about what you originally meant.
Consider it simply as an example showing that here the comparison cannot be made as with polynomials. For polynomials you have x<sup>n+eps</sup> > k x<sup>n</sup>, regardless of how big k is. That means that if the leading exponents of two polynomials differ, then the polynomial with the bigger leading exponent is bigger, regardless of what coefficients the powers have. For the O-comparison, simply put an E* in front.

This is not true for the magnitude numbers. Otherwise, you could say that O(x ln x)=O(x), which is wrong.

Ok, that's an argument. I just considered a different equivalence relation on the functions, i.e. two functions are considered equivalent if they have the same greater and lower reals. In this sense we would have 1+L ~ 1.
And in this sense we would have (L+a)<sup>-1</sup>(L+b) ~ b/a.
If we Hardy-compare (L+a)<sup>-1</sup>(L+b) with r, then we see that for r=b/a we have (L+a)<sup>-1</sup>(L+b) < r, while for every r<b/a we already have (L+a)<sup>-1</sup>(L+b) > r (as I just numerically verified). So b/a has the same upper and the same lower reals as (L+a)<sup>-1</sup>(L+b).

And how do you define infimum/supremum over the set of magnitudes?

I only used it on the subset of the magnitude numbers consisting of the reals, and there we can always take the supremum and infimum.
If you use the real number definition, then r != M, because M is a non-real magnitude number.

Yes, I know. But the upper and lower reals, and hence their infimum and supremum, are equal (for r and M). It's just another equivalence relation for magnitudes near the reals. And I guess it's even compatible with the operations, i.e. if we have F ~ G then also H*F ~ H*G and F*H ~ G*H.

Yes, but we are trying to find an algebraic method to do the Hardy comparison, no?

Yes?

(And I reformulate it as E*(1+L)<sup>-1</sup>*(2+L) > E*(2+L)<sup>-1</sup>*(3+L)? Answer with proof please ;) )

Hehe... you realize that, in real numbers, (2+k)/(1+k) > (3+k)/(2+k) when k is small enough, right? So if k is infinitesimal like L, my guess is that E*(1+L)<sup>-1</sup>*(2+L) is greater.

That would be my guess too. The question for me is what additional algebraic rules we need so that we can infer it by computation. Likewise for the next question.

If you know that r > L > 0 for all positive real r, then it is relatively easy to derive this rule: (2 + 0.0001) / (1 + 0.0001) > (3 + 0.0001) / (2 + 0.0001) --- you can verify this with a calculator, but since 0.0001 > L, you have (2+L)/(1+L) > (3+L)/(2+L).

Yes, you already described this above. But that is not an algebraic rule for the magnitude numbers; you compute on the reals. There you can derive it because of the commutativity of multiplication:
(2+eps)/(1+eps) > (3+eps)/(2+eps)
(2+eps)*(2+eps) > (3+eps)*(1+eps)
4 + 4*eps + eps<sup>2</sup> > 3 + 4*eps + eps<sup>2</sup>
4 > 3

But on the magnitude numbers we have no commutativity, and hence we cannot derive it that way. That's why I'm wondering whether we can introduce additional rules, so that we can derive it algebraically.

As regards the comparison of (1+L)<sup>-1</sup>*(2+L) with (2+L)*(1+L)<sup>-1</sup>, we have the equivalences
(1+L)<sup>-1</sup>*(2+L) > (2+L)*(1+L)<sup>-1</sup>
(2+L)*(1+L) > (1+L)*(2+L)
ln(x ln(x)) * (x ln(x))<sup>2</sup> > x<sup>2</sup>ln(x) * ln(x<sup>2</sup>ln(x))
ln(x ln(x)) * ln(x) > ln(x<sup>2</sup>ln(x))
ln(x)ln(x) + ln(ln(x))ln(x) > 2ln(x) + ln(ln(x))
ln(ln(x))(ln(x)-1) > ln(x)(2-ln(x))
And the right side is eventually smaller than 0 while the left side is eventually greater.
So your guess was right.
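
The whole chain can also be checked by brute force, e.g. with this Python sketch (the two compositions are simply written out):

Code:
    import math
    f = lambda x: x**2 * math.log(x)   # the function behind 2+L
    g = lambda x: x * math.log(x)      # the function behind 1+L
    for x in [1e1, 1e3, 1e6]:
        print(x, f(g(x)) > g(f(x)))    # eventually True: (2+L)*(1+L) > (1+L)*(2+L)
        lhs = math.log(math.log(x)) * (math.log(x) - 1)
        rhs = math.log(x) * (2 - math.log(x))
        print(x, lhs > rhs)            # the reduced inequality, also eventually True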

For a general comparison of A*B with B*A, I have not found a result.
At least it does not depend on whether A < B or B < A. For example, consider the two polynomials
p(x):=x<sup>2</sup>, q(x):=x+1, then p>q
p(q(x))=(x+1)<sup>2</sup> = x<sup>2</sup>+ 2x+1
q(p(x))=x<sup>2</sup>+1
so p*q > q*p (as Hardy comparison, or put an E* in front).

But if we take p(x):=x<sup>2</sup>+1, q(x):=0.5x, then also p>q but
p(q(x))=0.25x<sup>2</sup>+1
q(p(x))=0.5x<sup>2</sup>+0.5
so q*p > p*q

It seems that it depends on the "breadth" ....

If on the other hand we consider a polynomial P and E, then it is clear that E*P > P*E.
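
All three composition comparisons are easy to check numerically, e.g. in Python (for E*P versus P*E the comparison is done on the exponents, since e<sup>x<sup>2</sup></sup> itself overflows):

Code:
    x = 1e6
    print((x + 1)**2 > x**2 + 1)               # p*q > q*p  for p=x^2, q=x+1
    print(0.5 * (x**2 + 1) > 0.25 * x**2 + 1)  # q*p > p*q  for p=x^2+1, q=0.5x
    # E*P vs P*E with P = x^2: e^(x^2) vs (e^x)^2 = e^(2x), so compare x^2 with 2x
    print(x**2 > 2 * x)                        # E*P > P*E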

Re: tetration and higher operations

Postby quickfur » Mon Nov 27, 2006 10:15 pm

bo198214 wrote:
quickfur wrote:
bo198214 wrote:
quickfur wrote:
bo198214 wrote:I merely applied your rule mag(f+g)=max(mag(f),mag(g)). Is it wrong?

The rule is not wrong, but I think you made a wrong assumption that max(mag(x),mag(-e<sup>x</sup>))=mag(x).

Look at the original posting:
f(x) := x - e<sup>x</sup>, hence f + exp = id, and then
1 = mag(f + exp) = max(mag(f), mag(exp)) = max(mag(f), E).
I merely applied your addition rule in the middle, and it leads to a contradiction (the right side is at least E > 1); no additional assumptions are made.

The maximization is not the maximization of the function, but the maximization of the magnitude. I consider mag(e<sup>x</sup>) to be equal to mag(-e<sup>x</sup>), so max(mag(x),mag(-e<sup>x</sup>)) = mag(-e<sup>x</sup>). There is no contradiction here.

What is unclear about my derivation? I never considered mag(-e<sup>x</sup>), nor did I take a maximum of functions.

Oh wait, I think I get what you're trying to say now. :oops: Essentially you're saying that f(x) + (-f(x)) = 0, but max(mag(f),mag(-f)) = mag(f), which is non-zero. You are right: if the significant terms cancel out, then the analysis breaks down. I'm not sure how to solve this. This doesn't have to involve exponentials at all; two polynomials of degree d can add up to a result of degree != d. For example: (x+1) + (-x+1) = 2, which is not of degree 1. So the rule that deg(p+q) = max(deg(p), deg(q)) for polynomials doesn't work when the significant terms of p and q are complementary.

I don't see a problem with division, really. They are just negative magnitudes.

Oh, it was just for my convenience, to make the multiplication of magnitude numbers a group. If we allow non-increasing functions and multiplication with them, we can get into trouble when inverting them. The most prominent example would be x<sup>1/x</sup>, which is not bijective. On the other hand, it seems that every function (with mag(f) != 0) is eventually increasing or decreasing, so one should be able to apply the (eventual) inverse to it.

True. I'm not sure how to solve the x<sup>1/x</sup> problem... it seems too limited if you also exclude valid functions such as x/ln x, which has magnitude (1-L).

It is still true, but the comparison is invalid if you arbitrarily halve it! If you get E*L*1 + E*L*1, you must consider it as 2*E*L*1, otherwise how can you consistently do a lexicographic comparison?? This breaks on all lexicographic comparisons, not just mine!

Ok, I don't want to argue about what you originally meant.
Consider it simply as an example showing that the comparison cannot be made as with polynomials. For polynomials you have x<sup>n+eps</sup> > k x<sup>n</sup>, no matter how big k is. That means that if the leading exponents of two polynomials differ, then the polynomial with the bigger leading exponent is bigger than the other, regardless of the coefficients of the powers. For O-comparison simply put an E* in front.

The "coefficient" must be real. The original derivation is e<sup>kx<sup>r</sup></sup> = x<sup>k</sup> o e<sup>x<sup>r</sup></sup> -> k*E*r. The k here is real. If you factor it differently, you will definitely get contradictions: e<sup>kx<sup>r</sup></sup> = e<sup>x</sup> o (kx<sup>r</sup>) -> E*r != k*E*r. This is exactly the kind of problem the factorization is supposed to work around.

Admittedly, this is a kludge... O-equivalence is not fine enough to handle these cases, as we've seen. The factorization rule is just a way to go as far as we can with O-equivalence.
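
To be clear, the two factorizations really are the same function; only the magnitude words they map to differ. A quick numerical check (sample values arbitrary):

Code:
    import math
    k, r = 3.0, 0.5
    for x in [10.0, 100.0]:
        via_power = (math.exp(x**r))**k   # x^k o e^(x^r)  ->  word k*E*r
        via_exp   = math.exp(k * x**r)    # e^x o (k x^r)  ->  word E*r
        print(math.isclose(via_power, via_exp, rel_tol=1e-9))  # True both times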

This is not true for the magnitude numbers. Otherwise, you could say that O(x ln x)=O(x), which is wrong.

Ok, that's an argument. I just considered a different equivalence relation on the functions, i.e. two functions are considered equivalent if they have the same upper and lower reals. In this sense 1+L ~ 1.

OK, if you use a different equivalence relation, then no wonder the magnitude numbers don't work out. :)

And in this sense (L+a)<sup>-1</sup>(L+b) ~ b/a.
If we Hardy-compare (L+a)<sup>-1</sup>(L+b) with r, then for every r > b/a we have (L+a)<sup>-1</sup>(L+b) < r, and for every r < b/a already (L+a)<sup>-1</sup>(L+b) > r (as I just numerically verified). So b/a has the same upper and the same lower reals as (L+a)<sup>-1</sup>(L+b).

Right, because when you work with real numbers, the infinitesimals vanish and it becomes equal to b/a.

And how do you define infimum/supremum over the set of magnitudes?

I only used it on the subset of the magnitude numbers consisting of the reals, and there we can always take the supremum and infimum.
If you use the real number definition, then r != M, because M is a non-real magnitude number.

Yes, I know. But the upper and lower reals, and hence their infimum and supremum, are equal (for r and M). It's just another equivalence relation for magnitudes near the reals. And I guess it's even compatible with the operations, i.e. if we have F ~ G then also H*F ~ H*G and F*H ~ G*H.

Hmm. This may not always work: say f=ln x and g=ln ln x. Then mag(f) = L, and mag(g)=L*L. In both cases, the closest real for both magnitudes is 0. But e<sup>x</sup> o f = x and e<sup>x</sup> o g = ln x, and you get magnitudes 1 and L, with the closest reals being 1 and 0.

[...]
(And I reformulate it as E*(1+L)<sup>-1</sup>*(2+L) > E*(2+L)<sup>-1</sup>*(3+L)? Answer with proof please ;) )

Hehe... you realize that, in real numbers, (2+k)/(1+k) > (3+k)/(2+k) when k is small enough, right? So if k is infinitesimal like L, my guess is that E*(1+L)<sup>-1</sup>*(2+L) is greater.

That would be my guess too. The question for me is what additional algebraic rules we need so that we can infer it by computation. Likewise for the next question.

If you know that r > L > 0 for all positive real r, then it is relatively easy to derive this rule: (2 + 0.0001) / (1 + 0.0001) > (3 + 0.0001) / (2 + 0.0001) --- you can verify this with a calculator, but since 0.0001 > L, you have (2+L)/(1+L) > (3+L)/(2+L).

Yes, you already described this above. But that is not an algebraic rule for the magnitude numbers; you compute on the reals. There you can derive it because of the commutativity of multiplication:
(2+eps)/(1+eps) > (3+eps)/(2+eps)
(2+eps)*(2+eps) > (3+eps)*(1+eps)
4 + 4*eps + eps<sup>2</sup> > 3 + 4*eps + eps<sup>2</sup>
4 > 3

But on the magnitude numbers we have no commutativity, and hence we cannot derive it that way. That's why I'm wondering whether we can introduce additional rules, so that we can derive it algebraically.

Ah, I see. Is that why you were trying to compute the closest real to a magnitude?

As regards the comparison of (1+L)<sup>-1</sup>*(2+L) with (2+L)*(1+L)<sup>-1</sup>, we have the equivalences
(1+L)<sup>-1</sup>*(2+L) > (2+L)*(1+L)<sup>-1</sup>
(2+L)*(1+L) > (1+L)*(2+L)
ln(x ln(x)) * (x ln(x))<sup>2</sup> > x<sup>2</sup>ln(x) * ln(x<sup>2</sup>ln(x))
ln(x ln(x)) * ln(x) > ln(x<sup>2</sup>ln(x))
ln(x)ln(x) + ln(ln(x))ln(x) > 2ln(x) + ln(ln(x))
ln(ln(x))(ln(x)-1) > ln(x)(2-ln(x))
And the right side is eventually smaller than 0 while the left side is eventually greater.
So your guess was right.

Hmm. Now if I consider mag(ln(x)) = mag(-ln(x)), then my guess was wrong.

For a general comparison of A*B with B*A, I have not found a result.
At least it does not depend on whether A < B or B < A. For example, consider the two polynomials
p(x):=x<sup>2</sup>, q(x):=x+1, then p>q
p(q(x))=(x+1)<sup>2</sup> = x<sup>2</sup>+ 2x+1
q(p(x))=x<sup>2</sup>+1
so p*q > q*p (as Hardy comparison, or put an E* in front).

But if we take p(x):=x<sup>2</sup>+1, q(x):=0.5x, then also p>q but
p(q(x))=0.25x<sup>2</sup>+1
q(p(x))=0.5x<sup>2</sup>+0.5
so q*p > p*q

It seems that it depends on the "breadth" ....

"Breadth"?

If on the other hand we consider a polynomial P and E, then it is clear that E*P > P*E.

In general, M*N > N*M if M is super-polynomial, and N*M > M*N if M is logarithmic. I'm not sure what the behaviour is when M is polynomial.
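
For instance, with N = x<sup>2</sup> (a sketch; exponents are compared directly where e<sup>x</sup> would overflow):

Code:
    import math
    x = 1e4
    # M = E (e^x): M*N = e^(x^2), N*M = (e^x)^2 = e^(2x); compare x^2 with 2x
    print(x**2 > 2 * x)                     # True: M*N > N*M
    # M = L (ln x): M*N = ln(x^2), N*M = (ln x)^2
    print(math.log(x)**2 > math.log(x**2))  # True: N*M > M*N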

It is also interesting to note the behaviour of derivatives: for polynomials, mag(f) - mag(f') = 1, but if f is super-polynomial, then 1 > mag(f) - mag(f') (and the difference is 0 when f=e<sup>x</sup>). When f is sub-polynomial, mag(f) - mag(f') > 1.
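
If we use the rule that subtraction of magnitudes corresponds to division of functions, then mag(f) - mag(f') = mag(f/f'), and the three cases can be seen by comparing f/f' against x (a symbolic sketch with sympy; the sample functions are arbitrary representatives):

Code:
    import sympy as sp
    x = sp.symbols('x', positive=True)
    for f in (x**3, sp.exp(x), sp.log(x)):
        ratio = sp.simplify(f / sp.diff(f, x))       # f/f'
        print(f, ratio, sp.limit(ratio / x, x, sp.oo))
    # x**3 : limit 1/3 -> f/f' grows like x, so mag(f) - mag(f') = 1
    # e^x  : limit 0   -> f/f' grows slower than x, difference < 1
    # ln x : limit oo  -> f/f' grows faster than x, difference > 1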

Re: tetration and higher operations

Postby bo198214 » Tue Nov 28, 2006 11:17 am

quickfur wrote:
It is still true, but the comparison is invalid if you arbitrarily halve it! If you get E*L*1 + E*L*1, you must consider it as 2*E*L*1, otherwise how can you consistently do a lexicographic comparison?? This breaks on all lexicographic comparisons, not just mine!

Ok, I don't want to argue about what you originally meant.
Consider it simply as an example showing that the comparison cannot be made as with polynomials. For polynomials you have x<sup>n+eps</sup> > k x<sup>n</sup>, no matter how big k is. That means that if the leading exponents of two polynomials differ, then the polynomial with the bigger leading exponent is bigger than the other, regardless of the coefficients of the powers. For O-comparison simply put an E* in front.

The "coefficient" must be real. The original derivation is e<sup>kx<sup>r</sup></sup> = x<sup>k</sup> o e<sup>x<sup>r</sup></sup> -> k*E*r. The k here is real.

Yes, and for reals s, r and an arbitrary real k we have
E*s > k*E*r for s > r, but it can become invalid that
E*S > k*E*R for S > R if S or R are not real.
That is what my example demonstrated.

OK, if you use a different equivalence relation, then no wonder the magnitude numbers don't work out. :)

In a first attempt I thought they would be equal.

And I guess it's even compatible with the operations, i.e. if we have F ~ G then also H*F ~ H*G and F*H ~ G*H.

Hmm. This may not always work: say f=ln x and g=ln ln x. Then mag(f) = L, and mag(g)=L*L. In both cases, the closest real for both magnitudes is 0. But e<sup>x</sup> o f = x and e<sup>x</sup> o g = ln x, and you get magnitudes 1 and L, with the closest reals being 1 and 0.

right

Is that why you were trying to compute the closest real to a magnitude?

Not exactly. And it doesn't seem very fruitful either. Also not fruitful, it seems, is my approach of developing a magnitude into an L-series.
If two numbers are equal at the level of the reals, for example 1+L ~ 1, then one simply subtracts the real part and looks at the first L level; if they are again equal there, one subtracts again and looks at the L<sup>2</sup> level, and so on. In this way we get an L-series:
r<sub>0</sub> + r<sub>1</sub>*L + r<sub>2</sub>*L<sup>2</sup> + ....
It's a bit like in asymptotics, where for a given sequence of gauge functions f<sub>n</sub>, which must satisfy f<sub>n+1</sub>/f<sub>n</sub> -> 0, one can develop a series in those functions: a<sub>0</sub>f<sub>0</sub> + a<sub>1</sub>f<sub>1</sub> + ....
For f<sub>n</sub>(x) = x<sup>n</sup> one gets the usual power series.

Unfortunately they do not have the nice property of analytic functions that equal series imply equal functions.
For example, (L+1)<sup>-1</sup>*(L+2) seems to develop into 2 + 0*L + 0*L<sup>2</sup> + ...., even though it is not equal to 2.
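
Numerically one can watch exactly this happen (a Python sketch; inv computes the inverse function by doubling and bisection):

Code:
    import math

    def inv(fun, y):
        # numeric inverse of an eventually increasing function
        lo, hi = 1.0001, 2.0
        while fun(hi) < y:
            hi *= 2
        for _ in range(200):
            mid = (lo + hi) / 2
            if fun(mid) < y:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    f = lambda t: t * math.log(t)       # the function behind L+1
    g = lambda t: t**2 * math.log(t)    # the function behind L+2
    for x in [1e2, 1e4, 1e6]:
        v = inv(f, g(x))                # the function behind (L+1)^-1 * (L+2)
        print(x, math.log(v) / math.log(x), v / x**2)
    # the exponent tends to 2 and every L-level correction to 0,
    # yet v/x^2 tends to 1/2: the series 2 + 0*L + ... does not pin the function down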

Now if I consider mag(ln(x)) = mag(-ln(x)), then my guess was wrong.

But you already showed above that this is faulty.

For a general comparison of A*B with B*A, I have not found a result.
At least it does not depend on A < B or B < A. For example consider two polynomials
p(x):=x<sup>2</sup>, q(x):=x+1, then p>q
p(q(x))=(x+1)<sup>2</sup> = x<sup>2</sup>+ 2x+1
q(p(x))=x<sup>2</sup>+1
so p*q > q*p (as Hardy, or put E* in front).

But if we take p(x):=x<sup>2</sup>+1, q(x):=0.5x, then also p>q but
p(q(x))=0.25x<sup>2</sup>+1
q(p(x))=0.5x<sup>2</sup>+0.5
so q*p > p*q

It seems that it depends on the "breadth" ....

"Breadth"?

Yes: if p has more monomials than q, it seems that q*p > p*q. Any counterexamples?

In general, M*N > N*M if M is super-polynomial, and N*M > M*N if M is logarithmic. I'm not sure what the behaviour is when M is polynomial.

But for what N? I guess N polynomial.

It is also interesting to note the behaviour of derivatives: for polynomials, mag(f) - mag(f') = 1, but if f is super-polynomial, then 1 > mag(f) - mag(f') (and the difference is 0 when f=e<sup>x</sup>). When f is sub-polynomial, mag(f) - mag(f') > 1.

hm.

Re: tetration and higher operations

Postby quickfur » Tue Nov 28, 2006 5:37 pm

bo198214 wrote:
quickfur wrote:
It is still true, but the comparison is invalid if you arbitrarily halve it! If you get E*L*1 + E*L*1, you must consider it as 2*E*L*1, otherwise how can you consistently do a lexicographic comparison?? This breaks on all lexicographic comparisons, not just mine!

Ok, I don't want to argue about what you originally meant.
Consider it simply as an example showing that the comparison cannot be made as with polynomials. For polynomials you have x<sup>n+eps</sup> > k x<sup>n</sup>, no matter how big k is. That means that if the leading exponents of two polynomials differ, then the polynomial with the bigger leading exponent is bigger than the other, regardless of the coefficients of the powers. For O-comparison simply put an E* in front.

The "coefficient" must be real. The original derivation is e<sup>kx<sup>r</sup></sup> = x<sup>k</sup> o e<sup>x<sup>r</sup></sup> -> k*E*r. The k here is real.

Yes, and for reals s, r and an arbitrary real k we have
E*s > k*E*r for s > r, but it can become invalid that
E*S > k*E*R for S > R if S or R are not real.
That is what my example demonstrated.

Did I explain the comparison method wrongly or something? I still don't understand your example. It's like saying that since 3x > 2x, therefore 3x > 2x + 2x. I don't understand what you were trying to do.

[...]
Is that why you were trying to compute the closest real to a magnitude?

Not exactly. And it doesn't seem very fruitful either. Also not fruitful, it seems, is my approach of developing a magnitude into an L-series.
If two numbers are equal at the level of the reals, for example 1+L ~ 1, then one simply subtracts the real part and looks at the first L level; if they are again equal there, one subtracts again and looks at the L<sup>2</sup> level, and so on. In this way we get an L-series:
r<sub>0</sub> + r<sub>1</sub>*L + r<sub>2</sub>*L<sup>2</sup> + ....
It's a bit like in asymptotics, where for a given sequence of gauge functions f<sub>n</sub>, which must satisfy f<sub>n+1</sub>/f<sub>n</sub> -> 0, one can develop a series in those functions: a<sub>0</sub>f<sub>0</sub> + a<sub>1</sub>f<sub>1</sub> + ....
For f<sub>n</sub>(x) = x<sup>n</sup> one gets the usual power series.

Unfortunately they do not have the nice property of analytic functions that equal series imply equal functions.
For example, (L+1)<sup>-1</sup>*(L+2) seems to develop into 2 + 0*L + 0*L<sup>2</sup> + ...., even though it is not equal to 2.

Yeah, I think the algebraic structure of the magnitude numbers is not strong enough. The O-equality criterion leaves out too many details, which causes inconsistencies that need all kinds of special cases to sort out properly. Maybe we should start over from something more solid. :)

Now if I consider mag(ln(x)) = mag(-ln(x)), then my guess was wrong.

But you already showed above that this is faulty.

Hmm. But even if you introduce a complementation on magnitude numbers such that mag(-f) = complement(mag(f)), you still have a problem: mag(f + (-f)) -> max(mag(f), mag(-f)) = mag(f), which is false because f + (-f) = 0.

Also, I think no matter what you do, using O-comparison will always give you this trouble: mag(x<sup>2</sup> + 2x) = 2 = mag(-x<sup>2</sup>), but mag((x<sup>2</sup>+2x) + (-x<sup>2</sup>)) = 1, whereas mag(x<sup>2</sup> + (-x<sup>2</sup>)) = 0. So for two functions f and g, (f+g) can have any magnitude less than or equal to max(mag(f), mag(g)), because the + hides those extra terms behind the O-equivalence.

For a general comparison of A*B with B*A, I have not found a result.
At least it does not depend on whether A < B or B < A. For example, consider the two polynomials
p(x):=x<sup>2</sup>, q(x):=x+1, then p>q
p(q(x))=(x+1)<sup>2</sup> = x<sup>2</sup>+ 2x+1
q(p(x))=x<sup>2</sup>+1
so p*q > q*p (as Hardy comparison, or put an E* in front).

But if we take p(x):=x<sup>2</sup>+1, q(x):=0.5x, then also p>q but
p(q(x))=0.25x<sup>2</sup>+1
q(p(x))=0.5x<sup>2</sup>+0.5
so q*p > p*q

It seems that it depends on the "breadth" ....

"Breadth"?

Yes: if p has more monomials than q, it seems that q*p > p*q. Any counterexamples?

No, I was just wondering what you meant by "breadth".

In general, M*N > N*M if M is super-polynomial, and N*M > M*N if M is logarithmic. I'm not sure what the behaviour is when M is polynomial.

But for what N? I guess N polynomial.

It seems to also work for non-polynomial N, except that in some cases you get equality instead of strict inequality. But I may have missed some cases.

It is also interesting to note the behaviour of derivatives: for polynomials, mag(f) - mag(f') = 1, but if f is super-polynomial, then 1 > mag(f) - mag(f') (and the difference is 0 when f=e<sup>x</sup>). When f is sub-polynomial, mag(f) - mag(f') > 1.

hm.

e<sup>x</sup> seems to be the "balance" between faster and slower derivatives. Anything growing slower than e<sup>x</sup> has a derivative that grows slower than itself; anything growing faster than e<sup>x</sup> has a derivative that grows faster than itself.