bo198214 wrote:
This is in continuation of the thread The exponential power series. It should also be mentioned that there already exists the thread Tetration and other large number sequences.
quickfur wrote:
Now, this is an area that I'm deeply interested in.
Hehey, we are beating the 3, whether as the number of dimensions or as the number of operations!
Heh. Never thought of it that way.
For a long time, I've been interested in generalizing the exponentiation function to tetration.
Ok, here I should add some notes.
1. Recently Galidakis showed that there is a continuous (even arbitrarily often differentiable) extension of the higher operations defined by the original Ackermann function. The problem with the extension is that it is quite arbitrary; a uniqueness criterion is lacking.
Interesting. Fortunately, at least in the scope of my original considerations, uniqueness is not a big problem, because the relative behaviour of functions as x -> infinity is governed by the large-scale behaviour rather than by the details (which are what determine uniqueness). As far as I'm concerned, I could just as well consider functions from the natural numbers to the real numbers and my analyses would still yield similar results. (More on this later.)
2. I don't see a reasonable connection, regarding a continuous extension, between the extension of x^(x^...) and the extension of e^(e^....x). The first one gives nearly no clue how to define, for example, half a tetration, while the latter has one, namely f(f(x)) = e<sup>x</sup>.
Yes, the breakdown of associativity causes a lot of conventional properties to be absent.
Sometimes I wonder, though, if our expectation of such conventional behaviour could be the reason we haven't even the vaguest idea how to visualize, say, a non-constructible set implied by the axiom of choice. (For example, the nature of the infinite convolutions in a Banach-Tarski paradoxical decomposition.)
3. If, on the other hand, we define f(x) := x<sup>x</sup>, then we can try to find the analytic iterates (as introduced in my previous post), because f is analytic and has a fixed point at 1. Look at:
f(f(x)) = (x<sup>x</sup>)<sup>x<sup>x</sup></sup> = x<sup>x<sup>x</sup>·x</sup>
f(f(f(x))) = x<sup>x<sup>x</sup>·x·x<sup>x<sup>x</sup>·x</sup></sup>
We see that the tower is n+1 high for n iterations of f. Iteration of f can also be done by iteration of g(x) := xe<sup>x</sup> (which is the inverse of the famous Lambert W function), if we notice that g = ln o f o exp and therefore g<sup>n</sup> = ln o f<sup>n</sup> o exp. g even has the desired standard fixed point g(0) = 0. The analytic iterate should be unique, though I haven't yet verified whether the previously given analytic iteration formula converges for g at 0. If not, then at least for each iterate there is a unique analytic function defined for x > 0 that approximates the analytic iteration power series at 0.
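(Just to illustrate that conjugacy numerically, here is a quick Python sketch of my own; f and g follow the definitions above, everything else is scaffolding:)

```python
import math

def f(x):
    """f(x) = x^x, the function whose iterates we want."""
    return x ** x

def g(x):
    """g(x) = x*e^x, the inverse of the Lambert W function."""
    return x * math.exp(x)

# Conjugacy: g = ln o f o exp, hence g^n = ln o f^n o exp.
x = 0.5
print(g(g(x)))                          # ~1.8799
print(math.log(f(f(math.exp(x)))))      # same value, computed via f
```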
Iteration of f was in fact already considered by Frappier in "Iteration of a kind of exponentials", 1990, though it seems he showed neither uniqueness nor analyticity of the iterates. So this work remains to be done by someone.
Hmm. You're right, a continuous extension of tetration would be nontrivial. In my case, though, all I need is one such extension with the property that its values at the natural numbers coincide with the tetration of naturals.
4. e<sup>x</sup> has a family of real analytic iterates (as shown by Kneser already in 1949); unfortunately these are not unique either, because the development is around a complex fixed point. The quite reputable mathematician Szekeres devoted much time to finding a "best" iterate, but I would say still unsuccessfully (see his "Abel's equations and regular growth ....", 1998, and notice that he was born in 1912!).
Interesting.
Why don't we have such uniqueness problems for the lower operations, multiplication and exponentiation?
The short answer is: because (nm)x = n(mx) and x<sup>nm</sup> = (x<sup>n</sup>)<sup>m</sup>. By this law we have a clue how to define (1/n)x or x<sup>1/n</sup>: the law should remain valid on the extension to the rational numbers, so x = x<sup>(1/n)n</sup> = (x<sup>1/n</sup>)<sup>n</sup>, and hence it is clear that x<sup>1/n</sup> must be the inverse function of x<sup>n</sup>. Similarly for multiplication, though that case is less interesting.
Unfortunately this law is no longer valid for tetration, i.e. usually x^^(nm) != (x^^n)^^m, because the parenthesization is different (which matters for x<sup>y</sup>, in contrast to multiplication and addition, which are associative).
Yeah, non-associative functions are really, really hard to handle. I wish somebody would explore this more and relate it in terms that can be understood without requiring a PhD in math.
To come out of this dilemma I invented a new number domain, the "arborescent numbers", in which this law is valid for all higher operations. And I am hopeful that one day all the higher operations will have a unique continuous extension there. The current state of my research can be found here.
Very interesting. I haven't taken a closer look at this yet, but this is certainly a very interesting way to approach the problem.
Anyway, since we're sharing research here, maybe I can take the liberty to talk about how I independently "discovered" transfinite and infinitesimal real numbers long before I knew about the research surrounding hyperreals.
The motivation behind all of this came from computer science classes on the complexity of algorithms: O(n), O(log n), O(n<sup>2</sup>), etc. Obviously, these are closely related to the corresponding real functions x, ln x, x<sup>2</sup>, etc. I learned that O(log n) grows slower than O(sqrt(n)), and immediately the question arises: at what point does x<sup>1/n</sup> overtake ln x? Then I learned that O(2<sup>n</sup>) is much worse than O(n<sup>3</sup>) or O(n<sup>4</sup>), and again the question arises: at what point does 2<sup>x</sup> surpass x<sup>n</sup>?
Of course, the answer is that ln x is sub-polynomial, in that it is asymptotically slower than x<sup>s</sup> for all s > 0, so you could take x<sup>0.0000001</sup> and it would still exceed ln x eventually. Correspondingly, 2<sup>x</sup> (or any exponential, for that matter) is super-polynomial, asymptotically faster than x<sup>s</sup> for every real s, so even 1.000000001<sup>x</sup> would eventually outgrow x<sup>100000000</sup>.
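(To put actual numbers on those crossover points, here's a little Python sketch of mine; `crossover` is my own helper, just bisecting on the logarithms of the two functions:)

```python
import math

def crossover(d, lo, hi, iters=200):
    """Bisect for a root of d(x), assuming d(lo) > 0 > d(hi)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if d(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Compare in log space: where does 2^x finally overtake x^10?
print(crossover(lambda x: 10 * math.log(x) - x * math.log(2), 2, 100))
# ~58.8; beyond this point 2^x wins forever.

# Even 1.000000001^x overtakes x^100000000 -- just absurdly late:
print(crossover(lambda x: 1e8 * math.log(x) - x * math.log1p(1e-9), 1e10, 1e25))
# ~4.3e18
```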
Now, this is just the beginning of the fun. It can be shown that every polynomial of degree d behaves asymptotically like x<sup>d</sup>, and this can be generalized to real d. Because of this, I chose to represent the "magnitude" of the polynomial as the real number d. Accordingly, (generalized) polynomials with negative degree would have negative magnitudes, interpreted as the rate at which the function shrinks to 0 as x -> infinity. In the same vein, e<sup>-x</sup> has a super-polynomial negative magnitude: it shrinks to 0 much faster than any inverse power x<sup>-d</sup>. Dually, this means that 1/ln x shrinks more slowly than any inverse power of x.
What gets more interesting is what happens to the magnitude when you multiply functions. Multiplying x<sup>r</sup> by x<sup>s</sup> yields x<sup>r+s</sup>, which has degree (r+s). This result can be proven to hold for all generalized polynomials as well. Furthermore, for generalized polynomials, function composition corresponds to multiplication of the degrees: for polynomials p(x) and q(x) with degrees m and n, p(q(x)) behaves like a polynomial of degree m*n. So we may say that the magnitudes of polynomial functions correspond with the real numbers under addition and multiplication.
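(Both rules are easy to check numerically by estimating the degree as a log-log slope at large x; a rough Python sketch, with `degree` being my own helper and the sample polynomials arbitrary:)

```python
import math

def degree(f, x=1e6, h=1e3):
    """Estimate the asymptotic degree of f as a log-log slope at large x."""
    return (math.log(f(x + h)) - math.log(f(x))) / (math.log(x + h) - math.log(x))

p = lambda x: x**2 + 5 * x        # degree 2
q = lambda x: x**3 + 1            # degree 3
print(degree(lambda x: p(x) * q(x)))   # ~5: degrees add under multiplication
print(degree(lambda x: p(q(x))))       # ~6: degrees multiply under composition
```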
Bringing exponentials and logarithms into the picture, then, we may regard the magnitude of ln x, which I denote as lambda (I'll use L here to get around phpBB's limitation), as an "infinitesimal", since L is less than r for all real numbers r>0. (L lies between 0 and all positive real numbers.) Similarly, the magnitude of e<sup>x</sup>, E (or xi), is an "infinite number". L and E are inverses, since the composition of the logarithm and the exponential is the identity function: L*E = 1, where 1 denotes the magnitude of a degree-1 polynomial, i.e., the identity function.
Now, the "multiplication" of magnitudes is not a single-valued function for non-polynomial magnitudes, so the correspondence should not be misconstrued to be an isomorphism. Also, it is non-commutative, since function composition is non-commutative. Regardless, some interesting things can be observed: ln(ln x) corresponds with L*L, and it should be easy to see that L*L is less than L (i.e., ln(ln x) grows slower than ln x), and, in general, L<sup>n</sup> is less than L<sup>m</sup> if n > m. Also, it is relatively easily proven that (ln x)<sup>n</sup> is still sub-polynomial, although greater than ln x. Hence, there is a whole hierarchy of sub-polynomial magnitudes, all of which lie between 0 and all positive real numbers. It can be easily proven that x ln x, which corresponds with (1+L), lies between 1 and all reals greater than 1. Similar results can be obtained for the magnitudes related to L. These must therefore correspond with infinitesimals.
The exponentials are a lot more interesting. It is easily seen that exponentiating a polynomial causes its lower-order terms to become asymptotically significant: x<sup>2</sup> and x<sup>2</sup>+x behave the same asymptotically, but e<sup>x<sup>2</sup></sup> is asymptotically less than e<sup>x<sup>2</sup>+x</sup>. (In this sense, the "multiplication" of magnitudes is multivalued when exponentials are involved.) Nor is e<sup>x</sup> by any means the "smallest" infinite magnitude: a composition such as e<sup>(ln x)<sup>2</sup></sup> is smaller than e<sup>x<sup>r</sup></sup> for every positive r, yet it is still super-polynomial.
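(The same trick works on the super-polynomial side: compare everything in log space, again with u = ln x. The exponents 50, 2 and 1/10 below are arbitrary choices of mine:)

```python
import math

# In log space (u = ln x):
#   x^50              has log  50*u
#   e^((ln x)^2)      has log  u^2       (this is x^(ln x))
#   e^(x^(1/10))      has log  e^(u/10)
for u in (10, 100, 1000):
    print(u, 50 * u, u**2, math.exp(u / 10))
# u^2 overtakes 50*u (super-polynomial), yet e^(u/10) eventually dwarfs both.
```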
Now, the L's are by no means the only infinitesimals... the inverse of tetration, which corresponds with the iterated logarithm from computer science, yields a sub-polynomial magnitude that is "infinitely" smaller than any magnitude derived from L. Since Ackermann has shown us that there is an infinite hierarchy of iterated operations, there must correspondingly be an infinite hierarchy of inverses, which yield an infinite hierarchy of infinitesimal magnitudes. In fact, the diagonal Ackermann function A'(n) = A(n,n) is a "super operation" whose magnitude must be greater than the magnitudes of all the iterated operations. The inverse of A'(n) would yield a magnitude so small that it is less than all the infinitesimal magnitudes corresponding with the iterated operations.
This can be continued indefinitely: one could iterate A'(n) to generate greater functions, and then diagonalize them to generate A''(n), a super-Ackermann function, then iterate *that*, ad infinitum. These would generate their corresponding inverses: functions growing ever more slowly, with magnitudes ever closer to 0.
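(For concreteness, here is the standard two-argument Ackermann-Peter function in Python, a sketch of mine rather than anything from the discussion above; the diagonal A'(n) = A(n,n) explodes almost immediately, which is why the loop stops at n = 3:)

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(1_000_000)

@lru_cache(maxsize=None)
def ackermann(m, n):
    """The two-argument Ackermann-Peter function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m - 1, n - 1))

# The diagonal A'(n) = A(n, n): 1, 3, 7, 61, then hopelessly out of reach.
for n in range(4):
    print(n, ackermann(n, n))
```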
I highly suspect that the magnitudes generated this way would result in a set significantly larger than the set of real numbers. But one can never be too sure about anything when dealing with uncountable cardinals without actually working through the proofs, so who knows.