The exponential power series


Postby quickfur » Tue Nov 14, 2006 8:15 pm

It is well-known that the exponential function e<sup>x</sup> can be expanded using Taylor series as:

e<sup>x</sup> = 1 + x/1! + x<sup>2</sup>/2! + x<sup>3</sup>/3! + ...

Interestingly enough, a similar series can be defined for square matrices:

e<sup>A</sup> = I + A/1! + (AA)/2! + (AAA)/3! + ...

where I is the identity matrix. This so-called "matrix exponential" has many properties similar to numerical exponentials (although of course it also has some different properties due to the behaviour of matrix multiplication).
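As a quick sanity check, the truncated series is easy to compute numerically. A sketch in Python using NumPy (a production routine such as scipy.linalg.expm does more work to stay accurate for large matrices):

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential e^A = I + A/1! + A^2/2! + ... truncated after
    `terms` terms (a sketch; adequate for small, well-behaved matrices)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # now A^k / k!
        result = result + term
    return result

# The generator of 2D rotations: e^(tJ) is rotation by angle t
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = expm_series(J)
```

Here R comes out as [[cos 1, sin 1], [-sin 1, cos 1]], rotation by one radian, just as the series predicts.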

What I'm curious about is how far this power series can be generalized while still producing an exponential-like function. That is, what are the requirements on the operations (multiplication and addition) for the resulting function to be well-defined? Commutativity of multiplication doesn't seem to be necessary (matrix multiplication isn't commutative in general), although I suppose associativity would be essential.

Given some arbitrary function of two variables, f(x,y), with the property that f(f(x,y),z) = f(x,f(y,z)) (i.e. associativity), will this series have exponential-like properties?

exp<sub>f</sub>(x) = 1 + x/1! + f(x,x)/2! + f(f(x,x),x)/3! + ...

What other additional properties of f are required for this to result in a sensible function? Curious minds want to know. :)
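One way to experiment with this question is to code the series generically over an arbitrary binary operation. A minimal sketch (the name exp_series and its calling convention are made up for illustration; it assumes +, scalar division, and f all make sense for the values involved, and ignores convergence):

```python
def exp_series(f, one, x, terms=25):
    """Generalized exponential: one + x/1! + f(x,x)/2! + f(f(x,x),x)/3! + ...
    where f is an associative binary operation and `one` plays the role of
    the multiplicative identity."""
    total = one + x            # the 0! and 1! terms
    power = x
    factorial = 1.0
    for k in range(2, terms):
        power = f(power, x)    # the k-fold product f(f(...f(x,x)...), x)
        factorial *= k
        total = total + power / factorial
    return total

# With f = ordinary multiplication this reduces to the scalar exponential
e1 = exp_series(lambda a, b: a * b, 1.0, 1.0)
```

The same function works unchanged with square NumPy matrices and f = matrix multiplication, which recovers the matrix exponential above.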
quickfur
Pentonian
 
Posts: 3025
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

Postby houserichichi » Wed Nov 15, 2006 3:20 am

What kinds of "exponential properties" are you looking for? For matrices X and Y, e<sup>X+Y</sup> = e<sup>X</sup>e<sup>Y</sup> whenever [X,Y] = 0, but the identity fails for general non-commuting matrices. So the exp function for matrices isn't as nice as it is for real numbers, for the simple reason that matrices don't commute.
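A quick numerical illustration of this point (a sketch; the truncated-series exponential used here is only adequate for small matrices):

```python
import numpy as np

def expm_series(A, terms=40):
    """e^A via the truncated Taylor series I + A/1! + A^2/2! + ..."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # A^k / k!
        out = out + term
    return out

# Commuting pair (both diagonal): e^(X+Y) = e^X e^Y holds.
X = np.diag([1.0, 2.0])
Y = np.diag([0.5, -1.0])
commuting_ok = np.allclose(expm_series(X + Y), expm_series(X) @ expm_series(Y))

# Non-commuting pair: the identity generally fails.
X2 = np.array([[0.0, 1.0], [0.0, 0.0]])
Y2 = np.array([[0.0, 0.0], [1.0, 0.0]])
noncommuting_ok = np.allclose(expm_series(X2 + Y2),
                              expm_series(X2) @ expm_series(Y2))
```

For the second pair, e<sup>X2</sup>e<sup>Y2</sup> = [[2,1],[1,1]] while e<sup>X2+Y2</sup> has cosh/sinh entries, so the two sides visibly disagree.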
houserichichi
Tetronian
 
Posts: 590
Joined: Wed May 12, 2004 1:03 am
Location: Canada

Postby bo198214 » Wed Nov 15, 2006 1:36 pm

I recently looked at the reverse direction, i.e. defining A<sup>s</sup>.
We know Newton's binomial formula (which PWrong already mentioned in this forum):

(x+1)<sup>s</sup> = 1 + (s over 1) x<sup>1</sup> + (s over 2) x<sup>2</sup> + ....

where (s over k) is defined as s(s-1)....(s-k+1)/k!.

For natural s = n this reduces to the usual binomial formula, because (n over k) is 0 for k > n (the factor (n-n) then occurs in the product n(n-1)....(n-k+1)).

We can define this in the same way for matrices and then have the usual
A<sup>s+t</sup> = A<sup>s</sup>A<sup>t</sup> and A<sup>st</sup> = (A<sup>s</sup>)<sup>t</sup>, though I am not sure about the convergence of the infinite matrix sum.
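A small numerical check of the matrix version (a sketch; the series is only applied to a matrix close to the identity, where convergence is unproblematic, and no convergence test is attempted):

```python
import numpy as np

def binom(s, k):
    """Generalized binomial coefficient (s over k) = s(s-1)...(s-k+1)/k!"""
    out = 1.0
    for j in range(k):
        out *= (s - j) / (j + 1)
    return out

def matrix_power(A, s, terms=60):
    """A^s = sum_k (s over k) (A - I)^k, assuming A is close enough to the
    identity for the series to converge (no check is made here)."""
    n = A.shape[0]
    D = A - np.eye(n)
    out = np.zeros((n, n))
    term = np.eye(n)                 # (A - I)^0
    for k in range(terms):
        out = out + binom(s, k) * term
        term = term @ D
    return out

A = np.array([[1.2, 0.1], [0.0, 0.9]])   # close to the identity
half = matrix_power(A, 0.5)              # a matrix square root of A
```

Squaring the computed A<sup>1/2</sup> recovers A, consistent with the law A<sup>s+t</sup> = A<sup>s</sup>A<sup>t</sup>.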

What is even more striking to me is that we can use this technique to define the composition of analytic functions. Recall that a (real) analytic function is one whose Taylor series has a positive radius of convergence and equals the function within that radius. We denote the coefficients of the Taylor series of f by f<sub>i</sub>, i.e.
f(x) = f<sub>0</sub> + f<sub>1</sub> x + f<sub>2</sub> x<sup>2</sup> + ...

Now it is simple to work out formulas for the coefficients of f+g and f·g, namely
(f+g)<sub>i</sub> = f<sub>i</sub> + g<sub>i</sub>
(fg)<sub>n</sub> = f<sub>0</sub>g<sub>n</sub> + ... + f<sub>i</sub>g<sub>n-i</sub> + ... + f<sub>n</sub>g<sub>0</sub>
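These coefficient rules translate directly into code (a minimal sketch; coefficient lists are indexed from the constant term and truncated to a fixed length):

```python
def series_mul(f, g):
    """Cauchy product of two power series, truncated to the shorter length:
    (fg)_n = f_0 g_n + f_1 g_(n-1) + ... + f_n g_0."""
    n = min(len(f), len(g))
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(n)]

def series_add(f, g):
    """(f+g)_i = f_i + g_i, termwise."""
    return [a + b for a, b in zip(f, g)]

# (1 + x)(1 + x) = 1 + 2x + x^2
sq = series_mul([1, 1, 0], [1, 1, 0])
tot = series_add([1, 1, 0], [1, 1, 0])
```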

But anyone who has looked at Faà di Bruno's formula for the coefficients of the composition (f o g)(x) := f(g(x)) will be terrified. (Note that in the given link f<sub>i</sub> = f<sup>(i)</sup>(0)/i! and f<sup>(k)</sup>(g(0)) = f<sup>(k)</sup>(0).)

This is where these matrices can help us. Simply consider the (infinite) matrix whose nth row holds the coefficients of the nth power of f (where f<sub>0</sub> is assumed to be 0), and denote it M<sub>f</sub>. Then
M<sub>f o g</sub> = M<sub>f</sub> M<sub>g</sub>
The matrices M<sub>f</sub> are triangular, so actually carrying out the matrix multiplication does not involve infinite sums. The first row of the matrix holds the coefficients of the function f.
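A finite truncation of this matrix is easy to build, and the identity M<sub>f o g</sub> = M<sub>f</sub>M<sub>g</sub> can be checked directly. A sketch (the names carleman, compose, and series_mul are made-up labels for the construction just described; triangularity makes the N x N truncation exact):

```python
import numpy as np

def series_mul(f, g):
    """Truncated Cauchy product of two equal-length coefficient lists."""
    n = len(f)
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(n)]

def carleman(coeffs, N):
    """N x N truncation of M_f: row n holds the coefficients of x^1..x^N
    in f(x)^n.  `coeffs` lists f_1, f_2, ... (f_0 is assumed to be 0)."""
    f = ([0.0] + list(coeffs) + [0.0] * N)[:N + 1]
    M = np.zeros((N, N))
    power = [1.0] + [0.0] * N            # f^0 = 1
    for n in range(1, N + 1):
        power = series_mul(power, f)     # now f^n
        M[n - 1, :] = power[1:]
    return M

def compose(f_coeffs, g_coeffs, N):
    """Coefficients of f(g(x)) up to x^N: (f o g)_m = sum_n f_n (g^n)_m."""
    fl = (list(f_coeffs) + [0.0] * N)[:N]
    return list(np.array(fl) @ carleman(g_coeffs, N))

N = 6
f = [1.0, 1.0]          # f(x) = x + x^2
g = [1.0, 0.0, 2.0]     # g(x) = x + 2x^3
lhs = carleman(compose(f, g, N), N)      # M_{f o g}
rhs = carleman(f, N) @ carleman(g, N)    # M_f M_g
```

Here f(g(x)) = g + g² = x + x² + 2x³ + 4x⁴ + 4x⁶, and the two matrices agree entry by entry.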

And now the punchline: this can be used to write down explicitly the unique real (or even complex) iterates of the function f. Because
M<sub>f<sup>n</sup></sub> = M<sub>f</sub><sup>n</sup>, we can carry this over to real composition exponents s (here f<sup>n</sup> never denotes the nth power but always the nth iterate of f):

f<sup>s</sup><sub>n</sub> = (M<sub>f</sub><sup>s</sup>)<sub>1,n</sub> = (s over 1)(M<sub>f</sub>-I)<sub>1,n</sub> + (s over 2)(M<sub>f</sub>-I)<sup>2</sup><sub>1,n</sub> + ... + (s over n-1)(M<sub>f</sub>-I)<sup>n-1</sup><sub>1,n</sub>

We can even expand the powers (M<sub>f</sub>-I)<sup>k</sup> again, which leads to a strange but surprising formula

f<sup>s</sup><sub>n</sub> = sum<sub>i=0...n-1</sub> (-1)<sup>n-1-i</sup> (s over i) (s-1-i over n-1-i) f<sup>i</sup><sub>n</sub>

One can indeed show that f<sup>s</sup> (if it converges) is the unique solution, continuous in s and analytic in x, of the standard iteration problem:
f<sup>1</sup> = f
f<sup>s+t</sup>(x)=f<sup>s</sup>(f<sup>t</sup>(x))
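The half-iterate of a concrete function makes a good test of this machinery. The sketch below (hypothetical helper names; it assumes f<sub>1</sub> = 1, so that M<sub>f</sub> - I is nilpotent and the binomial sum is finite) uses f(x) = x/(1-x), whose iterates have the closed form f<sup>s</sup>(x) = x/(1-sx):

```python
import numpy as np

def series_mul(f, g):
    """Truncated Cauchy product of two equal-length coefficient lists."""
    n = len(f)
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(n)]

def carleman(coeffs, N):
    """N x N truncation of M_f: row n holds the coefficients of x^1..x^N
    in f(x)^n.  `coeffs` lists f_1, f_2, ... (f_0 is assumed to be 0)."""
    f = ([0.0] + list(coeffs) + [0.0] * N)[:N + 1]
    M = np.zeros((N, N))
    power = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        power = series_mul(power, f)
        M[n - 1, :] = power[1:]
    return M

def binom(s, k):
    """Generalized binomial coefficient s(s-1)...(s-k+1)/k!"""
    out = 1.0
    for j in range(k):
        out *= (s - j) / (j + 1)
    return out

def iterate_coeffs(coeffs, s, N):
    """Coefficients of the s-th iterate f^s, read off the first row of
    M_f^s = sum_k (s over k) (M_f - I)^k.  With f_1 = 1 the matrix
    M_f - I is strictly triangular, hence nilpotent, so the sum is finite."""
    D = carleman(coeffs, N) - np.eye(N)
    out, term = np.zeros((N, N)), np.eye(N)
    for k in range(N):
        out = out + binom(s, k) * term
        term = term @ D
    return list(out[0, :])

# f(x) = x/(1-x) = x + x^2 + x^3 + ...: iterates are f^s(x) = x/(1-sx)
N = 6
half = iterate_coeffs([1.0] * N, 0.5, N)
```

The computed coefficients of f<sup>1/2</sup> come out as s<sup>n-1</sup> with s = 1/2, i.e. the series of x/(1 - x/2), matching the closed form.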

On the other hand, the iterates of e<sup>x</sup>-1, for example, converge only for integer iteration exponents. On yet another hand, for real analytic functions with fixed point 0 there is always, for each iterate, an analytic function that approximates it at 0.

Anyone interested in this topic should take a look at the papers of Jabotinsky, especially Analytic iteration, Trans. Amer. Math. Soc. 108 (1963), 457-477. He defines there an iteration logarithm L with the properties L(f o g) = L(f) + L(g) and L(f<sup>s</sup>) = sL(f), given by L(f) = df<sup>s</sup>/ds|<sub>s=0</sub>. All of these properties hold for the ordinary logarithm in an analogous way (with f playing the role of x and f<sup>s</sup> the role of x<sup>s</sup>). Even the definition resembles that of the ordinary logarithm:

L(f)<sub>n</sub> = sum<sub>i=1...n-1</sub> (-1)<sup>i+1</sup>/i f<sup>i</sup><sub>n</sub>

And if this logarithm is analytic, then we know that f<sup>s</sup> is analytic for each s.
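For what it's worth, the iteration logarithm can also be computed as the first row of the matrix logarithm log(M<sub>f</sub>) = sum<sub>k≥1</sub> (-1)<sup>k+1</sup>/k (M<sub>f</sub>-I)<sup>k</sup>. A sketch under the same f<sub>1</sub> = 1 nilpotency assumption as before (this matrix-log route is used here rather than the quoted coefficient formula; helper names are made up):

```python
import numpy as np

def series_mul(f, g):
    """Truncated Cauchy product of two equal-length coefficient lists."""
    n = len(f)
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(n)]

def carleman(coeffs, N):
    """N x N truncation of M_f: row n holds the coefficients of x^1..x^N
    in f(x)^n (f_0 assumed 0)."""
    f = ([0.0] + list(coeffs) + [0.0] * N)[:N + 1]
    M = np.zeros((N, N))
    power = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        power = series_mul(power, f)
        M[n - 1, :] = power[1:]
    return M

def iter_log(coeffs, N):
    """First row of log(M_f) = sum_{k>=1} (-1)^(k+1)/k (M_f - I)^k.
    With f_1 = 1 the matrix M_f - I is nilpotent, so the sum is finite."""
    D = carleman(coeffs, N) - np.eye(N)
    out, term = np.zeros((N, N)), D.copy()
    for k in range(1, N):
        out = out + ((-1.0) ** (k + 1) / k) * term
        term = term @ D
    return list(out[0, :])

# f(x) = x/(1-x) generates the flow f^s(x) = x/(1-sx), so
# L(f) = d f^s/ds at s=0 should be x^2: coefficients 0, 1, 0, 0, ...
N = 6
L = iter_log([1.0] * N, N)
```

Because M<sub>f o f</sub> = M<sub>f</sub><sup>2</sup>, the matrix logarithm automatically satisfies L(f o f) = 2L(f) for these triangular unipotent matrices.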
bo198214
Tetronian
 
Posts: 692
Joined: Tue Dec 06, 2005 11:03 pm
Location: Berlin - Germany

Postby quickfur » Wed Nov 15, 2006 4:45 pm

bo198214 wrote:I recently looked at the reverse direction, i.e. defining A<sup>s</sup>.
We know Newton's binomial formula (which PWrong already mentioned in this forum):

(x+1)<sup>s</sup> = 1 + (s over 1) x<sup>1</sup> + (s over 2) x<sup>2</sup> + ....

where (s over k) is defined as s(s-1)....(s-k+1)/k!.

For natural s = n this reduces to the usual binomial formula, because (n over k) is 0 for k > n (the factor (n-n) then occurs in the product n(n-1)....(n-k+1)).

We can define this in the same way for matrices and then have the usual
A<sup>s+t</sup> = A<sup>s</sup>A<sup>t</sup> and A<sup>st</sup> = (A<sup>s</sup>)<sup>t</sup>, though I am not sure about the convergence of the infinite matrix sum.

This is very interesting. I wish I had taken linear algebra more seriously when I was still in school... nowadays I remember the intuitive interpretations of the stuff, but I can't actually do any of it myself anymore.

What is even more striking to me is that we can use this technique to define the composition of analytic functions.

Now, this is an area that I'm deeply interested in.

[...]But anyone who has looked at Faà di Bruno's formula for the coefficients of the composition (f o g)(x) := f(g(x)) will be terrified. (Note that in the given link f<sub>i</sub> = f<sup>(i)</sup>(0)/i! and f<sup>(k)</sup>(g(0)) = f<sup>(k)</sup>(0).)

This is where these matrices can help us. Simply consider the (infinite) matrix whose nth row holds the coefficients of the nth power of f (where f<sub>0</sub> is assumed to be 0), and denote it M<sub>f</sub>. Then
M<sub>f o g</sub> = M<sub>f</sub> M<sub>g</sub>
The matrices M<sub>f</sub> are triangular, so actually carrying out the matrix multiplication does not involve infinite sums. The first row of the matrix holds the coefficients of the function f.

And now the punchline: this can be used to write down explicitly the unique real (or even complex) iterates of the function f. Because
M<sub>f<sup>n</sup></sub> = M<sub>f</sub><sup>n</sup>, we can carry this over to real composition exponents s (here f<sup>n</sup> never denotes the nth power but always the nth iterate of f):

f<sup>s</sup><sub>n</sub> = (M<sub>f</sub><sup>s</sup>)<sub>1,n</sub> = (s over 1)(M<sub>f</sub>-I)<sub>1,n</sub> + (s over 2)(M<sub>f</sub>-I)<sup>2</sup><sub>1,n</sub> + ... + (s over n-1)(M<sub>f</sub>-I)<sup>n-1</sup><sub>1,n</sub>

Genius! This is absolutely cool.

We can even expand the powers (M<sub>f</sub>-I)<sup>k</sup> again, which leads to a strange but surprising formula

f<sup>s</sup><sub>n</sub> = sum<sub>i=0...n-1</sub> (-1)<sup>n-1-i</sup> (s over i) (s-1-i over n-1-i) f<sup>i</sup><sub>n</sub>

One can indeed show that f<sup>s</sup> (if it converges) is the unique solution, continuous in s and analytic in x, of the standard iteration problem:
f<sup>1</sup> = f
f<sup>s+t</sup>(x)=f<sup>s</sup>(f<sup>t</sup>(x))

This is cool... have you by any chance read Abel's work on iterative functions? Basically, he did much research on solving various problems arising in the area of function iterations and orbits (in the sense of functional orbits, not planetary :)).

On the other hand, the iterates of e<sup>x</sup>-1, for example, converge only for integer iteration exponents. On yet another hand, for real analytic functions with fixed point 0 there is always, for each iterate, an analytic function that approximates it at 0.

Anyone interested in this topic should take a look at the papers of Jabotinsky, especially Analytic iteration, Trans. Amer. Math. Soc. 108 (1963), 457-477. He defines there an iteration logarithm L with the properties L(f o g) = L(f) + L(g) and L(f<sup>s</sup>) = sL(f), given by L(f) = df<sup>s</sup>/ds|<sub>s=0</sub>. All of these properties hold for the ordinary logarithm in an analogous way (with f playing the role of x and f<sup>s</sup> the role of x<sup>s</sup>). Even the definition resembles that of the ordinary logarithm:

Whoa. That is incredible.

For a long time, I've been interested in generalizing the exponentiation function to tetration (or "power towers" as some people call them, e.g., 2<tet>n = 2<sup>2<sup>2<sup>... (n times)</sup></sup></sup>). This came up in my exploration into infinitesimals and transfinite real numbers as representing the behaviour of univariate functions at x=infinity.

We all know that e<sup>x</sup> grows faster than x<sup>r</sup> for all real numbers r. Similarly, it can easily be seen that the tetration function will grow faster than e<sup>x</sup> (or any finite composition of the exponential function, including such beasts as e<sup>e<sup>e<sup>x</sup></sup></sup>). What is of interest is what lies "in between" these "levels" of growth rate. For example, it is obvious that e<sup>sqrt(x)</sup> grows slower than e<sup>x</sup>, although e<sup>sqrt(x)</sup> still grows faster than x<sup>r</sup> for all r. What isn't quite as obvious is whether there exists a function f(x) such that f(x) grows faster than x<sup>r</sup> for all r, and f(x) grows slower than e<sup>x<sup>s</sup></sup> for all s (including such things as e<sup>x<sup>1/1000000</sup></sup>). In fact, the answer is yes, there are such functions, and they have a closed-form representation. What they are is left as an exercise for the reader. ;)

Of course, having found answers to such interesting questions as this, I wanted to look at the tetration function, and explore what would lie "between" the tetration function and the exponentials. One point of interest is that, according to my suspicions, tet(ln x) would actually grow faster than any finite iterated exponential (e<sup>e<sup>...<sup>x</sup></sup></sup>). This may seem a bit counterintuitive, until one realizes just how incredibly fast the tetration function grows. It is so fast that even a logarithm, which is slower than all increasing polynomials, can't slow it down enough to get back to the level of iterated exponentials.
quickfur
Pentonian
 
Posts: 3025
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

Postby bo198214 » Wed Nov 15, 2006 8:12 pm

quickfur wrote:have you by any chance read Abel's work on iterative functions?
No, not in the original; I only know how to reduce the iteration question to finding an Abel function.

This came up in my exploration into infinitesimals and transfinite real numbers as representing the behaviour of univariate functions at x=infinity.

Ooh, this is an unusual approach to tetration ...

Of course, having found answers to such interesting questions as this, I wanted to look at the tetration function, and explore what would lie "between" the tetration function and the exponentials. One point of interest is that, according to my suspicions, tet(ln x) would actually grow faster than any finite iterated exponential (e<sup>e<sup>...<sup>x</sup></sup></sup>).

What do you mean by tet(ln x)? That, for example, (ln x)<sup>ln x<sup>ln x</sup></sup> grows faster than e<sup>e<sup>e<sup>x</sup></sup></sup> (i.e. the same tower height)? Or that already (ln x)<sup>ln x</sup> grows faster than every e^e^e....^x (which I doubt)? I mean, (ln x)<sup>ln x</sup> already grows slower than e<sup>x</sup>!
bo198214
Tetronian
 
Posts: 692
Joined: Tue Dec 06, 2005 11:03 pm
Location: Berlin - Germany

Postby quickfur » Sun Nov 19, 2006 10:22 pm

bo198214 wrote:
quickfur wrote:have you by any chance read Abel's work on iterative functions?
No, not in the original; I only know how to reduce the iteration question to finding an Abel function.

This came up in my exploration into infinitesimals and transfinite real numbers as representing the behaviour of univariate functions at x=infinity.

Ooh, this is an unusual approach to tetration ...

Well, I wasn't looking for tetration when looking into transfinite real numbers... it just came up naturally after I had first established that the growth of polynomial functions may be represented by real numbers, and the growth of exponential functions by transfinite numbers. The next obvious question is: is there anything beyond exponentiation? And that immediately leads to the idea of tetration (and beyond).

Of course, having found answers to such interesting questions as this, I wanted to look at the tetration function, and explore what would lie "between" the tetration function and the exponentials. One point of interest is that, according to my suspicions, tet(ln x) would actually grow faster than any finite iterated exponential (e<sup>e<sup>...<sup>x</sup></sup></sup>).

What do you mean by tet(ln x)? That, for example, (ln x)<sup>ln x<sup>ln x</sup></sup> grows faster than e<sup>e<sup>e<sup>x</sup></sup></sup> (i.e. the same tower height)? Or that already (ln x)<sup>ln x</sup> grows faster than every e^e^e....^x (which I doubt)? I mean, (ln x)<sup>ln x</sup> already grows slower than e<sup>x</sup>!

tet(ln x) = x^x^x^...(ln x times). It is relatively easy to see that since ln x is unbounded, this will eventually surpass e<sup>x</sup>. I'm still unsure about whether it will surpass anything beyond that, such as e<sup>e<sup>x</sup></sup>, but my suspicion is that it will.
quickfur
Pentonian
 
Posts: 3025
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

Postby bo198214 » Mon Nov 20, 2006 12:01 am

quickfur wrote:tet(ln x) = x^x^x^...(ln x times). It is relatively easy to see that since ln x is unbounded, this will eventually surpass e<sup>x</sup>. I'm still unsure about whether it will surpass anything beyond that, such as e<sup>e<sup>x</sup></sup>, but my suspicion is that it will.


Say tl(x) = x^x^....^x, with ln(x) copies of x.
Then tl(e<sup>x</sup>) = e<sup>x</sup> ^ e<sup>x</sup> ^ .... ^ e<sup>x</sup>, with x copies.
If you want to compare tl(x) with, say, e^e^x, you simply need to compare tl(e^x) with e^e^e^x. But by the expression above, tl(e^x) surpasses every e^....^e^x as x goes to infinity. And so tl(x) also surpasses every e^....^e^x.
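The comparison can be checked numerically for one modest value by working with logarithms so that nothing overflows (a sketch; log_tower is a made-up helper, and only tower height 3 is tested):

```python
import math

def log_tower(b, height):
    """Natural log of the power tower b^b^...^b with `height` copies of b.
    Uses log T_h = T_(h-1) * log b, valid when every level except the
    topmost fits in a float (true for the small height used below)."""
    t = b                       # T_1
    for _ in range(height - 2):
        t = b ** t              # T_2 ... T_(height-1)
    return t * math.log(b)      # log T_height

x = 3.0
b = math.exp(x)                      # e^x
log_lhs = log_tower(b, 3)            # log of tl(e^x): tower of e^x, height x = 3
log_rhs = math.exp(math.exp(x))      # log of e^e^e^x is e^e^x
surpasses = log_lhs > log_rhs
```

Already at x = 3, log tl(e<sup>x</sup>) is around 10<sup>26</sup> while log e<sup>e<sup>e<sup>x</sup></sup></sup> = e<sup>e<sup>x</sup></sup> is only around 5·10<sup>8</sup>.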
bo198214
Tetronian
 
Posts: 692
Joined: Tue Dec 06, 2005 11:03 pm
Location: Berlin - Germany

Postby quickfur » Mon Nov 20, 2006 1:12 am

bo198214 wrote:
quickfur wrote:tet(ln x) = x^x^x^...(ln x times). It is relatively easy to see that since ln x is unbounded, this will eventually surpass e<sup>x</sup>. I'm still unsure about whether it will surpass anything beyond that, such as e<sup>e<sup>x</sup></sup>, but my suspicion is that it will.


Say tl(x) = x^x^....^x, with ln(x) copies of x.
Then tl(e<sup>x</sup>) = e<sup>x</sup> ^ e<sup>x</sup> ^ .... ^ e<sup>x</sup>, with x copies.
If you want to compare tl(x) with, say, e^e^x, you simply need to compare tl(e^x) with e^e^e^x. But by the expression above, tl(e^x) surpasses every e^....^e^x as x goes to infinity. And so tl(x) also surpasses every e^....^e^x.

Cool. See, I made that statement as a guess based on the system of magnitudes I've developed (see my post in the other thread), and it turns out to be right. So I've done something right. :)
quickfur
Pentonian
 
Posts: 3025
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

