I recently discovered that it also works the other way around, i.e. that this can be used to define A<sup>s</sup>.
We know the Newton formula (that PWrong mentioned already in this forum):
(x+1)<sup>s</sup> = 1 + (s over 1) x<sup>1</sup> + (s over 2) x<sup>2</sup> + ....
where (s over k) is defined as s(s-1)....(s-k+1)/k!
for natural s=n we get the usual binomial formula, because (n over k) is 0 for k>n (the factor (n-n)=0 then occurs in the product n(n-1)....(n-k+1)).
We can define this in a similar way for matrices, setting A<sup>s</sup> = I + (s over 1) (A-I) + (s over 2) (A-I)<sup>2</sup> + ..., and we then have the usual laws
A<sup>s+t</sup>=A<sup>s</sup> A<sup>t</sup> and A<sup>st</sup> = (A<sup>s</sup>)<sup>t</sup>. Though I am not sure about the convergence of the infinite matrix sum.
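As a little numerical sketch (my own construction, with an illustratively chosen matrix whose eigenvalues are close to 1 so that the Newton series converges):

```python
# Sketch: fractional matrix power via the Newton binomial series
# A^s = sum_k (s over k) (A - I)^k.
import numpy as np

def binom(s, k):
    """Generalized binomial coefficient s(s-1)...(s-k+1)/k!."""
    out = 1.0
    for j in range(k):
        out *= (s - j) / (j + 1)
    return out

def matrix_power_series(A, s, terms=60):
    """Approximate A^s by the truncated Newton series."""
    n = A.shape[0]
    B = A - np.eye(n)              # plays the role of x in (1+x)^s
    term = np.eye(n)               # (A - I)^0
    total = np.zeros_like(A, dtype=float)
    for k in range(terms):
        total += binom(s, k) * term
        term = term @ B
    return total

A = np.array([[1.2, 0.1], [0.0, 0.9]])    # eigenvalues near 1
half = matrix_power_series(A, 0.5)
print(np.allclose(half @ half, A))        # A^(1/2) A^(1/2) = A
```

For matrices far from the identity the series need not converge, which matches the convergence worry above.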
What is even more striking to me is that we can use this technique to define the composition of analytic functions. Recall that a (real) analytic function is a function whose Taylor series has a positive radius of convergence and is equal to the function within this radius. We denote the coefficients of the Taylor series of f by f<sub>i</sub>, i.e.
f(x) = f<sub>0</sub> + f<sub>1</sub> x + f<sub>2</sub> x<sup>2</sup> + ...
Now it is simple to derive formulas for the coefficients of f+g and f*g, namely
(f+g)<sub>i</sub> = f<sub>i</sub> + g<sub>i</sub>
(fg)<sub>n</sub> = f<sub>0</sub>g<sub>n</sub> + ... + f<sub>i</sub>g<sub>n-i</sub> + ... + f<sub>n</sub>g<sub>0</sub>
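In code (my own quick illustration) these two rules are just elementwise addition and the Cauchy convolution on truncated coefficient lists:

```python
# Sketch: sum and product rules on truncated Taylor coefficient
# lists [f0, f1, f2, ...].

def add(f, g):
    return [a + b for a, b in zip(f, g)]

def mul(f, g):
    """Cauchy product: (fg)_n = sum_i f_i g_(n-i)."""
    n = len(f)
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(n)]

# f(x) = 1/(1-x) = 1 + x + x^2 + ...,  f*f = 1 + 2x + 3x^2 + ...
f = [1, 1, 1, 1]
print(mul(f, f))   # [1, 2, 3, 4]
```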
But anyone who has already looked at (Faà di Bruno's) formula for the function composition (f o g)(x) := f(g(x)) will be terrified. (Note that in the given link f<sub>i</sub> = f<sup>(i)</sup>(0)/i! and, because g(0)=0, f<sup>(k)</sup>(g(0)) = f<sup>(k)</sup>(0).)
And here these matrices can help us. Simply consider the (infinite) matrix whose nth row holds the coefficients of the nth power of f (where f<sub>0</sub> is assumed to be 0), and denote it by M<sub>f</sub>. Then
M<sub>f o g</sub> = M<sub>f</sub> M<sub>g</sub>
The matrices M<sub>f</sub> are triangular, so the actual execution of the matrix multiplication does not involve infinite sums. The first row of the matrix contains the coefficients of the function f itself.
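Here is a small numerical check of M<sub>f o g</sub> = M<sub>f</sub> M<sub>g</sub> (my own sketch, with hypothetically chosen example functions and a truncation order N):

```python
# Sketch: the truncated coefficient matrix M_f whose nth row holds
# the Taylor coefficients of f(x)^n (ordinary powers), for f with f_0 = 0.
# Composition then becomes matrix multiplication: M_{f o g} = M_f M_g.
import numpy as np

N = 6  # truncation order: keep coefficients of x^1 ... x^N

def mul_trunc(a, b):
    """Cauchy product of coefficient lists a[0..N] (index = power of x)."""
    c = [0.0] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            c[i + j] += a[i] * b[j]
    return c

def carleman(f):
    """Rows 1..N of M_f: row n = coefficients of x^1..x^N in f(x)^n."""
    M = np.zeros((N, N))
    p = [0.0] * (N + 1); p[0] = 1.0      # f^0 = 1
    for n in range(1, N + 1):
        p = mul_trunc(p, f)
        M[n - 1, :] = p[1:]
    return M

def compose(f, g):
    """Coefficients of f(g(x)) up to x^N; f, g have zero constant term."""
    out = [0.0] * (N + 1)
    p = [0.0] * (N + 1); p[0] = 1.0
    for n in range(1, N + 1):
        p = mul_trunc(p, g)              # p = g(x)^n
        for m in range(N + 1):
            out[m] += f[n] * p[m]
    return out

f = [0.0, 1.0, 1.0] + [0.0] * (N - 2)       # f(x) = x + x^2
g = [0.0, 1.0, 0.0, 2.0] + [0.0] * (N - 3)  # g(x) = x + 2x^3
print(np.allclose(carleman(compose(f, g)), carleman(f) @ carleman(g)))
```

Because f<sub>0</sub> = 0, row n starts only at column n, so the truncated matrices already multiply exactly.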
And now the hammer: this can be used to give explicit unique real (or even complex) iterates of the function f. Because
M<sub>f<sup>n</sup></sub> = M<sub>f</sub><sup>n</sup>, we can carry this over to real composition exponents s (f<sup>n</sup> here never denotes the nth power but always the nth iterate of f).
f<sup>s</sup><sub>n</sub> = (M<sub>f</sub><sup>s</sup>)<sub>1,n</sub> = (s over 1)(M<sub>f</sub>-I)<sub>1,n</sub> + (s over 2)((M<sub>f</sub>-I)<sup>2</sup>)<sub>1,n</sub> + ... + (s over n-1)((M<sub>f</sub>-I)<sup>n-1</sup>)<sub>1,n</sub> for n > 1 (assuming f<sub>1</sub> = 1, so that M<sub>f</sub>-I is strictly triangular and higher powers contribute nothing to the (1,n) entry)
We can even expand the powers (M<sub>f</sub>-I)<sup>k</sup> again, which then leads to a strange but surprising formula
f<sup>s</sup><sub>n</sub> = sum<sub>i=0...n-1</sub> (-1)<sup>n-1-i</sup> (s over i) (s-1-i over n-1-i) f<sup>i</sup><sub>n</sub>
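A sketch to check this formula numerically (my own construction; it assumes f<sub>1</sub> = 1 and uses f(x) = x + x<sup>2</sup> as a hypothetical example, whose half iterate starts x + x<sup>2</sup>/2 - x<sup>3</sup>/4 + ...):

```python
# Sketch: compute the half iterate h = f^(1/2) of f(x) = x + x^2 via
# f^s_n = sum_i (-1)^(n-1-i) (s over i) (s-1-i over n-1-i) f^i_n,
# where f^i means the ith ITERATE, and verify h(h(x)) = f(x) up to x^N.
N = 6  # coefficients of x^0 .. x^N

def binom(s, k):
    """Generalized binomial coefficient s(s-1)...(s-k+1)/k!."""
    out = 1.0
    for j in range(k):
        out *= (s - j) / (j + 1)
    return out

def compose(f, g):
    """Coefficients of f(g(x)) up to x^N; f, g have zero constant term."""
    out = [0.0] * (N + 1)
    p = [0.0] * (N + 1); p[0] = 1.0
    for n in range(1, N + 1):
        q = [0.0] * (N + 1)              # q = p * g, truncated at x^N
        for i in range(N + 1):
            for j in range(N + 1 - i):
                q[i + j] += p[i] * g[j]
        p = q
        for m in range(N + 1):
            out[m] += f[n] * p[m]
    return out

f = [0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]   # f(x) = x + x^2

# natural iterates f^0 = id, f^1 = f, ..., f^(N-1)
its = [[0.0, 1.0] + [0.0] * (N - 1)]
for _ in range(N - 1):
    its.append(compose(f, its[-1]))

def iterate_coeffs(s):
    """Coefficients of the iterate f^s via the explicit binomial sum."""
    c = [0.0, 1.0] + [0.0] * (N - 1)
    for n in range(2, N + 1):
        c[n] = sum((-1) ** (n - 1 - i) * binom(s, i)
                   * binom(s - 1 - i, n - 1 - i) * its[i][n]
                   for i in range(n))
    return c

h = iterate_coeffs(0.5)
hh = compose(h, h)
print(max(abs(hh[n] - f[n]) for n in range(N + 1)) < 1e-9)
```

Setting s to an integer reproduces the natural iterates, as one can also check against the list `its` above.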
One can indeed show that f<sup>s</sup> (if it converges) is the only solution, continuous in s and analytic in x, of the standard iteration problem:
f<sup>1</sup> = f
f<sup>s+t</sup>(x)=f<sup>s</sup>(f<sup>t</sup>(x))
On the other hand, for example, the iterates of e<sup>x</sup>-1 converge only for integer iteration exponents. Yet for real analytic functions with fixed point 0 there is always, for each iterate, an analytic function that approximates it at 0.
Whoever is interested in this topic should take a look at the papers of Jabotinsky, especially Analytic iteration, Trans. Amer. Math. Soc. 108 (1963), 457-477. He defines there an iteration logarithm L, which has the properties L(f o g) = L(f) + L(g) and L(f<sup>s</sup>) = s L(f), and is defined as L(f) = df<sup>s</sup>/ds|<sub>s=0</sub>; all these properties hold for the normal logarithm in a similar way (where f plays the role of x and f<sup>s</sup> of x<sup>s</sup>). Even the definition is similar to the normal logarithm:
L(f)<sub>n</sub> = sum<sub>i=1...n-1</sub> (-1)<sup>i+1</sup>/i (n-1 over i) f<sup>i</sup><sub>n</sub>
And if this logarithm is analytic, then we already know that f<sup>s</sup> is analytic for each s.
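Equivalently, L is the matrix logarithm of M<sub>f</sub>: since M<sub>f</sub>-I is strictly triangular (for f<sub>1</sub> = 1), the logarithm series is a finite sum in each entry. A sketch of my own to check this on the example f(x) = x + x<sup>2</sup>:

```python
# Sketch: iteration logarithm as the matrix logarithm of M_f,
# L(f)_n = (log M_f)_{1,n} with log M = sum_{i>=1} (-1)^(i+1)/i (M-I)^i,
# a finite sum because M_f - I is nilpotent when f_1 = 1.
import numpy as np

N = 5

def carleman(f):
    """Rows 1..N: coefficients of x^1..x^N in f(x)^n, for f with f_0 = 0."""
    M = np.zeros((N, N))
    p = [0.0] * (N + 1); p[0] = 1.0
    for n in range(1, N + 1):
        q = [0.0] * (N + 1)
        for i in range(N + 1):
            for j in range(N + 1 - i):
                q[i + j] += p[i] * f[j]
        p = q
        M[n - 1, :] = p[1:]
    return M

def iter_log(f):
    """First row of log M_f, i.e. L(f)_1 ... L(f)_N."""
    M = carleman(f)
    B = M - np.eye(N)          # strictly triangular, so B^N = 0
    term = np.eye(N)
    total = np.zeros((N, N))
    for i in range(1, N):
        term = term @ B
        total += (-1) ** (i + 1) / i * term
    return total[0]

f = [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]   # f(x) = x + x^2
L = iter_log(f)
print(L[:3])   # L(f)_1 = 0, L(f)_2 = 1, L(f)_3 = -1
```

One can also verify L(f o f) = 2 L(f) this way, since M<sub>f</sub> commutes with itself.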