
General Theory Part 3: Cauchy-Riemann equations.

There are many ways to introduce the CR-equations for higher dimensional complex and circular numbers. For example, you could remark that if you have a function, say f(X), defined on a higher dimensional number space, its Jacobian matrix should nicely follow the matrix representation of that particular higher dimensional number space.
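To spell that remark out in the easiest case, the complex plane (a minimal sketch, nothing new here): write f(z) = u(x, y) + i v(x, y), then

\[
J_f = \begin{pmatrix} u_x & u_y \\ v_x & v_y \end{pmatrix}
\qquad \text{while} \qquad
M(a + bi) = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}
\]

is the matrix representation of a complex number. Demanding that the Jacobian has the same shape as M gives u_x = v_y and u_y = -v_x, which are exactly the classical Cauchy-Riemann equations.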
I didn't do that; I tried to formulate what I name the CR-equations chain rule style. A long time ago I read an old text from Riemann (I do not remember any more which text it was) and it turned out he also wrote the equations chain rule style. That was very refreshing to me and it also showed that I am still not 100% crazy…;)
Even if you know nothing or almost nothing about say 3D complex numbers and you only have a bit of math knowledge about the complex plane, the way Riemann wrote it is very easy to understand. Say you have a function f(z) defined on the complex plane and as usual we write z = x + iy for the complex number. Likely you know that the derivative f'(z) is found by partial differentiation with respect to the real variable x. But what happens if you take the partial derivative with respect to the variable y?
That is how Riemann formulated it in that old text: you get f'(z) times i. And that is of course just a simple application of the chain rule that you know from the real line. That is also the way I mostly wrote it, because if you express it only in the diverse partial derivatives it is a lot of work in my LaTeX math typing environment, and for you as a reader it is hard to read and understand what is going on. In the case of 3D complex or circular numbers you already have 9 partial derivatives that fall apart into three groups of three derivatives each.
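Written out in LaTeX, the chain rule formulation on the complex plane is nothing more than this (with z = x + iy):

\[
\frac{\partial f}{\partial x} = f'(z)\,\frac{\partial z}{\partial x} = f'(z),
\qquad
\frac{\partial f}{\partial y} = f'(z)\,\frac{\partial z}{\partial y} = i\,f'(z).
\]

For the 3D numbers the sketch is completely analogous: with X = x + yj + zj^2 you get \(\partial f/\partial x = f'(X)\), \(\partial f/\partial y = j\,f'(X)\) and \(\partial f/\partial z = j^2\,f'(X)\), and writing these out in coordinates gives you the three groups of three partial derivatives; the precise formulation is in the pictures below.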
In this post I tried much more to hang on to how differentiation was originally formulated. Of course I don't do it the way Newton and Leibniz did it with infinitesimals and so on, but with a good old limit.
And in order to formulate it with limits I constantly need to divide by vectors from higher dimensional real spaces like 3D, 4D or, now in the general case, n-dimensional numbers. That should serve as an antidote to what a lot of math professors think: you cannot divide by a vector.
Well, maybe they can't, but I can and I am very satisfied with it. Apparently for the math professors it is too difficult to define multiplications on higher dimensional spaces that do the trick. (Don't try to do that with, say, Clifford algebras; they are indeed higher dimensional but, as always, the professional math professors turn the stuff into crap and indeed on Clifford algebras you cannot divide most of the time.)
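To make the claim that you can divide by a vector a bit more concrete, here is a minimal sketch in Python with numpy. It assumes the 3D complex numbers with j^3 = -1 and their matrix representation; the names matrix_rep and divide are just labels I picked for this sketch.

```python
import numpy as np

def matrix_rep(x):
    """Matrix representation of the 3D complex number a + b*j + c*j^2 (with j^3 = -1).
    The columns are X*1, X*j and X*j^2 written on the basis (1, j, j^2)."""
    a, b, c = x
    return np.array([[a, -c, -b],
                     [b,  a, -c],
                     [c,  b,  a]], dtype=float)

def divide(y, x):
    """Compute Y / X as M(X)^{-1} Y; this works whenever det M(X) != 0, i.e. X is invertible."""
    return np.linalg.solve(matrix_rep(x), np.asarray(y, dtype=float))

# Example: (1 + j) / j should be 1 - j^2, because (1 - j^2) * j = j - j^3 = j + 1.
print(divide([1.0, 1.0, 0.0], [0.0, 1.0, 0.0]))   # prints [ 1.  0. -1.]
```

For the circular multiplication (j^3 = +1) you only have to drop the minus signs in the matrix.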

Maybe I should have given more examples or worked them out a bit more, but the text was already rather long. It is six pictures and the picture size is 550×1100 pixels, so that is relatively long, but I used a somewhat larger font so it should read a bit faster.

Of course the most important feature of the CR-equations is that if a function defined on a higher dimensional space obeys them, you can differentiate just like you do on the real line. Just like we say that on the complex plane the derivative of f(z) = z^2 is given by f'(z) = 2z. Basically all functions that are analytic on the real line can be expanded into arbitrary dimension; for example the sine and cosine functions live in every dimension. Not that math professors have more than an infinitesimal amount of interest in stuff like that, but I like it.
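As a small illustration of that (again only a sketch, it reuses the matrix_rep function from the code above and needs scipy): the matrix representation respects the multiplication, so the matrix exponential of M(X) is again the representation of a 3D number, and its first column holds the coefficients of e^X on the basis (1, j, j^2).

```python
from scipy.linalg import expm

def exp3d(x):
    """Exponential of a 3D complex number, read off from the matrix exponential
    of its matrix representation (first column = coefficients on 1, j, j^2)."""
    return expm(matrix_rep(x))[:, 0]

print(exp3d([0.0, 1.0, 0.0]))   # e^j as a 3D complex number
```

The same trick works for the sine and the cosine, or you can simply plug the 3D number into the power series.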
Here are the six pictures that compose this post. I hope it is comprehensible enough and more or less typo free:

Ok that was it, thanks for your attention and I hope that at some point in your future life this kind of math has some value for you.

General theory Part 2: On a matrix named big E.

On this entire website, whenever I talked about a matrix representation it was always meant as a representation that mimics the multiplication on a particular space, such as the 3D complex numbers. Making such matrices has the benefit that you can apply all kinds of linear algebra, like matrix diagonalization or finding eigenvalues (the eigenvalue functions) and so on and so on. So the matrix representation was always the representation of a higher dimensional number.
Big E is very different: this matrix describes the multiplication itself. As such it contains all possible products of two basis vectors, and since this is supposed to be general theory I wrote it in the form of an n×n matrix. For people who like writing computer code: if you can implement this thing properly you can make all kinds of changes to the multiplication. As a matter of fact you can choose whatever you want the product of two basis vectors to be. So in that sense it is much more general than just the complex or the circular multiplication.
I do not like writing computer code that much myself, but I can perfectly understand people who like to write it. After all, every now and then even I use programs like PARI, and without people who like to write code such free programs would simply not exist.
The math in this post is highly descriptive; it is the kind of math that I do not like most of the time, but now that I have finally written this matrix down it was fun to do. If you are just interested in some fixed number space, say the 3D or 4D complex numbers, this concept of big E is not very useful. It is handy when you want to compare a lot of different multiplications in the same dimension, and as such it could be a tool that comes in handy.

The entries of this matrix big E are the products of two basis vectors, so this is very different from your usual matrix, which often contains only real numbers or, in more advanced cases, complex numbers from the complex plane. I think it could lead to some trouble if you try to write code where the matrix entries are vectors; an alternative would be to represent big E as n square n×n matrices, but that makes it a bit harder to oversee.
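In case you want to try it anyway, below is a minimal sketch in Python with numpy of one way to store big E: as an n×n array of coordinate vectors, so in effect an n×n×n array of real numbers (more or less the "n square matrices" alternative). The names make_E_3d_complex and multiply are only labels for this sketch and I use the 3D complex multiplication (j^3 = -1) as the example.

```python
import numpy as np

def make_E_3d_complex():
    """Big E for the 3D complex numbers: basis (1, j, j^2) with j^3 = -1.
    E[i, j] is the coordinate vector of the product e_i * e_j."""
    n = 3
    E = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            k = i + j                       # e_i * e_j = j^(i+j)
            sign = 1.0 if k < n else -1.0   # j^3 = -1 folds the power back with a minus sign
            E[i, j, k % n] = sign
    return E

def multiply(E, x, y):
    """Product of two n-dimensional numbers x and y under the multiplication stored in E."""
    return np.einsum('i,j,ijk->k', np.asarray(x, float), np.asarray(y, float), E)

E = make_E_3d_complex()
print(multiply(E, [0, 1, 0], [0, 1, 0]))   # j * j   = j^2      -> [ 0.  0.  1.]
print(multiply(E, [0, 0, 1], [0, 1, 0]))   # j^2 * j = j^3 = -1 -> [-1.  0.  0.]
```

If you fill the array E differently you get a different multiplication on the same space, which is the whole point of writing big E down.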

Maybe in linear algebra you have seen so called quadratic forms that use a symmetric matrix. They are a way to represent all quadratic polynomials in n variables. Big E looks a lot like that, only now you have vectors as entries.
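To spell the analogy out (just a sketch of notation): a quadratic form with a symmetric matrix A is \(x^T A x = \sum_{i,j} a_{ij} x_i x_j\), while with big E the product of two numbers X and Y looks the same, only the entries of the matrix are vectors:

\[
X * Y \;=\; \Big(\sum_i x_i e_i\Big)\Big(\sum_j y_j e_j\Big) \;=\; \sum_{i,j} x_i y_j\,(e_i e_j) \;=\; x^T E\, y,
\qquad \text{with } E_{ij} = e_i e_j .
\]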

I chose the number 1 to be the very first basis vector, so that gives the real line in that particular space. Of course one of the interesting details is that all analytic functions that you know from the real line can easily be extended to all the other spaces you want. For example the sine and cosine or the exponential function live in all kinds of spaces in all kinds of dimensions. As such it is much, much broader than only a sine on the real line and the complex plane.
This post is five pictures long, each 550×1100 pixels. I made them a bit larger because I use a larger font compared to a lot of old posts. There are hardly any mathematical results in this post because it is so descriptive. Compare it to the definition of what a group is without many examples: the definition is often boring to read and only comes alive when you see good examples of the math involved.

If you want to try it yourself and do a bit of complex analysis on higher dimensional spaces, ensure your big E matrix is symmetric. In that case the multiplication commutes, that is AB = BA always. If you also ensure that all basis vectors are invertible, you can find the so called Cauchy-Riemann equations for your particular multiplication. Once you have your set of CR equations you can differentiate all you want and also define line integrals (a line integral actually goes along a curve, but that does not matter).
A simple counterexample would be the 4D quaternions: they do not commute and as such it is not possible to conduct any meaningful complex analysis on the space of the quaternions.
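To see where it goes wrong for the quaternions: with the basis 1, i, j, k you have

\[
ij = k \qquad \text{but} \qquad ji = -k,
\]

so the entries of big E above and below the diagonal are not the same, the matrix is not symmetric and the multiplication does not commute.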
End of this post and thanks for your attention.