Category Archives: 3D complex numbers

The Fundamental Theorem of Calculus for the 3D Complex Numbers.

Some time after I started writing this post I thought “Hey, why not look up what is more or less officially said about this theorem on the internet”. So I did and found out there are three versions of this theorem that basically say you can do integration with a primitive. That was a tiny surprise to me because the way I remembered it, there was just one. There are two versions on the real line and one for the complex plane. The first real line version says that the integral of a function on the real line equals the difference of the primitive between the start and end point of integration.
The second real line version uses a variable in the endpoint of integration, say x, and defines a primitive F(x) of a function f(x) that way. After that it must be shown that the derivative F'(x) = f(x). And the third version says you can do line integration (or integration along a curve, traditionally named gamma) also using a primitive, but now you must take into account the way differentiation works in the complex plane.

So there is not one such fundamental theorem; the official theory says there are three. Now why three? Very simple: the professional math people know of no other spaces where you can do integration with a primitive. I’ve said it before and repeat it once more: the 4D quaternions are nice things, but when it comes to differentiation and integration it is hard to get a bigger mess of total gibberish. That’s why the professional math professors don’t have such a fundamental theorem for the quaternions.

But for the 3D complex numbers that are the main topic of this website, it can also be done. But hey, it is now the year 2025, this website is almost 10 years old, and on top of that I found the 3D complex numbers back in the year 1990, so why does this theorem only appear in 2025?
Over the years I have always used this kind of integration whenever I needed it. For example the number tau for the 3D complex numbers was calculated the first time by using integration, while I developed the matrix diagonalization methods only later to deal with the problems you get in say five or seven dimensional complex numbers.
For people who do not have it clear what the numbers tau are: they are the logarithms of an imaginary unit. For example the log of the imaginary unit i on the complex plane is i times pi/2, as was already found by the good old Euler. For the 3D complex numbers it is a bit more difficult, but you can indeed find such logs of imaginary units by integrating just the inverse.
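To make that integration idea concrete on the most familiar space, here is a minimal numerical sketch (plain Python with NumPy, my own choice of tooling, not something used in the post itself) that integrates dz/z along the quarter of the unit circle from 1 to i and recovers log i = i pi/2:

    # Minimal sketch: recover log(i) = i*pi/2 by integrating (1/z) dz along
    # a path from 1 to i on the complex plane (quarter of the unit circle).
    import numpy as np

    t = np.linspace(0.0, np.pi / 2, 100001)   # parameter of the quarter arc
    z = np.exp(1j * t)                        # the path z(t) on the unit circle
    dz = np.gradient(z, t)                    # dz/dt along the path

    integral = np.trapz(dz / z, t)            # line integral of (1/z) dz
    print(integral)                           # approximately 1.5707963j, i.e. i*pi/2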

But I always thought there would be some kind of trouble if you integrate inside the subspaces of non-invertible numbers. That is more or less why I never formulated such a fundamental theorem in all those years. I only used integration when I needed it and that was it.

To my excuse there are indeed some subtleties. I once tried to find the primitive of say e^X and yes, no problemo, it is just e^X. But if you calculate the integral with a 3D number X multiplied by that famous number alpha, the primitive changes in a dramatic fashion.
So all those years I thought that even for say the exponential function there was not just one primitive to do all the work. Yet now that I take a deeper look into it, this was all a bit stupid of me.
So it is a lame excuse, but compared to the professional math professors who can’t even find the 3D complex numbers, I shine as the stable genius I am… Ahum, this post is only 9 pictures long and has 3 additional figures and one video about the fundamental theorem on the real line. So all in all there are 12 pictures and 1 video below.

Let’s get this party started:

The next picture is the so called Figure 1 picture and it shows where the determinant is one. On the red colored graph you can do the ‘divide by beta’ thing in the limit for the derivative of a function. But on the space where det(X) = 0 you can’t divide by such a beta, so doesn’t that cause problems with taking the limit? Well no, you can always flip back and forth between the two definitions of taking a derivative shown above.

Figure 1: I am not crazy: Can’t divide by beta? Well, try a multiplication by beta…

And here is the so called Figure 2 picture where I depicted a line segment inside the main plane of non-invertible numbers in the 3D complex numbers. I invite you to think a bit along those lines: does it matter if they are inside or outside the space where det(X) = 0?

I have a link for you to a page on the website of Stephen Wolfram where the three fundamental theorems of calculus are explained.
Fundamental Theorems of Calculus
Now I published the above about 24 hours ago but I had forgotten to place the link to Wolfram. And today I was watching youtube and to my surprise a video from Hannah Fry came floating along that said it was about the fundamental theorem of calculus. It’s all very very basic because Hannah also uses it to craft an introduction to integration using the limit of an elementary Riemann sum. For most readers it is a bit too simple I guess, but in case you hardly know what integration is, it is a very good video.

And for no reason at all I also made a cube with her face on it. We all love Hannah because she is relatively good at popularizing math. And that’s a good thing because it makes the general population a bit less stupid. Anyway that is what you might hope for, but don’t let your hope become too big because the human brain and math are often not a good combination. We’re just a fucking stupid monkey species; ok, we are the smartest monkeys around, but we’re still a monkey species…

This is the end of this post; now we have a fourth so called fundamental theorem of calculus. Let’s leave it at that.

Keep it simple: Take the line integral of the identity in two complex spaces.

This is now post number 278 on this website and to be honest the content of this post should have been here years ago. This post is about complex integration and I more or less compare how you do that in the complex plane against how it is done in 3D space.
Integrating the identity simply means integrating zdz on the complex plane and XdX on the space of 3D complex numbers. Of course I have used complex integration in higher dimensional spaces in the past when I needed it. For example this is how I found my first number tau: on the space of 3D complex numbers you must find the logarithm of j^2 (not j, because j has a determinant of minus one) and I did that with complex integration.
For people who are new to this website: j denotes the imaginary unit in 3D space and its third power equals -1. That mimics the situation in the complex plane where squaring the imaginary unit i gives -1.

All these years I never used complex integration just to find a primitive, so that is done in this post. And since we are integrating zdz we expect to find 0.5z^2 on the complex plane, while on my beloved space of 3D complex numbers the integral of XdX should yield 0.5X^2.
Of course we will find these results because otherwise I would have been very stupid in say the last 15 years. Of course, just like everybody else, I have been very stupid on many occasions on such long timescales, but not when it comes to 3D complex numbers.
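If you want to check the 3D claim numerically, here is a small sketch in Python (my own illustration, using the 3x3 matrix representation of the 3D complex numbers with basis 1, j, j^2 and j^3 = -1, which is just one possible convention). It integrates XdX along a straight segment from A to B; the result should be 0.5(B^2 - A^2):

    # Sketch: line integral of X dX in the 3D complex numbers (j^3 = -1),
    # done in the matrix representation; the answer should be 0.5*(B^2 - A^2).
    import numpy as np

    J = np.array([[0, 0, -1],
                  [1, 0,  0],
                  [0, 1,  0]], dtype=float)     # matrix representation of j

    def rep(x, y, z):
        """Matrix representation of the 3D complex number x + y*j + z*j^2."""
        return x * np.eye(3) + y * J + z * (J @ J)

    A = rep(1.0, 0.0, 0.0)          # start point of the curve
    B = rep(0.5, 2.0, -1.0)         # end point, some arbitrary 3D number

    N = 1000
    t = np.linspace(0.0, 1.0, N + 1)
    total = np.zeros((3, 3))
    for k in range(N):
        mid = 0.5 * (t[k] + t[k + 1])
        X  = A + mid * (B - A)              # midpoint on the straight segment
        dX = (t[k + 1] - t[k]) * (B - A)    # step along the segment
        total += X @ dX                     # accumulate X dX

    print(np.round(total, 6))
    print(np.round(0.5 * (B @ B - A @ A), 6))   # should match the sum above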

Oops, I see I still have to make the seven png pictures but that won’t take very much time. So that’s this math post: 7 pictures, each 550×1500 pixels in size. I hope that after reading it you can also perform complex integration in say the space of 4D complex numbers.
And now that we are talking about 4D numbers: at the end I also included the famous quaternions, and of course if we try to integrate them we get the usual garbage, once more demonstrating that when it comes to differentiation and integration the quaternions are just awful.

That was it for this math post. Likely the next post is another video where the famous Stern-Gerlach experiment is explained. Of course in such videos they never jump to the correct conclusion, which says it is very likely that electron magnetism is monopole in nature. Just like their electric charge by the way; of course for all professional physics persons the electron has to be a tiny magnet. Not that they have much so called ‘five sigma’ experimental evidence for that, but for them this is not a problem…

Ok, let me hit that button ‘publish website’ and may I thank you for your attention.

Integration and the number alpha.

It’s about time to write this post because the pictures were finished a few days back but I was a bit lazy in the meantime. In this post I only evaluate two line integrals, both in some way related to the famous number alpha. And we do it only in 3D space; there are many more numbers alpha on other spaces, but we just do the complex and the circular multiplication in three dimensions.
In this post, when it comes to the number alpha, we mostly need the one property that alpha is its own square. Therefore you can break down all powers of alpha into alpha itself, which is very handy when for example you use a power series. As always X denotes a 3D number and one of the integrals we will look at is the exponential of alpha times X:
exp(alpha * X).
Why do we do this? Well, try to find a primitive of the above exponential. That is a bit hard because another property of alpha is that it has no inverse, so you can’t divide by alpha, and then what to do?
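Here is the heart of the trick as I see it: since alpha is its own square, every power of alpha with exponent at least 1 collapses back to alpha, so the power series gives exp(alpha t) = 1 + alpha(e^t - 1), and that rewriting removes the need to ever divide by alpha. The sketch below only illustrates this collapse with a generic idempotent matrix standing in for alpha (so it demonstrates the mechanism, not the actual 3D number alpha):

    # Sketch: for any idempotent P (P @ P == P), the power series of exp(t*P)
    # collapses to  I + P*(e^t - 1), because P^n = P for every n >= 1.
    import numpy as np
    from scipy.linalg import expm

    P = np.array([[1.0, 0.0],
                  [1.0, 0.0]])               # a generic idempotent, stand-in for alpha
    assert np.allclose(P @ P, P)

    t = 0.7
    lhs = expm(t * P)                        # exp(t*P) computed directly
    rhs = np.eye(2) + P * (np.exp(t) - 1.0)  # the collapsed power series
    print(np.allclose(lhs, rhs))             # True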
The second example is integration of the most standard exponential exp(X) along the main axis of non-invertibles: all real multiples of the number alpha. For me this was a surprising result because it all becomes so much more simple. All in all this post is five pictures long, but they are in the 550×1500 pixel size so they are relatively long. Ok, that is all I had to say, so let me now hang in the five pictures:

That was it for this post, thanks for your attention and maybe see you in a future post.

Seven properties of the number alpha.

A long time ago, around the time I started this website, I had something known as “The seven properties of the number alpha”. But it was all spread out over two websites because until then I wrote the math only on the other website. Over the years I have advocated a few times to look that stuff up as the seven properties of alpha, so I decided to write a new post with the same title. I didn’t copy the old stuff but just made a new version.
The post is amazingly long; a normal small picture has a size of 550×775 pixels and the larger ones I use are up to 550×1100, but now it has grown to 11 pictures of size 550×1500. Likely this is the longest math post I have ever written.
As always I have to leave a lot out, for example doing integration with the involvement of the number alpha is very interesting. During writing I also remembered that e to the power pi times i times alpha, wasn’t that minus one? Yes, but we are not going to do six dimensional numbers or hybrids like the 3D circular numbers with the reals replaced by the 2D complex numbers. Nope, in this post only properties of the 3D numbers alpha: the complex and the circular one.
More or less all the basic stuff is in this post; from simple things like alpha having no inverse to more complicated stuff like the relation with the Laplacian operator. I made a new category for this post, it even has the name “7 Properties of Alpha” so that in the future it will be even easier to find.

Let’s hope it is worth your time and let’s hope you will find it interesting.

Figure 01: Alpha is the center of the complex exponential.
Figure 02: You can write X as the sum of two non-invertible numbers.

Ok, that was a long read. I want to congratulate you on not falling asleep. I have no plans for a next post, although it is tempting to write something about integration along the line of real multiples of the number alpha.

The cousin of the transponent.

Likely in the year 1991 I had figured out that the conjugate of a 3D complex number could be found in the upper row of its matrix representation. As such the matrix representation of a conjugated 3D number was just the transpose of the original matrix representation, just like we have for ordinary complex numbers from the complex plane. And this transpose detail also showed that if you take the conjugate twice you end up where you started from. Math people would say that doing it twice is the identity operation.
But for the two 2D multiplications we have been looking at in the last couple of months, the method of taking the upper row as a conjugate did not work. I had to do a bit of rethinking and it was not that hard to find a better way of defining the conjugate that works on all spaces under study since the year 1991. And that method is: replace all imaginary units by their inverses.
As such we found the conjugate on 2D spaces like the elliptical and hyperbolic complex planes. And the product of a 2D complex number z with its conjugate nicely gives the determinant of the matrix representation. And if you look where this determinant equals one, that nicely gives the complex exponentials on these two spaces: an ellipse and a hyperbola.
Now when I was writing the last math post (that is two posts back because the previous post was about magnetism) I wondered what the matrix representation of the conjugate was on these two complex planes. It could not be the transpose because the conjugates were not the upper rows. And I was curious: if it’s not the transpose, what is it? It had to be something that, if you do it twice, gives the identity operation…

All in all the math in this post is not very deep or complicated, but you must know how to make the conjugate on say the elliptic complex plane. On this plane the imaginary unit i rules the multiplication by i^2 = -1 + i. So you must be able to find the inverse of the imaginary unit i in order to craft the conjugate. On top of that you must be able to make a matrix representation of this particular conjugate. If you can do that, or if you at least understand how it all works without doing it yourself, this post will be an easy read for you.

It turns out that the matrices of the conjugate are not the transpose, where you flip all entries of the matrix across the main diagonal. No, these matrix representations have all their entries mirrored in the center of the matrix, or equivalently all their entries rotated by 180 degrees. That is the main result of this post.
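For the sceptical reader, here is a short symbolic check of that claim (a sketch in Python with SymPy, my own verification, assuming the multiplication rule i^2 = -1 + i and the conjugate being the replacement of i by its inverse 1 - i, as described above):

    # Sketch: on the elliptic plane where i^2 = -1 + i, the conjugate of
    # z = x + i*y is (x + y) - i*y (replace i by its inverse 1 - i).
    # Claim: the matrix representation of the conjugate equals the matrix
    # representation of z with all entries rotated by 180 degrees.
    import sympy as sp

    x, y = sp.symbols('x y', real=True)

    def rep(a, b):
        """Matrix representation of a + i*b under i^2 = -1 + i, basis (1, i)."""
        return sp.Matrix([[a, -b],
                          [b, a + b]])

    M_z    = rep(x, y)            # matrix of z = x + i*y
    M_conj = rep(x + y, -y)       # matrix of the conjugate (x + y) - i*y

    rotated = sp.Matrix(2, 2, lambda r, c: M_z[1 - r, 1 - c])  # 180 degree rotation
    print(sp.simplify(M_conj - rotated))    # the zero matrix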

So that’s why I named it the “Cousin of the transponent”, although I have to admit that this is a lousy name, just like the physics people naming the magnetic properties of the electron “spin”. That’s just a stupid thing to do and that’s why we still don’t have quantum computers.

Enough intro talk done, the post is five pictures long and each picture is 550×1200 pixels. Have fun reading it.


That was it for this post, one more picture is left to see and that is how I showed it on the other website. Here it is:

Ok, this is really the end of this post. Thanks for your attention and maybe see you in another post on this website about complex numbers.

Another way of finding the direction of the number tau.

A bit in the spirit of Sophus Lie, lately I was thinking “Is there another way of finding those tangents at the number 1?”. To focus the mind: if you have an exponential circle or higher dimensional curve, the tangent at 1 points into the direction of the logarithm you want to find.
In the case of 2D and 3D numbers I always want to know the logarithm of imaginary units. A bit more advanced than what it all started with a long time ago: e^it = cos t + i sin t.
An important feature of those numbers tau that are the sought logs is that taking the conjugate always returns the negative. Just like in the complex plane the conjugate of i is –i.

The idea is easy to understand: the process of taking the conjugate of some number is also a linear transformation. These transformations have very simple matrices and all you have to do is find the eigenvector that comes with eigenvalue -1.
The idea basically is that tau must lie in the direction of that eigenvector.

That is what we are going to do in this post: I will give six examples of the matrices that represent the conjugation of a number, and we’ll look at the eigenvectors associated with eigenvalue -1.
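As a small appetizer for the pictures, here is the method in code form (a sketch in Python with NumPy, my own illustration). I only do the standard complex plane and the elliptic plane i^2 = -1 + i here; the conjugation matrices are read off from how conjugation acts on the basis numbers 1 and i.

    # Sketch of the method: the conjugation map is linear, so write down its
    # matrix and look for the eigenvector with eigenvalue -1; tau must lie
    # on the line spanned by that eigenvector.
    import numpy as np

    # Standard complex plane: conjugation sends (x, y) -> (x, -y).
    C_plane = np.array([[1.0,  0.0],
                        [0.0, -1.0]])

    # Elliptic plane i^2 = -1 + i: conjugation sends 1 -> 1 and i -> 1 - i,
    # so in the basis (1, i) the columns are (1, 0) and (1, -1).
    C_ellip = np.array([[1.0,  1.0],
                        [0.0, -1.0]])

    for name, C in [("plane", C_plane), ("elliptic", C_ellip)]:
        vals, vecs = np.linalg.eig(C)
        k = np.argmin(np.abs(vals + 1.0))            # pick the eigenvalue -1
        v = vecs[:, k] / np.max(np.abs(vecs[:, k]))
        print(name, vals[k], v)
    # plane:    eigenvector along (0, 1), the direction of i (and of log i = i*pi/2)
    # elliptic: eigenvector along (1, -2), the line where tau must lie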

At the end I give two examples for 4D numbers and there you see it is getting a bit more difficult: you can get multiple eigenvectors, each having the eigenvalue -1. Here this is the case with the complex 4D numbers, while their ‘split complex’ version, the circular 4D numbers, does not have this.
Now all in all there are six examples in this post and each is a number set on its own, so you must understand them a little bit.
The 2D numbers we look at will be the standard complex plane we all know and love, plus the elliptic and hyperbolic variants from lately. After that the two main systems for 3D numbers, the complex and circular versions. At last the two 4D multiplications and how to take the conjugate on those spaces.

The post itself is seven pictures long and there are two additional pictures that proudly carry the names “Figure 1” and “Figure 2”. What more do you want? Ok, lets hang in the pictures:

The purple line segment points into the direction of tau.
That’s why 4D split complex numbers are just as boring as their 2D counterparts.

Years ago it dawned on me that the numbers tau in higher dimensional spaces always come in linear combinations of pairs of imaginary units. That clearly emerged from all those calculations I made on say the 7D circular numbers. At the time I never had a simple way to explain why it always had to be this pair stuff.
So that is one of the reasons to post this simple eigenvector problem: now I have a very simple so called eigenvalue problem, and if the dimensions grow the solutions always come in pairs…

That was it for this post. Likely the next post is about so called ‘frustrated’ magnetism, because the lady in the video explains the importance of understanding energy when it comes to magnetism. After that maybe a new math post on matrix representations of the actual conjugates, so that’s very different from this post, which is about the matrices of the process of taking a conjugate…
As always thanks for your attention.

Comparison of the conjugate on five different spaces.

To be a bit precise: I think two spaces are different if they have a different form of multiplication defined on them. Now everybody knows the conjugate: you have some complex number z = x + iy and the conjugate is given by z = x – iy. As such it is very simple to understand; real numbers stay the same under conjugation, and if a complex number has an imaginary component, that gets mirrored in the real axis.

But a long long time ago, when I tried to find the conjugate for 3D complex numbers, this simple flip did not work. You only get advanced gibberish, so I took a good deep look at it. And I found that the matrix representation of a complex number z = x + iy has an upper row that you can view as the conjugate. So I tried the upper row of my matrices for the 3D complex and circular numbers and voila, instead of gibberish, for the very first time I found what I nowadays name the “Sphere-cone equation”.

I never gave it much thought anymore because it looked like the problem was solved forever. But a couple of months ago, when I discovered those elliptic and hyperbolic versions of 2D numbers, my solution of taking the upper row did not work. It does not work in the sense that it produces gibberish, so once more I had to find out why I had been so utterly stupid. At first I wanted to explain it via exponential curves or, as we have them for 2D and 3D complex numbers, a circle that is the complex exponential. And of course, if you have some parametrization of that circle, what you want is that taking the conjugate makes stuff run back in time. Take for example e^it in the standard complex plane where the multiplication is ruled by i^2 = -1. Of course you want the conjugate of e^it to be e^-it, or time running backwards.

But after that it dawned on me that there is a simpler explanation that at the same time covers the explanation with complex exponentials (or exponential circles, as I name them in the low dimensions n = 2, 3). And that simpler thing is that taking the conjugate of any imaginary unit always gives you the inverse of that imaginary unit.

And finding the inverse of imaginary units in low dimensions like 2D or 3D complex numbers is very simple. An important reason why I have been looking into those elliptic complex 2D numbers lately is the cute fact that if you use the multiplication rule i^2 = -1 + i, the third power is minus one: i^3 = -1. And you do not have to be a genius to find out that the inverse of this imaginary unit i is given by -i^2.
If you use the idea that the conjugate replaces imaginary units by their inverses on those elliptic and hyperbolic versions of the complex plane, then multiplying z against its conjugate always gives the determinant of the matrix representation; a small check of that claim is sketched below.
For me this is a small but significant win over the professional math professors who, like a broken vinyl record, keep on barking out: “The norm of the product is the product of the norms”. Well no, you overpaid weirdos, it’s always determinants. And because the determinant on the ordinary complex plane is given as x^2 + y^2, that is why the math professors have barked their product norm song for so long.
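Here is the promised small check for the elliptic plane i^2 = -1 + i (a SymPy sketch, my own verification): the product of z with its conjugate reduces to x^2 + xy + y^2, which is exactly the determinant of the matrix representation of z.

    # Small check: on the elliptic plane i^2 = -1 + i, z times its conjugate
    # equals the determinant of the matrix representation of z.
    import sympy as sp

    x, y, i = sp.symbols('x y i')
    rule = {i**2: -1 + i}                       # the elliptic multiplication rule

    z     = x + i*y
    zconj = x + y*(1 - i)                       # conjugate: replace i by its inverse 1 - i

    prod = sp.expand(z * zconj).subs(rule)      # reduce i^2 using the rule
    prod = sp.expand(prod)                      # leaves x**2 + x*y + y**2

    M_z = sp.Matrix([[x, -y],
                     [y, x + y]])               # matrix representation of z
    print(sp.simplify(prod - M_z.det()))        # 0, both equal x^2 + x*y + y^2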

Anyway, because I found this easy way of explaining it I was able to cram five different spaces into just seven images. For me it is very easy to jump in my mind from one space to the other, but if you are a victim of the evil math professors you only know about the complex plane and maybe some quaternion stuff, and for the rest your mind is empty. That could cause you a bit of trouble jumping between spaces yourself, because say 3D circular numbers are not something at the forefront of your brain tissue. In that case, only look at what you understand and build upon that.

All that’s left for me to do is to hang in the seven images that make up the math kernel of this post. I made them a tiny bit higher this time; the size is 550×1250. A graph of the hyperbolic version of the complex exponential can be found in the seventh image. Have fun reading it and let me hope that you, just like me, have learned a bit from this conjugate stuff.
The picture text already starts wrong: It’s five spaces, not four…

At last I want to remark that the 2D hyperbolic complex numbers are beautiful to see. But why should that be a complex exponential while the split complex numbers from the overpaid math professors do not have a complex exponential?
Well, that is because the determinant of the imaginary unit must be +1 and not -1 like we have for those split complex numbers from the overpaid math professors. Let’s leave it at that, and may I thank you for your attention if you are still awake by now.

Two parametrizations for the ‘unit’ ellipse in the i^2 = -1 + i kind of multiplication.

Basically this post is just two parametrizations of an ellipse, so all in all it should be a total cakewalk… So I don’t know why it took me so long to write it; ok ok, there are other hobbies besides math competing for my time. But all in all, for the level of difficulty, it took more time than estimated.
In the last post we looked at the number tau that is the logarithm of the imaginary unit i, and as such I felt obliged to base at least one of the parametrizations on that. So that will be the first parametrization shown in this post.
The second one is a projection of the 3D complex exponential on the xy-plane. So I just left the z-coordinate out and looked at what kind of ellipse you get when you project the 3D exponential circle on the 2D plane. Actually I did it with the 3D circular multiplication, but that makes no difference; only the cosines are now easier to work with. Anyway the surprise was that I got the same ellipse back, so there is clearly a deeper lying connection between these two spaces (the 3D circular numbers and this 2D complex multiplication defined by i^2 = -1 + i).
A part of the story as to why there is a connection between these spaces is of course found in looking at their eigenvalues. And they are the same, although 3D complex numbers have of course 3 eigenvalues while the 2D numbers have two eigenvalues. A lot of people have never done the calculation, but the complex plane has all kinds of complex numbers z that each have eigenvalues too…
Anyway I left that out of this post, otherwise it would just become too long to read because all in all it’s already 10 images now. Seven images with math made with LaTeX and three additional figures with screenshots from the DESMOS graphical package.
By the way it has nothing to do with this post, but lately I saw a video where a guy claimed he calculated a lot of the Riemann zeta function zeros with DESMOS. I was like WTF, but it is indeed possible; you can only make a finite approximation and the guy used the first 200 terms of the Riemann zeta thing.
At this point in time I have no idea what the next post will be about, maybe it’s time for a new magnetism post or whatever. We’ll wait and see, something will always pop up because otherwise this would not be post number 254 or so.
Well here is the stuff, I hope you like it or enjoy it.

Figure 1: This parametrization is based on the number tau.
Figure 2: The projection in red, stuff without 1/3 and 2/3 in blue.
Figure 3: The end should read (t – 1.5) but I was too lazy to repair it.

That was it for this post. Of course one of the reasons to write it is that I could now file it under the two categories “3D complex numbers” and “2D multiplications”, because we now have some connection going on here.
And I also need some more posts related to 3D complex numbers, because some time ago I found out that the total number of posts on magnetism would exceed those on the 3D complex numbers.

And we can’t have that of course; the goal of starting this website was to promote 3D complex numbers by offering all kinds of insights into how to look at them. The math professors failed big on that: for about 150 years since Hamilton they have been shouting that they can’t find the 3D complex numbers. Ok ok, they also want it to be a field where any non-zero number is invertible, and that shows they just don’t know what they are talking about.
The 3D complex numbers are interesting precisely because they have all those non-invertible numbers in them.

It is time to split, my dear reader, so we can both go our own way, so I want to thank you for your attention.

General Theory Part 3: Cauchy-Riemann equations.

There are many ways to introduce CR-equations for higher dimensional complex and circular numbers. For example you could remark that if you have a function, say f(X), defined on a higher dimensional number space, its Jacobian matrix should nicely follow the matrix representation of that particular higher dimensional number space.
I didn’t do that; I tried to formulate what I name the CR-equations chain rule style. A long time ago I read an old text from Riemann (I do not remember which text it was) and it turned out he also wrote the equations chain rule style. That was very refreshing to me and it showed also that I am still not 100% crazy…;)
Even if you know nothing or almost nothing about say 3D complex numbers and you only have a bit of math knowledge about the complex plane, the way Riemann wrote it is very easy to understand. Say you have a function f(z) defined on the complex plane and as usual we write z = x + iy for the complex number; likely you know that the derivative f'(z) is found by partial differentiation with respect to the real variable x. But what happens if you take the partial derivative with respect to the variable y?
That is how Riemann formulated it in that old text: you get f'(z) times i. And that is of course just a simple application of the chain rule that you know from the real line. And that is also the way I mostly wrote it, because if you express it only in the various partial derivatives, that is a lot of work in my LaTeX math typing environment and for you as a reader it is hard to read and understand what is going on. In the case of 3D complex or circular numbers you already have 9 partial derivatives that fall apart into three groups of three.
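To make the chain rule formulation concrete, here is a small numerical sketch (my own illustration in Python, done in the matrix representation of the 3D complex numbers with j^3 = -1): for f(X) = X^2 the derivative is f'(X) = 2X, and the partial derivative with respect to the coefficient of j should indeed come out as j times f'(X).

    # Sketch: chain rule style CR equations on the 3D complex numbers (j^3 = -1).
    # For f(X) = X^2 we expect  d f / d y  =  j * f'(X)  with  f'(X) = 2X,
    # where y is the coefficient of j in X = x + y*j + z*j^2.
    import numpy as np

    J = np.array([[0, 0, -1],
                  [1, 0,  0],
                  [0, 1,  0]], dtype=float)   # matrix representation of j

    def rep(x, y, z):
        return x * np.eye(3) + y * J + z * (J @ J)

    def f(X):
        return X @ X                           # f(X) = X^2 in the matrix representation

    x, y, z, h = 0.3, -1.2, 0.8, 1e-6
    X = rep(x, y, z)

    dfdy = (f(rep(x, y + h, z)) - f(rep(x, y - h, z))) / (2 * h)   # partial to y
    print(np.allclose(dfdy, J @ (2 * X), atol=1e-6))               # True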
In this post I tried much more to hang on to how differentiation was originally formulated; of course I don’t do it the way Newton and Leibniz did with infinitesimals and so on, but with a good old limit.
And in order to formulate it in limits I constantly need to divide by vectors from higher dimensional real spaces like 3D, 4D or, in the general case, n-dimensional numbers. That should serve as an antidote to what a lot of math professors think: you cannot divide by a vector.
Well, maybe they can’t, but I can and I am very satisfied with it. Apparently for the math professors it is too difficult to define multiplications on higher dimensional spaces that do the trick. (Don’t try to do that with say Clifford algebras; they are indeed higher dimensional but as always the professional math professors turn the stuff into crap and indeed on Clifford algebras you can’t divide most of the time.)

Maybe I should have given more examples or worked them out a bit more, but the text was already rather long. It is six pictures and the picture size is 550×1100, so that is relatively long, but I used a somewhat larger font so it should read a bit faster.

Of course the most important feature of the CR-equations is that in case a function defined on a higher dimensional space obeys them, you can differentiate just like you do on the real line. Just like we say that on the complex plane the derivative of f(z) = z^2 is given by f'(z) = 2z. Basically all functions that are analytic on the real line can be expanded into arbitrary dimension; for example the sine and cosine functions live in every dimension. Not that math professors have more than an infinitesimal amount of interest in stuff like that, but I like it.
Here are the six pictures that compose this post. I hope it is comprehensible enough and more or less typo free:

Ok, that was it. Thanks for your attention and I hope that at some point in your future life this kind of math will have some value for you.

General theory Part 2: On a matrix named big E.

On this entire website, when I talked about a matrix representation it was always meant as a representation that mimics the multiplication on a particular space, say the 3D complex numbers. And making such matrices has the benefit that you can apply all kinds of linear algebra like matrix diagonalization, or finding eigenvalues (the eigenvalue functions) and so on and so on. So the matrix representation was always the representation of a higher dimensional number.
Big E is very different: this matrix describes the multiplication itself. As such it contains all possible products of two basis vectors, and since this is supposed to be general theory I wrote it in the form of an nxn matrix. For people who like writing computer code: if you can implement this thing properly you can make all kinds of changes to the multiplication. As a matter of fact you can choose whatever you want the product of two basis vectors to be. So in that sense it is much more general than just the complex or the circular multiplication.
I do not like writing computer code that much myself, but I can perfectly understand people who like to write it. After all, every now and then even I use programs like PARI, and without people who like to write code such free programs would just not be there.
The math in this post is highly descriptive; it is the kind of math that I do not like most of the time, but now that I finally wrote this matrix down it was fun to do. If you are just interested in some fixed number space, say the 3D or 4D complex numbers, this concept of big E is not very useful. It is handy when you want to compare a lot of different multiplications in the same dimension, and as such it could be a tool that comes in handy.

The entries of this matrix big E are the products of two basis vectors, so this is very different from your usual matrix that often only contains real numbers or, in more advanced cases, complex numbers from the complex plane. I think it could lead to some trouble if you try to write code where the matrix entries are vectors; an alternative would be to represent big E as n square nxn matrices, but that makes it a bit harder to keep an overview.
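One way around that trouble (my own choice of representation, just a sketch and not something taken from the pictures below) is to store big E as an n x n x n array, where the slice at position (i, j) is the coordinate vector of the product of basis vectors i and j. Here it is for the 3D complex numbers with j^3 = -1, together with the multiplication it generates:

    # Sketch: big E stored as an n x n x n array E, where E[i, j, :] is the
    # coordinate vector of the product (basis i) * (basis j).
    # Here: the 3D complex numbers with basis (1, j, j^2) and j^3 = -1.
    import numpy as np

    n = 3
    E = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            k = i + j
            sign = 1.0
            if k >= n:            # j^3 = -1, so wrapping around costs a minus sign
                k -= n
                sign = -1.0
            E[i, j, k] = sign

    def multiply(a, b):
        """Multiply two numbers given as coordinate vectors of length n."""
        return np.einsum('i,j,ijk->k', a, b, E)

    jj = np.array([0.0, 1.0, 0.0])               # the imaginary unit j
    print(multiply(jj, multiply(jj, jj)))        # [-1, 0, 0], indeed j^3 = -1
    print(np.allclose(E, E.transpose(1, 0, 2)))  # symmetric E: the multiplication commutes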

Maybe in linear algebra you have seen so called quadratic forms using a symmetric matrix. It is a way to represent all quadratic polynomials in n variables. Big E looks a lot like that, only now you have vectors as entries.

I chose the number 1 to be the very first basis vector, so that gives the real line in that particular space. Of course one of the interesting details is that all analytic functions you know from the real line can easily be extended to all the other spaces you want. For example the sine and cosine or the exponential function live in all kinds of spaces in all kinds of dimensions. As such it is much, much broader than only a sine on the real line and the complex plane.
This post is five pictures long, each 550×1100 pixels. I made them a bit larger because I use a larger font compared to a lot of older posts. There are hardly any mathematical results in this post because it is so descriptive. Compare it to the definition of what a group is without many examples: the definition is often boring to read and only comes alive when you see good examples of the math involved.

If you want to try yourself and do a bit of complex analysis on higher dimensional spaces, ensure your big E matrix is symmetric. In that case the multiplication commutes, that is AB = BA always. If you also ensure all basis vectors are invertible you can find the so called Cauchy-Riemann equations for your particular multiplication. Once you have your set of CR equations you can differentiate all you want and also define line integrals (line integrals are actually along a curve, but that does not matter).
A simple counterexample would be the 4D quaternions: they do not commute and as such it is not possible to conduct any meaningful complex analysis on the space of quaternions.
End of this post and thanks for your attention.