A long time ago, around the time I started this website, I had something known as "The seven properties of the number alpha". But it was spread out over two websites, because back then I wrote the math only on the other website. Over the years I have recommended a few times that you look that stuff up as the seven properties of alpha, so I decided to write a new post with the same title. I did not copy the old material but wrote a new version from scratch. The post is amazingly long; a normal small picture has a size of 550×775 pixels and the larger ones I use go up to 550×1100, but this one has grown to 11 pictures of size 550×1500. It is likely the longest math post I have ever written.

As always I had to leave a lot out; for example, integration involving the number alpha is very interesting. During writing I also remembered that e to the power pi times i times alpha, wasn't that minus one? Yes, but we are not going to do six dimensional numbers or hybrids like the 3D circular numbers with the reals replaced by the 2D complex numbers. Nope, in this post only properties of the number alpha from the 3D numbers, both the complex and the circular version. More or less all the basic stuff is in this post, from simple things like alpha having no inverse to more complicated stuff like the relation with the Laplacian operator. I made a new category for this post, it even has the name "7 Properties of Alpha", so that in the future it will be even easier to find.
Let's hope it is worth your time and that you will find it interesting.
Ok, that was a long read. I want to congratulate you on not falling asleep. I have no plans for a next post, although it is tempting to write something about integration along the line of real multiples of the number alpha.
Likely in the year 1991 I figured out that the conjugate of a 3D complex number can be found in the upper row of its matrix representation. As such, the matrix representation of a conjugated 3D number was just the transpose of the original matrix representation, just like we have for ordinary complex numbers from the complex plane. And this transpose detail also showed that if you take the conjugate twice you end up where you started; math people would say that doing it twice is the identity operation. But for the two 2D multiplications we have been looking at in the last couple of months, the method of taking the upper row as the conjugate did not work. I had to do a bit of rethinking, and it was not that hard to find a better way of defining the conjugate that works on all the spaces under study since the year 1991. That method is: replace all imaginary units by their inverses. That is how we found the conjugate on 2D spaces like the elliptic and hyperbolic complex planes. And the product of a 2D complex number z with its conjugate nicely gives the determinant of the matrix representation. And if you look where this determinant equals one, that nicely gives the complex exponentials on these two spaces: an ellipse and a hyperbola.

Now, when I was writing the last math post (that is two posts back, because the previous post was about magnetism), I wondered what the matrix representation of the conjugate is on these two complex planes. It could not be the transpose, because the conjugates were not the upper rows. And I was curious: if it's not the transpose, what is it? It had to be something that, done twice, gives the identity operation…
All in all the math in this post is not very deep or complicated, but you must know how to make the conjugate on, say, the elliptic complex plane. On this plane the imaginary unit i rules the multiplication via i^2 = -1 + i. So you must be able to find the inverse of the imaginary unit i in order to craft the conjugate. On top of that you must be able to make a matrix representation of this particular conjugate. If you can do that, or if you at least understand how it works without doing it yourself, this post will be an easy read for you.
It turns out that the matrices of the conjugate are not the transpose, where you flip all entries of the matrix across the main diagonal. No, these matrix representations have all their entries mirrored through the center of the matrix, or equivalently all their entries rotated by 180 degrees. That is the main result of this post.
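To make that concrete, here is a minimal sketch you can run yourself (just a quick sympy check of the claim, not one of the pictures), for the elliptic plane where i^2 = -1 + i. The matrix representation of z = x + iy used below follows directly from that multiplication rule, and the conjugate is built by replacing i with its inverse 1 - i:

```python
# A quick check of the "rotated by 180 degrees" claim on the elliptic plane,
# assuming only the multiplication rule i^2 = -1 + i.
from sympy import symbols, Matrix

x, y = symbols('x y', real=True)

def M(a, b):
    """Matrix representation of a + b*i under i^2 = -1 + i (columns are the images of 1 and i)."""
    return Matrix([[a, -b], [b, a + b]])

# The inverse of i is -i^2 = 1 - i, so the conjugate of x + iy is (x + y) - iy.
Mz    = M(x, y)
Mconj = M(x + y, -y)

# Rotate Mz by 180 degrees (mirror every entry through the center).
rotated = Matrix(2, 2, lambda r, c: Mz[1 - r, 1 - c])

print(Mconj - rotated)   # zero matrix: the conjugate's matrix is the rotated one
print(Mconj - Mz.T)      # not the zero matrix: it is not the transpose
```

The first difference is the zero matrix and the second one is not, so mirroring through the center really is what taking the conjugate does to the matrix here, while the transpose fails.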
So that's why I named it the "Cousin of the transponent", although I have to admit that this is a lousy name, just like the name the physics people picked when they called the magnetic properties of the electron "spin". That was just a stupid thing to do, and that's why we still don't have quantum computers.
Enough intro talk; the post is five pictures long and each picture is 550×1200 pixels. Have fun reading it.
That was it for this post; one more picture is left to see, and that is how I showed it on the other website. Here it is:
Ok, this is really the end of this post. Thanks for your attention and maybe see you in another post on this website about complex numbers.
A bit in the spirit of Sophus Lie, lately I was thinking: "Is there another way of finding those tangents at the number 1?". To focus the mind: if you have an exponential circle or a higher dimensional exponential curve, the tangent at 1 points in the direction of the logarithm you want to find. In the case of 2D and 3D numbers I always want to know the logarithm of the imaginary units. A bit more advanced than what it all started with a long time ago: e^it = cos t + i sin t. An important feature of those numbers tau that are the sought logs is that taking the conjugate always returns the negative, just like in the complex plane, where the conjugate of i is -i.
The idea is easy to understand: the process of taking the conjugate of a number is also a linear transformation. These transformations have very simple matrices, and all you do is find the eigenvector that comes with eigenvalue -1. The idea basically is that tau must lie in the direction of that eigenvector.
That is what we are going to do in this post: I will give six examples of the matrices that represent the conjugation of a number, and we'll look at the eigenvectors associated with eigenvalue -1.
At the end I give two examples for 4D numbers, and there you see it is getting a bit more difficult: you can get multiple eigenvectors, each having the eigenvalue -1. Here this is the case with the complex 4D numbers, while their 'split complex' version, the circular 4D numbers, does not have that. Now all in all there are six examples in this post, and each is a number set in its own right, so you must understand them a little bit. The 2D numbers we look at will be the standard complex plane we all know and love, plus the elliptic and hyperbolic variants from lately. After that come the two main systems of 3D numbers, the complex and circular versions. At last, the two 4D multiplications and how to take the conjugate on those spaces.
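Before the pictures, here is a rough numerical sketch of the idea with numpy (just my own illustration, the real work is in the pictures): write conjugation as a matrix on the coordinates and hunt for eigenvalue -1. For the standard complex plane that eigenvector is the i direction, and for the 3D circular numbers (where conjugation swaps j and j^2) it points along j - j^2.

```python
# A rough sketch with numpy: conjugation as a matrix on the coordinates,
# then hunt for the eigenvectors that belong to eigenvalue -1.
import numpy as np

def eigvecs_minus_one(C):
    """Return the (real) eigenvectors of C that belong to eigenvalue -1."""
    vals, vecs = np.linalg.eig(C)
    return [np.real(vecs[:, k]) for k in range(len(vals)) if np.isclose(vals[k], -1)]

# Standard complex plane, coordinates (x, y) of z = x + iy: conjugation sends y to -y.
C_standard = np.array([[1.0, 0.0],
                       [0.0, -1.0]])

# Elliptic plane i^2 = -1 + i: conj(x + iy) = (x + y) - iy.
C_elliptic = np.array([[1.0, 1.0],
                       [0.0, -1.0]])

# 3D circular numbers j^3 = 1, coordinates (x, y, z) of x + y*j + z*j^2:
# conjugation replaces j by j^2, so it swaps the y and z coordinates.
C_circular3D = np.array([[1.0, 0.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [0.0, 1.0, 0.0]])

for name, C in [("standard plane", C_standard),
                ("elliptic plane", C_elliptic),
                ("3D circular", C_circular3D)]:
    print(name, eigvecs_minus_one(C))
# standard plane: proportional to (0, 1), the i direction
# elliptic plane: proportional to (1, -2)
# 3D circular:    proportional to (0, 1, -1), the direction of j - j^2
```

So in the circular 3D case the eigenvector already tells you that the logarithm of j must lie along j - j^2, which is the pair structure I come back to below the pictures.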
The post itself is seven pictures long and there are two additional pictures that proudly carry the names "Figure 1" and "Figure 2". What more do you want? Ok, let's hang in the pictures:
Years ago it dawned on me that the numbers tau in higher dimensional spaces always come in linear combinations of pairs of imaginary units. That clearly emerged from all those calculations I made on, say, the 7D circular numbers. At the time I never had a simple way to explain why it always had to be this pair stuff. So that is one of the reasons to post this simple eigenvector problem: now I have a very simple so-called eigenvalue problem, and as the dimensions grow the solutions always come in pairs…
That was it for this post. Likely the next post is about so-called 'frustrated' magnetism, because the lady in the video explains the importance of understanding energy when it comes to magnetism. After that maybe a new math post on matrix representations of the actual conjugates, which is very different from this post, which is about the matrices of the process of taking a conjugate… As always, thanks for your attention.
To be a bit precise: I think two spaces are different if they have a different form of multiplication defined on them. Now everybody knows the conjugate: you have some complex number z = x + iy and the conjugate is given by x – iy. As such it is very simple to understand; real numbers stay the same under conjugation, and if a complex number has an imaginary component, that component gets flipped in the real axis.
But a long, long time ago, when I tried to find the conjugate for 3D complex numbers, this simple flip did not work. You only get advanced gibberish, so I took a good deep look at it. And I found that the matrix representation of a complex number z = x + iy has an upper row that you can view as the conjugate. So I tried the upper row of my matrices for the 3D complex and circular numbers, and voila, instead of gibberish, for the very first time I found what I nowadays name the "Sphere-cone equation".
I never gave it much thought anymore because it looked like a solved problem that would work forever. But a couple of months ago, when I discovered those elliptic and hyperbolic versions of 2D numbers, my solution of taking the upper row did not work. It does not work in the sense that it produces gibberish, so once more I had to find out why I had been so utterly stupid one more time. At first I wanted to explain it via exponential curves, or as we have them for 2D and 3D complex numbers: a circle that is the complex exponential. And of course what you want is that, if you have some parametrization of that circle, taking the conjugate makes it run back in time. Take for example e^it in the standard complex plane, where the multiplication is ruled by i^2 = -1. Of course you want the conjugate of e^it to be e^-it, or time running backwards.
But after that it dawned on me that there is a simpler explanation that at the same time covers the explanation with complex exponentials (or exponential circles, as I name them in the low dimensions n = 2, 3). That simpler thing is that taking the conjugate of any imaginary unit always gives you the inverse of that imaginary unit.
And finding the inverse of imaginary units in low dimensions like 2D or 3D complex numbers is very simple. An important reason why I have been looking into those elliptic complex 2D numbers lately is the cute fact that if you use the multiplication rule i^2 = -1 + i, the third power is minus one: i^3 = -1. And you do not have to be a genius to find out that the inverse of this imaginary unit i is then given by -i^2. If you use the idea that the conjugate inverts the imaginary units on those elliptic and hyperbolic versions of the complex plane, then multiplying z against its conjugate always gives the determinant of the matrix representation. For me this is a small but significant win over the professional math professors who, like a broken vinyl record, keep on barking out: "The norm of the product is the product of the norms". Well no, no, overpaid weirdos, it's always determinants. And because the determinant on the ordinary complex plane is given by x^2 + y^2, that is why the math professors have been barking their product-norm song for so long.
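For the elliptic plane that determinant claim is easy to check symbolically; here is a small sympy sketch (my own quick check, the only input is the rule i^2 = -1 + i and the conjugate as the inverse of i):

```python
# Elliptic plane i^2 = -1 + i: check that z times its conjugate equals
# the determinant of the matrix representation, which is x^2 + x*y + y^2.
from sympy import symbols, expand

x, y = symbols('x y', real=True)

def mul(a, b):
    """Product of a = a0 + a1*i and b = b0 + b1*i under i^2 = -1 + i."""
    a0, a1 = a
    b0, b1 = b
    return (a0*b0 - a1*b1, a0*b1 + a1*b0 + a1*b1)

z     = (x, y)
zconj = (x + y, -y)        # conjugate: replace i by its inverse 1 - i

real_part, imag_part = mul(z, zconj)
print(expand(real_part))   # x**2 + x*y + y**2  (= det of the matrix representation)
print(expand(imag_part))   # 0
```

Both prints confirm it: the imaginary part vanishes and the real part is exactly the determinant x^2 + xy + y^2.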
Anyway, because I found this easy way of explaining it, I was able to cram five different spaces into just seven images. Now for me it is very easy to jump in my mind from one space to the other, but if you are a victim of the evil math professors you only know about the complex plane and maybe some quaternion stuff, and for the rest your mind is empty. That could give you a bit of trouble jumping between spaces yourself, because, say, 3D circular numbers are not at the forefront of your brain tissue; in that case just look at what you understand and build upon that.
All that's left for me to do is to hang in the seven images that make up the math kernel of this post. I made them a tiny bit higher this time; the sizes are 550×1250. A graph of the hyperbolic version of the complex exponential can be found in the seventh image. Have fun reading it, and let me hope that you, just like me, have learned a bit from this conjugate stuff. The picture text already starts wrong: it's five spaces, not four…
At last I want to remark that the 2D hyperbolic complex numbers are beautiful to see. But why should that be a complex exponential while the split complex numbers from the overpaid math professors do not have a complex exponential? Well, that is because the determinant of the imaginary unit must be +1 and not -1 like we have for those split complex numbers from the overpaid math professors. Let's leave it at that, and may I thank you for your attention if you are still awake by now.
Basically this post is just two parametrizations of an ellipse, so all in all it should be a total cakewalk… So I don't know why it took me so long to write it; ok ok, there are more hobbies than math competing for my time, but all in all, for the level of difficulty, it took more time than estimated. In the last post we looked at the number tau that is the logarithm of the imaginary unit i, and as such I felt obliged to base at least one of the parametrizations on that. So that will be the first parametrization shown in this post. The second one is a projection of the 3D complex exponential onto the xy-plane. So I just left the z-coordinate out to see what kind of ellipse you get when you project the 3D exponential circle onto the 2D plane. Actually I did it with the 3D circular multiplication, but that makes no difference, only the cosines are now easier to work with. Anyway, the surprise was that I got the same ellipse back, so there is clearly a deeper lying connection between these two spaces (the 3D circular numbers and this 2D complex multiplication defined by i^2 = -1 + i).

Part of the story of why there is a connection between these spaces is of course found in looking at their eigenvalues. And they are the same, although 3D complex numbers have of course 3 eigenvalues while the 2D numbers have two. A lot of people have never done the calculation, but the complex plane is full of complex numbers z that each have eigenvalues too… Anyway, I left that out of this post, otherwise it would just become too long to read, because all in all it's already 10 images now: seven images with math made with LaTeX and three additional figures with screenshots from the DESMOS graphical package. By the way, it has nothing to do with this post, but lately I saw a video where a guy claimed he calculated a lot of the zeros of the Riemann zeta function with DESMOS. I was like WTF, but it is indeed possible; you can only make a finite approximation, and the guy used the first 200 terms of the Riemann zeta thing.

At this point in time I have no idea what the next post will be about; maybe it's time for a new magnetism post or whatever. We'll wait and see, something will always pop up, because otherwise this would not be post number 254 or so. Well, here is the stuff, I hope you like it or enjoy it.
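For readers who like to check things numerically, here is one rough way (a sketch with scipy, not one of the DESMOS figures) to trace the complex exponential of the elliptic plane as exp(t times tau), with tau the matrix logarithm of i, and to see that the points indeed land on the ellipse x^2 + xy + y^2 = 1, the curve where the determinant equals one:

```python
# Trace the complex exponential of the elliptic plane i^2 = -1 + i as
# exp(t * tau) with tau = log(i), and check the points land on the ellipse
# x^2 + x*y + y^2 = 1 (the curve where the determinant equals one).
import numpy as np
from scipy.linalg import expm, logm

# Matrix representation of the imaginary unit i under i^2 = -1 + i.
I_mat = np.array([[0.0, -1.0],
                  [1.0,  1.0]])

tau = np.real(logm(I_mat))     # log of i; it is a real 2D number here

# The curve closes at t = 6 because i^3 = -1 and so i^6 = 1.
for t in np.linspace(0.0, 6.0, 7):
    E = expm(t * tau)
    x, y = E[0, 0], E[1, 0]    # first column = coordinates of exp(t*tau) as a 2D number
    print(round(t, 1), round(float(x**2 + x*y + y**2), 10), round(float(np.linalg.det(E)), 10))
    # both printed values stay equal to 1.0 along the whole curve
```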
That was it for this post. Of course, one of the reasons to write it is that I could now file it under the two categories "3D complex numbers" and "2D multiplications", because we now have some connection going on here. And I also need some more posts related to 3D complex numbers, because some time ago I found out that the total number of posts on magnetism would exceed the number on the 3D complex numbers.
And we can't have that of course; the goal of starting this website was to promote the 3D complex numbers by offering all kinds of insights into how to look at them. The math professors failed big on that: for about 150 years since Hamilton they have been shouting that they can't find the 3D complex numbers. Ok ok, they also want it to be a field where any non-zero number is invertible, and that shows they just don't know what they are talking about. The 3D complex numbers are interesting precisely because they have all those non-invertible numbers in them.
It is time to split, my dear reader, so we can both go our own way. I want to thank you for your attention.
There are many ways to introduce the CR-equations for higher dimensional complex and circular numbers. For example you could remark that if you have a function, say f(X), defined on a higher dimensional number space, its Jacobian matrix should nicely follow the matrix representation of that particular number space. I didn't do that; I tried to formulate what I name the CR-equations chain rule style. A long time ago I read an old text from Riemann (I do not remember which text it was), and it turned out he also wrote the equations chain rule style. That was very refreshing to me, and it also showed that I am still not 100% crazy…;)

Even if you know nothing or almost nothing about, say, 3D complex numbers and you only have a bit of math knowledge about the complex plane, the way Riemann wrote it is very easy to understand. Say you have a function f(z) defined on the complex plane, and as usual we write z = x + iy for the complex number. Likely you know that the derivative f'(z) is found by partial differentiation with respect to the real variable x. But what happens if you take the partial derivative with respect to the variable y? That is how Riemann formulated it in that old text: you get f'(z) times i. And that is of course just a simple application of the chain rule that you know from the real line. That is also the way I mostly wrote it, because if you express it only in the various partial derivatives, that is a lot of work in my LaTeX math typing environment, and for you as a reader it is hard to read and understand what is going on. In the case of 3D complex or circular numbers you already have 9 partial derivatives that fall apart into three groups of three.

In this post I tried much more to hang on to how differentiation was originally formulated. Of course I don't do it the way Newton and Leibniz did with infinitesimals and so on, but with a good old limit. And in order to formulate it in limits I constantly need to divide by vectors from higher dimensional real spaces like 3D, 4D or, now in the general case, n-dimensional numbers. That should serve as an antidote to what a lot of math professors think: you cannot divide by a vector. Well, maybe they can't, but I can, and I am very satisfied with it. Apparently for the math professors it is too difficult to define multiplications on higher dimensional spaces that do the trick. (Don't try to do that with, say, Clifford algebras; they are indeed higher dimensional, but as always the professional math professors turn the stuff into crap, and indeed on Clifford algebras you can't divide most of the time.)
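To give at least one concrete check in plain text, here is a small sympy sketch of the chain rule style (my own quick check, using the 3D circular numbers where j^3 = 1 and the example function f(X) = X^2): the chain rule then says the partial derivative with respect to y is j times the derivative with respect to x, and the one with respect to z is j^2 times it.

```python
# A quick sympy check of the chain rule style CR-equations on the
# 3D circular numbers (j^3 = 1) for the example function f(X) = X^2.
# Claim: d f / d y = j * (d f / d x)  and  d f / d z = j^2 * (d f / d x),
# where X = x + y*j + z*j^2 and the derivative f'(X) is d f / d x.
from sympy import symbols, simplify, diff

x, y, z = symbols('x y z', real=True)

def mul(a, b):
    """Circular 3D product of a and b, written as coefficients of (1, j, j^2) with j^3 = 1."""
    a0, a1, a2 = a
    b0, b1, b2 = b
    return (a0*b0 + a1*b2 + a2*b1,
            a0*b1 + a1*b0 + a2*b2,
            a0*b2 + a1*b1 + a2*b0)

X = (x, y, z)
f = mul(X, X)                     # f(X) = X^2, three coordinate functions

d_dx = tuple(diff(c, x) for c in f)
d_dy = tuple(diff(c, y) for c in f)
d_dz = tuple(diff(c, z) for c in f)

j  = (0, 1, 0)
j2 = (0, 0, 1)

print([simplify(u - v) for u, v in zip(d_dy, mul(j, d_dx))])   # [0, 0, 0]
print([simplify(u - v) for u, v in zip(d_dz, mul(j2, d_dx))])  # [0, 0, 0]
```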
Maybe I should have given more examples or worked them out a bit more, but the text was already rather long. It is six pictures and the picture size is 550×1100, so that is relatively long, but I used a somewhat larger font so it should read a bit faster.
Of course the most important feature of the CR-equations is that if a function defined on a higher dimensional space obeys them, you can differentiate just like you do on the real line, just like we say that on the complex plane the derivative of f(z) = z^2 is given by f'(z) = 2z. Basically all functions that are analytic on the real line can be expanded into arbitrary dimension; for example, the sine and cosine functions live in every dimension. Math professors have only an infinitesimal amount of interest in stuff like that, but I like it. Here are the six pictures that compose this post; I hope it is comprehensible enough and more or less typo free:
Ok, that was it. Thanks for your attention, and I hope that at some point in your future life this kind of math will have some value to you.
On this entire website, when I talked about a matrix representation it was always meant as a representation that mimics the multiplication on a particular space, such as the 3D complex numbers. Making such matrices has the benefit that you can apply all kinds of linear algebra, like matrix diagonalization, or finding eigenvalues (the eigenvalue functions), and so on and so on. So the matrix representation was always the representation of a higher dimensional number. Big E is very different: this matrix describes the multiplication itself. As such it contains all possible products of two basis vectors, and since this is supposed to be general theory I wrote it in the form of an nxn matrix. For people who like writing computer code: if you can implement this thing properly, you can make all kinds of changes to the multiplication. As a matter of fact you can choose whatever you want the product of two basis vectors to be, so in that sense it is much more general than just the complex or the circular multiplication.

I do not like writing computer code that much myself, but I can perfectly understand people who do. After all, every now and then even I use programs like PARI, and without people who like to write code such free programs would just not be there. The math in this post is highly descriptive; it is the kind of math that I do not like most of the time, but now that I have finally written this matrix down, it was fun to do. If you are just interested in some fixed number space, say the 3D or 4D complex numbers, this concept of big E is not very useful. It is handy when you want to compare a lot of different multiplications in the same dimension, and as such it could be a tool that comes in handy.
The entries of this matrix big E are the products of two basis vectors, so this is very different from your usual matrix that often contains only real numbers or, in more advanced cases, complex numbers from the complex plane. I think it could lead to some trouble if you try to write code where the matrix entries are vectors; an alternative would be to represent big E as n square nxn matrices, but that makes it a bit harder to keep an overview.
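For the people who do like writing code, here is a rough sketch of one way to store big E (just my own illustration, nothing official): keep it as an n×n×n array of structure constants, so E[a, b] is the coordinate vector of the product of basis vectors a and b, and the multiplication of two numbers is a double loop over that array.

```python
# A rough sketch: store big E as an n x n x n array of structure constants,
# so E[a, b] is the coordinate vector of (basis vector a) * (basis vector b).
import numpy as np

def unit(n, k):
    e = np.zeros(n)
    e[k] = 1.0
    return e

def make_E(n, rule):
    """Build big E from a rule(a, b) that returns the coordinate vector of e_a * e_b."""
    E = np.zeros((n, n, n))
    for a in range(n):
        for b in range(n):
            E[a, b] = rule(a, b)
    return E

def multiply(E, u, v):
    """Multiply two numbers u and v (coordinate vectors) with the multiplication stored in E."""
    n = E.shape[0]
    out = np.zeros(n)
    for a in range(n):
        for b in range(n):
            out += u[a] * v[b] * E[a, b]
    return out

# Example 1: the complex plane, basis (1, i) with i^2 = -1.
E_complex = make_E(2, lambda a, b: (-1.0 if a == 1 and b == 1 else 1.0) * unit(2, (a + b) % 2))

# Example 2: the 3D circular numbers, basis (1, j, j^2) with j^3 = 1.
E_circular = make_E(3, lambda a, b: unit(3, (a + b) % 3))

print(multiply(E_complex, np.array([0.0, 1.0]), np.array([0.0, 1.0])))             # i * i = [-1, 0]
print(multiply(E_circular, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # j^2 * j^2 = j = [0, 1, 0]
```

Changing the multiplication is then just a matter of handing make_E another rule.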
Maybe in linear algebra you have seen so-called quadratic forms using a symmetric matrix. It is a way to represent all quadratic polynomials in n variables. Big E looks a lot like that, only you now have vectors as entries.
I chose the number 1 to be the very first basis vector, so that gives the real line in that particular space. Of course one of the interesting details is that all analytic functions you know from the real line can easily be extended to all other spaces you want. For example the sine and cosine or the exponential function live in all kinds of spaces in all kinds of dimensions. As such it is much, much broader than only a sine on the real line and the complex plane. This post is five pictures long, each 550×1100 pixels. I made them a bit larger because I use a larger font compared to a lot of old posts. There are hardly any mathematical results in this post because it is so descriptive. Compare it to the definition of what a group is without many examples: the definition is often boring to read and only comes alive when you see good examples of the math involved.
If you want to try for yourself and do a bit of complex analysis on higher dimensional spaces, ensure your big E matrix is symmetric. In that case the multiplication commutes, that is, AB = BA always. If you also ensure all basis vectors are invertible, you can find the so-called Cauchy-Riemann equations for your particular multiplication. Once you have your set of CR equations you can differentiate all you want and also define line integrals (a line integral actually goes along a curve, but that does not matter). A simple counterexample would be the 4D quaternions: they do not commute, and as such it is not possible to conduct any meaningful complex analysis on the space of quaternions (a small numerical footnote on that is below). End of this post and thanks for your attention.
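Small numerical footnote, again just my own illustration with the structure constant idea from the sketch earlier in this post (written self-contained here): a non-symmetric big E like the quaternion one does not commute, while the symmetric 3D circular one does.

```python
# Footnote: with the structure constant idea, a non-symmetric big E
# (the quaternions) does not commute, while the symmetric 3D circular one does.
import numpy as np

def multiply(E, u, v):
    # sum over a and b of u[a] * v[b] * E[a, b, :]
    return np.einsum('a,b,abc->c', u, v, E)

# Quaternions, basis (1, i, j, k): i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j.
E_quat = np.zeros((4, 4, 4))
rules = {(0, 0): (0, 1), (0, 1): (1, 1), (0, 2): (2, 1), (0, 3): (3, 1),
         (1, 0): (1, 1), (1, 1): (0, -1), (1, 2): (3, 1), (1, 3): (2, -1),
         (2, 0): (2, 1), (2, 1): (3, -1), (2, 2): (0, -1), (2, 3): (1, 1),
         (3, 0): (3, 1), (3, 1): (2, 1), (3, 2): (1, -1), (3, 3): (0, -1)}
for (a, b), (c, sign) in rules.items():
    E_quat[a, b, c] = sign

# 3D circular numbers, basis (1, j, j^2) with j^3 = 1: a symmetric big E.
E_circ = np.zeros((3, 3, 3))
for a in range(3):
    for b in range(3):
        E_circ[a, b, (a + b) % 3] = 1.0

rng = np.random.default_rng(0)
u, v = rng.standard_normal(4), rng.standard_normal(4)
w, s = rng.standard_normal(3), rng.standard_normal(3)
print(multiply(E_quat, u, v) - multiply(E_quat, v, u))   # not zero: quaternions do not commute
print(multiply(E_circ, w, s) - multiply(E_circ, s, w))   # zero (up to rounding): circular 3D commutes
```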
This is one of those videos based in large part on that old theorem of Frobenius which says there are no 3D fields containing the complex numbers from the complex plane. There is another similar proof out there, also rather old, that says the same. Well, there is nothing new about that; after all, there is nothing in the 3D complex numbers that squares to -1. (See a few posts back, where I wrote the first so-called general theory and showed that in all odd-dimensional number systems you cannot solve X^2 = -1.) It is also not much of a secret that besides 0 there are many more non-invertible numbers in 3D real space. So no, it is not a field, but I have known that for over 30 years, my dear reader. Over the years I have joked on many occasions that the only thing math professors are good at is saying "We cannot find the 3D complex numbers". And that is so deeply ingrained that, for some strange reason, they actually can't.

Ok ok, there is also the question of competence in doing new math research. As a matter of fact most math professors are relatively bad at doing research on new math stuff. Math professors are rather good at reproducing the good math from the past; in such a comparison they are much more like a classical orchestra that plays old works of music from Bach or any other long dead composer. Most professional musicians do not compose much music themselves; it is their job to repeat the old music from the past and that's it. Math professors are often just like that. Now that is not only negative: for example, the proof of the prime number theorem (how many primes are there asymptotically below a given magnitude?) was hard to find and it is long. If you can repeat that proof in front of a class, that is for sure a highly cognitive thing to do. I am not saying all math professors are dumb.
But as far as I know, most of them have a strong tendency toward shallow thought: better fast than accurate and much slower. For example, over 30 years of time, every now and then some folks discover the 3D complex numbers again, and oh oh, how exciting it is to find the conjugate! They do it all wrong, yes, 100% of them take the wrong kind of conjugate, and after that all calculations fade into total nonsense, and as such they too conclude there is not much going on with 3D complex numbers. I have never seen anyone find the 3D complex exponential; that is what I mean by shallow thinking. People just project things they know from the old stuff onto the new stuff, and the new stuff 'must' for some strange reason obey the old stuff, and when that does not happen they begin to cry like the crybabies they are. I did that too on a few occasions, but after doing it stupid twice I understood what I was doing wrong: I just was not open minded about how the new stuff actually worked.
Now, when I viewed the video for the first time a few weeks back, I was relatively annoyed, not only because of the dumb content but also because of the title (3D complex numbers do not exist). But in the days after, when this new post came to my mind, I just felt so tired. Why is it always so fucking stupid and shallow? Why is it, year in year out, that all those math professors never evolve, why is it always so stupid with no oversight at all? So I did not have a good feeling about it, and because the video is so stupid I could easily write 20 different posts about it, and that made me feel even more tired and exhausted because these people just don't improve. Anyway, I decided to keep it short and simple and only point out that the two most important new numbers in the 3D complex numbers are the numbers tau and alpha, which are related to the 3D complex exponential. If you multiply these two numbers you get zero; the property that non-zero numbers can multiply to zero is called having divisors of zero.
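To make that divisors-of-zero point concrete, here is a small numerical sketch (my own check; note that I simply take alpha here to be the idempotent (1 - j + j^2)/3 built from the imaginary unit, and for the tau part I use the circular cousin with j^3 = 1, because there the logarithm of j is a real 3D number; whatever the exact normalization in my older posts, the products below show the behaviour meant here):

```python
# A small numerical sketch of divisors of zero in 3D.
import numpy as np
from scipy.linalg import logm

# 3D complex numbers: j^3 = -1, matrix representation of j.
J_complex = np.array([[0., 0., -1.],
                      [1., 0.,  0.],
                      [0., 1.,  0.]])
I3 = np.eye(3)
alpha = (I3 - J_complex + J_complex @ J_complex) / 3.0   # idempotent: alpha^2 = alpha
print(np.allclose(alpha @ alpha, alpha))                 # True
print(np.round(alpha @ (I3 - alpha), 12))                # zero matrix: divisors of zero

# 3D circular numbers: j^3 = +1; here tau = log(j) is a real 3D number
# and tau times alpha is zero as well.
J_circ = np.array([[0., 0., 1.],
                   [1., 0., 0.],
                   [0., 1., 0.]])
tau = np.real(logm(J_circ))
alpha_circ = (I3 + J_circ + J_circ @ J_circ) / 3.0
print(np.round(tau @ alpha_circ, 12))                    # zero matrix
```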
And guess what: the video from Michael Penn begins with a small list of the properties that 3D numbers have to obey in order to get something "meaningful", as Michael puts it. Well, his small list indeed includes that it must be a field and also that it must not have divisors of zero. Furthermore, Michael did not figure it out himself but uses the work of another person, so we have everything combined: shallow thinking combined with repeating the work of other people.
Here is the video:
All that is left is placing the two pictures that compose this post below. And I have an extra pdf with that Frobenius stuff in it; it is hard to read and is precisely the kind of math I avoid writing down. You can find the Frobenius classification starting at page 26 of the pdf.
Speaking for myself: I consider this pdf a disaster to read. It is all formulated in an overly complicated way with not much math content to it. But it contains a bit of the old school Frobenius stuff, so it has some value, although it has weird notation that is hard to grasp…
Over time I have come to understand that the 3D complex numbers are an expansion of the real line just like the complex plane is. For example: multiplication in the 3D complex numbers is ruled by the imaginary unit j whose third power equals minus one: j^3 = -1. But if you have seen only the complex plane your entire life, you are likely tempted to try and think of j as e to some imaginary power, like on the complex plane. You cannot do that, because the number i from the complex plane does not live inside the 3D complex numbers. So for me they have an equal right to exist and to be studied. They are very different, but at least you can do complex analysis on the 3D complex numbers, and you can't do that on the famous quaternions.
Now I don't want to sound like a sour old man; after all, this post was not much fun to write, but on the scale of things it is just a luxury problem. Let me end this post by thanking you for your attention.
Finally, after all those years, something of a more general approach to multiplication in higher dimensions? Yes, but at the same time I remark that you should not learn or study higher dimensional numbers that way. You had better pick a particular space, like the 3D complex numbers, find out a lot about them, and then move on to, say, 4D or 5D complex numbers and repeat that process. The problem with a more general approach is that those spaces are just too different from each other, so it is hard to find things all of those spaces have in common. It is like making theory for the complex plane and the split complex numbers at the same time: it is not a good idea because they behave very differently. The math in this post is utterly simple; basically I only use the fact that the square of a real number, this time a determinant, cannot be negative. The most complicated thing I use is the rule that says the determinant of a square is the square of the determinant, as in det(Z^2) = det(Z)^2.
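For readers who want the core of the argument in one line before the pictures, it boils down to this (my compressed restatement, written for an n-dimensional number X with matrix representation M(X)):

```latex
% Suppose X^2 = -1 in an n-dimensional number system and pass to matrix representations.
\[
\det\big(M(X)\big)^2 \;=\; \det\big(M(X)^2\big) \;=\; \det\big(M(-1)\big) \;=\; \det(-I_n) \;=\; (-1)^n ,
\]
\[
\text{so for odd } n \text{ the real number } \det\big(M(X)\big) \text{ would have a negative square, which is impossible.}
\]
```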
This post is only 3.5 pictures long so I added some extra stuff like the number tau for the 4D complex numbers and my old proof from 2015 that on the space of 3D complex numbers you can’t solve X^2 = -1.
I hope it’s all a bit readable so here we go:
So all in all my goal was to carry the impossibility of x^2 being negative on the real line over to the more general setting of n-dimensional numbers. As such the math in this post is not very deep; it is as shallow as possible. Ok ok, maybe that 4D tau is the kind of stuff that makes math professors see water burning, because they only have the complex plane. Let me end this post by thanking you for making it to the end; you have endured weird looking robots without getting mentally ill! Congratulations! At the end, a link to that old file from 2015:
This post is easy going for most people who have mastered the art of finding the inverse of a square matrix using the so-called adjoint matrix. I was curious what happens to a 3D circular number if you take the conjugate 'determinant style' twice. In terms of standard linear algebra this is the same as taking the adjoint of the adjoint of a square matrix.
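Before we go to the pictures, here is a tiny sympy sketch of that piece of linear algebra (my own check, not one of the pictures): for a 3×3 matrix the adjoint of the adjoint equals the determinant times the original matrix, which is presumably what the pictures below work out in the language of 3D numbers.

```python
# Check with sympy that for a generic 3x3 matrix the adjugate of the adjugate
# equals det(A) times A, so "conjugate determinant style" twice scales by the determinant.
from sympy import MatrixSymbol, Matrix

A = Matrix(MatrixSymbol('a', 3, 3))       # a generic symbolic 3x3 matrix
twice = A.adjugate().adjugate()
print((twice - A.det() * A).expand())     # zero matrix
```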
It is well known by now (I hope anyway) that the 3D complex and circular numbers contain a set of numbers with a determinant of zero; you can't find an inverse for them. To be precise: if you take some circular 3D number, say X, and you take a limit where you send X toward a non-invertible number, you know the inverse will blow up to infinity. But the conjugate 'determinant style' does not blow up; on the contrary, in the previous post we observed that taking this kind of conjugate gives an extra zero eigenvalue.
In terms of linear algebra: if a square matrix is not invertible, its adjoint is 'even more' non-invertible, because a lot of the eigenvalues of that matrix turn to zero.
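Here is a quick sympy sketch of that effect (the same thing the appendix further below does by hand for a 5×5 diagonal matrix):

```python
# Adjugate of a 5x5 diagonal matrix: each eigenvalue is replaced by the
# product of the other four, so one zero on the diagonal kills four of
# the five eigenvalues of the adjugate.
from sympy import symbols, diag

d1, d2, d3, d4, d5 = symbols('d1 d2 d3 d4 d5')

D = diag(d1, d2, d3, d4, d5)
print(D.adjugate())             # diag(d2*d3*d4*d5, d1*d3*d4*d5, ..., d1*d2*d3*d4)

D0 = diag(0, d2, d3, d4, d5)    # make the matrix non-invertible
print(D0.adjugate())            # only the first diagonal entry survives; the rest are zero
```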
And although the inverse blows up to infinity, the cute result is that it blows up in a very specific direction. After all, it is the fact that the determinant goes to zero that blows the whole thing up; the conjugate 'determinant style' is as continuous as can be around zero…
It’s a miracle but the math is not that hard this time.
Four pictures for now, and I plan on a small one-picture addendum later. So let's go:
All in all this is not a deep math post, but it was fun to look at anyway. Maybe a small appendix will be added later, so maybe see you in an update inside this post or otherwise in some new post. Added Sunday 16 April: a small appendix where you can see what the adjoint-taking process does to the eigenvalues of a 5×5 diagonal matrix. The appendix was just over one picture long, so I had to spread it out over two pictures. You understand quickly what the point is if you calculate a few of the determinants of those minor matrices. Remark that with a 5×5 matrix all such minors are 4×4 matrices, so this is the standard setting and not like that advanced theorem of Pythagoras stuff. Well, it all speaks for itself:
Ok, that was it for this update. Thanks for your attention and see you in another post.