Category Archives: Matrix representation

The number tau for the hyperbolic multiplication i^2 = -1 + 3i.

Some posts ago I showed you how to calculate the number tau (always the logarithm of a suitable imaginary unit) using integrals for an elliptic multiplication. To be precise: you can integrate the inverse of the numbers along a path and that gives you the log. Just like on the real line, if you start integrating at 1 and integrate 1/x you will get log(x). If you have read that post you know or remember that those integrals look rather scary. And the method of using integrals is at its simplest in the 2D plane; in 3D real space those integrals are a lot harder to crack. And if the dimension goes beyond 3 it gets worse and worse.
That is why many years ago I developed a method that always works in all dimensions, and that is matrix diagonalization. If you want the log of an imaginary unit, you can diagonalize its matrix representation. And ok ok, that too becomes a bit more cumbersome when the dimensions rise. I once calculated the number tau for seven dimensional circular numbers, or if you want for 7D split complex numbers. As you might have observed for yourself: for a human such calculations are a pain in the ass because just the tiniest of mistakes leads to the wrong answer. It is just like multiplying two large numbers by hand with paper and pencil: one digit wrong and the whole thing is wrong.
Now we are going to calculate a log in a 2D space, so wouldn't it be handy if we knew beforehand in what direction this log will point? After all, a 2D real space is also known as a plane, and in a plane we have vectors and stuff.

So for the very first time after 12 years of not using it, I decided to include a very simple idea of a guy named Sophus Lie. When back in the year 2012 I decided to pick up my ideas around higher complex numbers again, of course I looked into whether I could use anything from the past. And without doubt the math related to Sophus Lie was the most promising, because all the other stuff was contaminated by those evil algebra people who at best use the square of an imaginary unit.
But I decided not to do it, because yes indeed those Lie groups were smooth so it was related to differentiation, but it also had weird stuff like the Lie bracket that I had no use for. Besides that, in Lie groups and Lie algebras there are no Cauchy-Riemann equations. As such I just could not use it and I decided to go my own way.
Yet in this post I use a simple idea of Sophus Lie: if you differentiate the group at 1, that vector will point in the direction of the logarithm of the imaginary unit. It's not a very deep math result but it is very helpful. Compare it to a screwdriver: a screwdriver is not complicated machinery but it can be very useful in case you need to drive some screws…

Anyway, for the multiplication in the complex plane ruled by i^2 = -1 + 3i I used the method of matrix diagonalization to get the log of the imaginary unit i. So all in all it is very simple, but I needed 8 pictures to pen it all down and also one extra picture known as Figure 1.
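For readers who want to check the diagonalization step on a computer, here is a minimal sketch in Python with NumPy. The encoding is my own: multiplication by i sends 1 to i and i to -1 + 3i, which gives the columns of the 2×2 matrix below. Its eigenvalues (3 ± √5)/2 are both real and positive, so the number tau comes out as a real matrix.

```python
import numpy as np

# Matrix representation of the imaginary unit i for i^2 = -1 + 3i:
# multiplication by i sends 1 -> i and i -> -1 + 3i (the two columns).
M = np.array([[0.0, -1.0],
              [1.0,  3.0]])

# Diagonalize: M = V D V^-1. The eigenvalues are real and positive here,
# so tau = V log(D) V^-1 is a real 2x2 matrix, the log of i.
w, V = np.linalg.eig(M)
tau = V @ np.diag(np.log(w)) @ np.linalg.inv(V)

# Check: exponentiating tau (truncated power series) must return M.
E = np.zeros((2, 2))
term = np.eye(2)
for k in range(1, 30):
    E = E + term
    term = term @ tau / k
print(np.allclose(E, M))  # True
```

The same recipe works in any dimension, which is exactly why diagonalization beats those scary integrals once the dimension rises.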

Figure 1.

That was it for this post; we now have a number tau that is the logarithm of the imaginary unit i that rules the multiplication on this complex plane. The next post is about finding the parametrization for the hyperbola of determinant 1 using this number tau.
As always thanks for your attention and see you in the next post.

Comparison of the conjugate on five different spaces.

To be a bit precise: I think two spaces are different if they have a different form of multiplication defined on them. Now everybody knows the conjugate: you have some complex number z = x + iy and the conjugate is given by x – iy. As such it is very simple to understand; real numbers stay the same under conjugation, and if a complex number has an imaginary component, that gets flipped across the real axis.

But a long long time ago, when I tried to find the conjugate for 3D complex numbers, this simple flip did not work. You only get advanced gibberish, so I took a good deep look at it. And I found that the matrix representation of some complex number z = x + iy has an upper row that you can view as the conjugate. So I tried the upper row of my matrices for the 3D complex and circular numbers and voilà: instead of gibberish, for the very first time I found what I nowadays name the “Sphere-cone equation”.

I never gave it much thought anymore because it looked like problem solved, this works forever. But a couple of months ago, when I discovered those elliptic and hyperbolic versions of 2D numbers, my solution of taking the upper row did not work. It did not work in the sense that it produced gibberish, so once more I had to find out why I was so utterly stupid one more time. At first I wanted to explain it via exponential curves, or as we have them for 2D and 3D complex numbers: a circle that is the complex exponential. And of course what you want, if you have some parametrization of that circle, is that taking the conjugate makes stuff run back in time. Take for example e^it in the standard complex plane where the multiplication is ruled by i^2 = -1. Of course you want the conjugate of e^it to be e^-it, or time running backwards.

But after that it dawned on me that there is a simpler explanation, one that at the same time covers the explanation with complex exponentials (or exponential circles as I name them in low dimensions n = 2, 3). And that simpler thing is that taking the conjugate of any imaginary unit always gives you the inverse of that imaginary unit.

And finding the inverse of imaginary units in low dimensions like 2D or 3D complex numbers is very simple. An important reason why I have been looking into those elliptic complex 2D numbers lately is the cute fact that if you use the multiplication rule i^2 = -1 + i, the third power is minus one: i^3 = -1. And you do not have to be a genius to find out that the inverse of this imaginary unit i is then given by -i^2.
If you use the idea that the conjugate is the inverse of the imaginary unit on those elliptic and hyperbolic versions of the complex plane, then multiplying z against its conjugate always gives the determinant of the matrix representation.
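As a quick sanity check of that claim, here is a sketch in plain Python for the elliptic multiplication i^2 = -1 + i. Numbers are stored as pairs (x, y) meaning x + iy, and the matrix representation with columns for 1 and i is my own encoding of the usual convention, so treat this as an illustration.

```python
# Elliptic multiplication ruled by i^2 = -1 + i:
# (a + bi)(c + di) = ac + (ad + bc)i + bd(-1 + i)

def mul(z, w):
    a, b = z
    c, d = w
    return (a*c - b*d, a*d + b*c + b*d)

def conj(z):
    # conjugate = replace i by its inverse, i^(-1) = -i^2 = 1 - i
    x, y = z
    return (x + y, -y)

def det(z):
    # determinant of the matrix representation [[x, -y], [y, x + y]]
    x, y = z
    return x*(x + y) + y*y

i = (0, 1)
print(mul(i, i))                # (-1, 1): indeed i^2 = -1 + i
print(mul(mul(i, i), i))        # (-1, 0): indeed i^3 = -1
z = (3, 5)
print(mul(z, conj(z)), det(z))  # (49, 0) and 49: z times conjugate = det
```

So z times its conjugate lands on the real line and equals the determinant, exactly as stated above.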
For me this is a small but significant win over the professional math professors who, like a broken vinyl record, keep on barking out: “The norm of the product is the product of the norms”. Well no no, overpaid weirdos, it's always determinants. And because the determinant on the ordinary complex plane is given by x^2 + y^2, that is why the math professors have barked their product norm song for so long.

Anyway, because I found this easy way of explaining it I was able to cram five different spaces into just seven images. Now for me it is very easy to jump in my mind from one space to the other, but if you are a victim of the evil math professors you only know about the complex plane and maybe some quaternion stuff; for the rest your mind is empty. That could give you a bit of trouble jumping between the spaces yourself, because say 3D circular numbers are not something at the forefront of your brain tissue. In that case just look at what you understand and build upon that.

All that's left for me to do is to hang in the seven images that make up the math kernel of this post. I made them a tiny bit higher this time; the sizes are 550×1250. A graph of the hyperbolic version of the complex exponential can be found in the seventh image. Have fun reading it and let me hope that you, just like me, have learned a bit from this conjugate stuff.
The picture text already starts wrong: it's five spaces, not four…

At last I want to remark that the 2D hyperbolic complex numbers are beautiful to see. But why should there be a complex exponential here, while the split complex numbers of the overpaid math professors do not have one?
Well, that is because the determinant of the imaginary unit must be +1 and not -1 like we have for those split complex numbers of the overpaid math professors. Let's leave it at that, and may I thank you for your attention if you are still awake by now.

2D elliptical and hyperbolic multiplications.

If you change the way the multiplication in the complex plane works, instead of a unit circle as the complex exponential you get ellipses and hyperbolas. In this post I give a few examples: where usually the complex plane is ruled by i^2 = -1, we replace that by i^2 = -1 + i and i^2 = -1 + 3i.
In the complex plane the unit circle is often defined as the set where the complex variable z multiplied against its conjugate equals one.
There is nothing wrong with that, only it leads to what is often told in class or college, namely: the norm of a product of two complex numbers is the product of the norms. And ok ok, on the complex plane this is true, but in all the other spaces that I equipped with a multiplication it was never true. It is the determinant that does all the work, because after all on the complex plane the determinant of the matrix representation of the complex variable z is x^2 + y^2. (Here as usual z = x + iy for real valued variables x and y.)

Therefore in this post we will solve det(z) = 1 for the two modified multiplications we will look at. I chose the two multiplications so that in both cases det(i) = 1. That has the property that if we multiply z against i, the determinant stays the same: det(iz) = det(z).

I simply name a complex z with integer x and y an integer too; a more precise name would be Gaussian integers, to distinguish them from the integers we use on the real line. Anyway I do not think it is confusing; it is rather logical to expect a point in the plane with integer coordinates to be an integer point, or an integer 2D complex number z.

Besides the ellipses and hyperbolas defined by det(z) = 1, of course there are many more, for example those defined by det(z) = 3. Suppose we have some integer point z on say det(z) = 3; if we multiply that z by i we stay on that curve. Furthermore such a point iz will always be an integer point too, because after all a product of integers is always an integer itself.
That is more or less the main result of this post: by multiplication with the modified imaginary unit i you hop through the other integer points of such an ellipse or hyperbola.
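The hopping can be watched in a few lines of plain Python. Below I take the elliptic multiplication i^2 = -1 + i, where det(z) = x^2 + xy + y^2, and start at the integer point z = 1 + i on the ellipse det(z) = 3; the pair encoding (x, y) for x + iy is my own. Since i^3 = -1 we have i^6 = 1, so six hops bring us back to the start.

```python
# Hopping through integer points of the ellipse det(z) = 3
# under the elliptic multiplication i^2 = -1 + i.

def mul(z, w):
    a, b = z
    c, d = w
    return (a*c - b*d, a*d + b*c + b*d)   # since i^2 = -1 + i

def det(z):
    x, y = z
    return x*x + x*y + y*y

i = (0, 1)
z = (1, 1)
for _ in range(6):
    print(z, det(z))   # det stays 3 at every hop
    z = mul(i, z)
print(z)               # back at (1, 1) because i^6 = 1
```

The six points visited are (1, 1), (-1, 2), (-2, 1), (-1, -1), (1, -2) and (2, -1), which are in fact all the integer points of this particular ellipse.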
(By the way, the plural of hyperbola is hyperbolas; a hyperbole is a figure of speech and not a curve.)

What I found curious at first is the fact that expressions like z = -3 + 8i can have an integer inverse. But it has its own unavoidable logic: the 2×2 matrix representation contains only (real) integers, and if the determinant is one the inverse matrix will have no fractions whatsoever. The same goes for any square matrix with integer entries: if the determinant is one, the inverse will also be a matrix with only integer entries.
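Here is a sketched check of that integer inverse, assuming the example z = -3 + 8i lives on the hyperbolic plane ruled by i^2 = -1 + 3i (there indeed det(z) = x^2 + 3xy + y^2 = 9 - 72 + 64 = 1). The inverse is read off from the adjugate of the matrix representation [[x, -y], [y, x + 3y]].

```python
# Integer inverse of z = -3 + 8i under the multiplication i^2 = -1 + 3i.

def mul(z, w):
    a, b = z
    c, d = w
    return (a*c - b*d, a*d + b*c + 3*b*d)   # i^2 = -1 + 3i

def det(z):
    x, y = z
    return x*x + 3*x*y + y*y

def inverse(z):
    # adjugate of [[x, -y], [y, x + 3y]] divided by the determinant;
    # with det(z) = 1 the entries stay integers
    x, y = z
    d = det(z)
    return ((x + 3*y) / d, -y / d)

z = (-3, 8)
print(det(z))              # 1
print(inverse(z))          # (21.0, -8.0), i.e. 21 - 8i
print(mul(z, inverse(z)))  # (1.0, 0.0)
```

So the inverse of -3 + 8i is the integer number 21 - 8i, just as the adjugate argument above promises.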

This post is six pictures long, each of size 550×1100, plus three additional screen shots where I used the Desmos graphics package for drawing ellipses and a hyperbola. At last I want to remark that I estimate the results shown here are not new: the math community investigates so-called Diophantine equations (those are equations where you look for integer solutions), and as such a lot of people have likely found that there are simple linear relations between those integer solutions. Likely the only thing new here is that I modify the way the complex number i behaves as a square; as far as I know math folks never do that.
So let me try to upload the pictures and I hope you have fun reading it.

Funny detail: i^3 = -1.

Ok, that was it for this post. I hope you liked it and learned a bit of math from it. I do not have a good category for 2D numbers, so I file this under ‘Matrix representation’ because those determinants do not fall from the sky, and also under ‘uncategorized’.
Thanks for your attention and see you in a new post.

Addendum added 09 Dec 2023: I made a picture for the other website, but since I made it already, why not hang it in here too? See picture 05 above, where we looked at when you get an elliptic multiplication and when the hyperbolic version. In the picture below you see a rather weird complex exponential: a straight line. And the powers of i just hop over all those integer values on that line. The multiplication here is defined by i^2 = -1 + 2i. All positive powers hop to the left and upwards, the inverses go the other way. For example the inverse of i equals 2 – i.
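This degenerate case is easy to check by hand or in a few lines of Python (the pair encoding is mine again). For i^2 = -1 + 2i the determinant is x^2 + 2xy + y^2 = (x + y)^2, so det(z) = 1 is the pair of lines x + y = ±1, and the powers of i walk along the line x + y = 1.

```python
# The straight-line 'complex exponential' for i^2 = -1 + 2i.

def mul(z, w):
    a, b = z
    c, d = w
    return (a*c - b*d, a*d + b*c + 2*b*d)   # i^2 = -1 + 2i

i = (0, 1)
z = (1, 0)
for k in range(5):
    print(k, z, z[0] + z[1])   # x + y stays 1 for every power of i
    z = mul(i, z)

print(mul(i, (2, -1)))         # (1, 0): the inverse of i is 2 - i
```

The positive powers come out as (1, 0), (0, 1), (-1, 2), (-2, 3), (-3, 4), …: to the left and upwards, exactly as described above.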

Who would have thought that a complex exponential can be a line?

Ok, that was it for this post. Thanks for your attention.

General theory Part 2: On a matrix named big E.

On this entire website, when I talked about a matrix representation it was always meant as a representation that mimics the multiplication on a particular space, say the 3D complex numbers. Making such matrices has the benefit that you can apply all kinds of linear algebra, like matrix diagonalization, or finding eigenvalues (the eigenvalue functions) and so on and so on. So the matrix representation was always the representation of a higher dimensional number.
Big E is very different: this matrix describes the multiplication itself. As such it contains all possible products of two basis vectors, and since this is supposed to be general theory I wrote it in the form of an n×n matrix. For people who like writing computer code: if you can implement this thing properly, you can make all kinds of changes to the multiplication. As a matter of fact you can choose whatever you want the product of two basis vectors to be. So in that sense it is much more general than just the complex or the circular multiplication.
I do not like writing computer code that much myself, but I can perfectly understand people who do. After all, every now and then even I use programs like PARI, and without people who like to write code such free programs would simply not exist.
The math in this post is highly descriptive, the kind of math that I do not like most of the time, but now that I have finally written this matrix down it was fun to do. If you are just interested in some fixed number space, say the 3D or 4D complex numbers, this concept of big E is not very useful. It is handy when you want to compare a lot of different multiplications in the same dimension, and as such it could be a tool that comes in handy.

The entries of this matrix big E are the products of two basis vectors, so this is very different from your usual matrix that often only contains real numbers or, in more advanced cases, complex numbers from the complex plane. I think it could lead to some trouble if you try to write code where the matrix entries are vectors; an alternative would be to represent big E as n square n×n matrices, but that makes it a bit harder to oversee.

Maybe in linear algebra you have seen so-called quadratic forms using a symmetric matrix. That is a way to represent all quadratic polynomials in n variables. Big E looks a lot like that, only you now have vectors as entries.
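For the programmers: one way around the "entries are vectors" trouble is to store big E as an n×n×n array, where E[i, j] is the coordinate vector of the product of basis vectors e_i and e_j. This is a sketch of that idea, not the notation from the pictures; the helper names are mine, and I fill in the ordinary complex plane as the example multiplication.

```python
import numpy as np

def make_E_complex():
    # big E for the ordinary complex plane: basis (1, i), i^2 = -1
    E = np.zeros((2, 2, 2))
    E[0, 0] = [1, 0]    # 1 * 1 = 1
    E[0, 1] = [0, 1]    # 1 * i = i
    E[1, 0] = [0, 1]    # i * 1 = i
    E[1, 1] = [-1, 0]   # i * i = -1
    return E

def mul(E, z, w):
    # z * w = sum over i, j of z_i * w_j * (e_i e_j)
    return np.einsum('i,j,ijk->k', z, w, E)

E = make_E_complex()
z = np.array([1.0, 2.0])   # 1 + 2i
w = np.array([3.0, 1.0])   # 3 + i
print(mul(E, z, w))        # [1. 7.], since (1 + 2i)(3 + i) = 1 + 7i
```

To change the multiplication you only change the entries of E; and if E is symmetric in its first two indices, the resulting multiplication commutes, which matters for the Cauchy-Riemann remarks below.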

I chose the number 1 to be the very first basis vector, so that gives the real line in that particular space. Of course one of the interesting details is that all analytic functions you know from the real line can easily be extended to all the other spaces you want. For example the sine and cosine or the exponential function live in all kinds of spaces in all kinds of dimensions. As such it is much, much broader than only a sine on the real line and the complex plane.
This post is five pictures long, each 550×1100 pixels. I made them a bit larger because I use a larger font compared to a lot of old posts. There are hardly any mathematical results in this post because it is so descriptive. Compare it to the definition of what a group is without many examples: the definition is often boring to read and only comes alive when you see good examples of the math involved.

If you want to try for yourself and do a bit of complex analysis on higher dimensional spaces, ensure your big E matrix is symmetric. In that case the multiplication commutes, that is AB = BA always. If you also ensure all basis vectors are invertible, you can find the so-called Cauchy-Riemann equations for your particular multiplication. Once you have your set of CR equations you can differentiate all you want and also define line integrals (line integrals are actually taken along a curve, but that does not matter).
A simple counterexample would be the 4D quaternions: they do not commute, and as such it is not possible to conduct any meaningful complex analysis on the space of quaternions.
End of this post and thanks for your attention.

Proof that Z^2 = -1 cannot be solved on real spaces with an odd dimension. (General theory part 1.)

Finally, after all those years, something of a more general approach to multiplication in higher dimensions? Yes, but at the same time I remark that you should not learn or study higher dimensional numbers that way. You had better pick a particular space like the 3D complex numbers, find a lot out about them, and then move on to say 4D or 5D complex numbers and repeat that process.
The problem with a more general approach is that those spaces are just too different from each other, so it is hard to find properties that all of those spaces share. It is like making theory for the complex plane and the split complex numbers at the same time: not a good idea, because they behave very differently.
The math in this post is utterly simple: basically I only use that the square of a real number, this time a determinant, cannot be negative. The most complicated thing I use is the rule that says the determinant of a square is the square of the determinant, as in det(Z^2) = det(Z)^2.
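The whole argument fits in one line. Assuming the matrix representation of -1 is minus the n×n identity matrix, so that det(-1) = (-1)^n, a solution Z of Z^2 = -1 in an odd dimension n would give

```latex
\det(Z)^2 \;=\; \det(Z^2) \;=\; \det(-1) \;=\; (-1)^n \;=\; -1 \qquad (n \text{ odd}),
```

which is impossible because det(Z) is a real number and its square cannot be negative.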

This post is only 3.5 pictures long so I added some extra stuff like the number tau for the 4D complex numbers and my old proof from 2015 that on the space of 3D complex numbers you can’t solve X^2 = -1.

I hope it’s all a bit readable so here we go:

Oops, this is the circular multiplication… Well, replace j^3 = 1 by j^3 = -1 and do it yourself if you want to.

So all in all my goal was to carry the impossibility of x^2 being negative on the real line over to the more general setting of n-dimensional numbers. As such the math in this post is not very deep, it is as shallow as possible. Ok ok, maybe that 4D tau is some stuff that makes math professors see water burning, because they only have the complex plane.
Let me end this post by thanking you for making it to the end: you have endured weird looking robots without getting mentally ill! Congratulations!
At the end a link to that old file from 2015:

What is the repeated conjugate ‘determinant style’? (Also: Repeated adjoint matrix.)

This post is easy going for most people who have mastered the art of finding the inverse of a square matrix using the so-called adjoint matrix. I was curious what happens to a 3D circular number if you take the conjugate ‘determinant style’ twice. In terms of standard linear algebra this is the same as taking the adjoint of the adjoint of a square matrix.

It is well known by now (I hope anyway) that the 3D complex and circular numbers contain a set of numbers with a determinant of zero; you can't find an inverse for them. To be precise: if you take some circular 3D number, say X, and you make some limit where you send X to a non-invertible number, you know the inverse will blow up to infinity.
But the conjugate ‘determinant style’ does not blow up. On the contrary, in the previous post we observed that taking this kind of conjugate gives an extra zero eigenvalue.

In terms of linear algebra: if a square matrix is not invertible, its adjoint is ‘even more’ non-invertible because a lot of the eigenvalues of that matrix turn to zero.
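For invertible matrices the adjoint-of-the-adjoint can be checked numerically: for a 3×3 matrix A the classical identity reads adj(adj(A)) = det(A) · A. The sketch below is my own check, using the shortcut adj(A) = det(A) · inv(A), which is only valid for invertible A; the example matrix is an arbitrary pick with det(A) = 7.

```python
import numpy as np

def adjugate(A):
    # valid only for invertible A: adj(A) = det(A) * inv(A)
    return np.linalg.det(A) * np.linalg.inv(A)

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

lhs = adjugate(adjugate(A))
rhs = np.linalg.det(A) * A
print(np.allclose(lhs, rhs))   # True: adj(adj(A)) = det(A) * A for 3x3
```

This identity also makes the blow-up story plausible: as det(A) goes to zero, adj(adj(A)) goes to zero as well, while the inverse explodes.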

And although the inverse blows up to infinity, the cute result found is that it blows up in a very specific direction. After all it is the determinant going to zero that blows the whole thing up; the conjugate ‘determinant style’ itself is as continuous as can be around zero…

It’s a miracle but the math is not that hard this time.

Four pictures for now and I plan on a small one picture addendum later.
So let's go:

Isn't it cute? This infinity has a direction, namely the number alpha… ;)
Small correction: It should be taking the conjugate TWICE…

All in all this is not a deep math post, but it was fun to look at anyway. Maybe a small appendix will be added later, so until updates inside this post or otherwise in some new post.
Added Sunday 16 April: A small appendix where you can see what the adjoint-taking process does with the eigenvalues of a 5×5 diagonal matrix. The appendix was just over one picture long, so I had to spread it out over two pictures. You will understand the point fast if you calculate a few of the determinants of those minor matrices. Remark that with a 5×5 matrix all such minors are 4×4 matrices, so this is the standard setting and not like that advanced theorem of Pythagoras stuff.
Well it all speaks for itself:

Ok, that was it for this update. Thanks for the attention and see you in another post.

Factorization of the determinant inside the space of 3D circular numbers. Aka: The conjugate ‘determinant style’.

A few weeks back I was thinking of finally writing some posts about general theory for spaces of arbitrary dimension. It soon dawned on me that the first post should be about the impossibility of solving X^2 = -1 on spaces of odd dimension, for both the complex and the circular method of multiplication on those spaces. So post number one should be about the fact that the famous number i does not exist in spaces with dimensions 1, 3, 5 etc.
And what about the second post? Well, you can always factorize the determinant inside such spaces. That is a very interesting observation, because the determinant is also the product of all eigenvalues. These eigenvalues traditionally live in the complex plane, and as such a naive math professor could easily think that the determinant can only be factorized inside the complex plane. So that would be a reasonable post number two.
In all these years I did such a factorization only once, so I decided to do it again, and that is this post. The basic idea is very simple: if you want to find an expression for the inverse of a general 3D circular number, you need the determinant of that number. From that you can easily find a factorization of the determinant. It's as simple as it is efficient.

But now that I have repeated it in the space of 3D circular numbers, I discovered that part of this factorization behaves very interestingly when you restrict yourself to the subset of all 3D circular numbers that are not invertible. That part is the conjugate ‘determinant style’. The weird result is that taking this kind of conjugate increases the number of eigenvalues that are zero. So this form of conjugation transports circular numbers with only one eigenvalue zero to the sub-space of numbers with two eigenvalues zero.

For years I have been avoiding writing general theory because I considered it better to take one space at a time and look at the details of just that one space. Maybe that still is the best way to go, because now I have this new transporting detail for what would only be the second post of a general theory, and it looks like it is very hard to prove such a thing in a general setting.

Luckily the math content of this post is not deep, in the sense that if you know how to find the inverse of a square matrix, you quickly understand what is going on at the surface. But what happens at the level of non-invertibles is mind blowing: what the hell is going on there, and is it possible to catch that in some form of general theory?

I tried to keep it short but all in all it grew to a nice patch of math that is 8 pictures long. Here is the stuff:

At the end of this post I want to remark that the quadratic behaviour of our conjugate ‘determinant style’ is caused by the fact that it was done on a 3D space. If for example you are looking at 17-dimensional numbers, complex or circular, this method of taking a conjugate is a degree 16 beast in 17 variables. How to prove that all non-invertible numbers get transported to more and more eigenvalues zero?

Maybe it is better to skip the whole idea of crafting a general theory once more and only look at the beautiful specifics of the individual spaces under consideration.

End of this post and thanks for your attention.

Solving the ‘Speed = The Square’ equation on four different spaces.

With ‘speed = square’ I simply mean that the speed is a vector made up of the square of where you are. The four spaces are:
1) The real line,
2) The complex plane (2D complex numbers),
3) The 3D circular numbers and
4) The 3D complex numbers.

I will always write the solutions as dependent on time, so on the real line a solution is written as x(t), on the complex plane as z(t) and on both 3D number spaces as X(t). And because it looks rather compact I also use the Newtonian dot notation for the derivative with respect to time. It has to be remarked that Newton often used this notation for natural objects with some kind of speed (didn't he name those fluxions or so?).
Anyway this post has nothing to do with physics; here we just perform an interesting mathematical exercise: we look at what happens when points always have a speed that is the square of their position.

On every space I give only one solution, that is a curve with a specific initial value, mostly the first imaginary component of that space. Of course on the real line the initial condition must be a real number, because the real line lacks imaginary stuff.

If you go through the seven pictures of this post, ask in the back of your mind questions like: why does this all work? Well, that is because the time domains we are using are made of real numbers and, that is important, the real line is also a part of the complex and circular number systems.
The other way around, you can argue that the geometric series stuff we use can also be extended from the real line to the three other spaces. To be precise: we don't use the geometric series itself but the fractional function that represents it.
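On the ordinary complex plane the fractional function in question is z(t) = z0 / (1 - z0 t), which solves the speed = square equation with z(0) = z0. Here is a small numerical sanity check of that, with initial value z0 = i as in the post; the finite-difference test is my own way of checking, not the method from the pictures.

```python
# Check that z(t) = z0 / (1 - z0*t) satisfies dz/dt = z^2 on the
# complex plane, with initial value z0 = i.

z0 = 1j

def z(t):
    return z0 / (1 - z0 * t)

# central finite difference of z at t should match z(t)^2
t, h = 0.3, 1e-6
speed = (z(t + h) - z(t - h)) / (2 * h)
print(abs(speed - z(t)**2) < 1e-6)   # True: speed equals the square
```

The same fractional formula works verbatim on the 3D complex and circular spaces, because the real time variable t commutes with everything there.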

Ok, lets go to the seven pictures:

That Newton dot notation just looks so cute…
The words ‘Analytic continuation’ are not completely correct…

Remark: this post is not deep mathematics or so. Every time we start with a function of which we know that if you differentiate it, you get the square. After that we look at its coordinate functions and shout in bewilderment: wow, that gives the square, it is a God given miracle!

No, these are not God given miracles, but I did an internet search on the next phrase of LaTeX code: \dot{z} = z^2. To my surprise nothing of interest popped up in the Google search results. So I wonder if this is just one more case of low hanging math fruit that is not plucked by math professors? Who knows?

End of this post, thanks for your attention.

A detailed sketch of the full theorem of Pythagoras (that matrix version). Part 2 of 2.

The reason I name these two posts a sketch of (a proof of) the full theorem of Pythagoras is that I want to leave out all the hardcore technical details. After all it should be a bit readable, and this is not a hardcore technical report or so. Besides this, making those extra matrix columns until you have a square matrix is so simple to understand: it is just not possible that this method does not return the volume of those parallelepipeds.
I added four more pictures to this sketch, which should cover more or less what I skipped in the previous post. For example we start with a non-square matrix A, turn it into a square matrix M, and this matrix M always has a non-negative determinant. But in many introductory courses on linear algebra you are taught that if you swap two columns in a matrix, the determinant changes sign. So why does this not happen here? Well, if you swap two columns in the parallelepiped A, the newly added columns that make it a square matrix change too. So the determinant always keeps on returning the positive value of the volume of any parallelepiped A. (I never mentioned that all columns of A must be linearly independent, but that is logical: we only look at stuff with a non-zero volume.)

Just an hour ago I found another pdf on the Pythagoras stuff and the author Melvin Fitting has also found the extension of the cross product to higher dimensions. At the end of this post you can view his pdf.

Now what all these proofs of the diverse versions of Pythagorean theorems have in common is that you have some object, say a simplex or a parallelepiped, and the proofs always need the technical details that come with such an object. But a few posts back, when I wrote on it for the first time, I remarked that those projections of a parallelogram in 3D space always make it shrink by a constant factor. See the post ‘Is this the most simple proof ever?’ for the details. And there it simply worked for all objects you have, as long as they are flat. Maybe in a next post we will work that out for the matrix version of the Pythagoras stuff.

Ok, four more pictures as a supplement to the first part. I am too lazy to repair it, but this is picture number 8 and not number 12:

Ok, let's hope the pdf is downloadable:

That was it for this post. Thanks for your attention.

A detailed sketch of the full theorem of Pythagoras (that matrix version). Part 1 of 2.

For me it was a strange experience to construct a square matrix where, if you take the determinant of the thing, the lower dimensional volume of some other thing comes out. In this post I will calculate the length of a 3D vector using the determinant of a 3×3 matrix.
Now why was this a strange experience? Well, from day one in the lectures on linear algebra you are taught that the determinant of say a 3×3 matrix always returns the 3D volume of the parallelepiped that is spanned by the three columns of that matrix.
But it is all relatively easy to understand. Suppose I have some vector A in three dimensional space. I put this vector in the first column of a 3×3 matrix. After that I add two more columns that are both perpendicular to each other and to A. Then I normalize the two added columns to one. And if I now take the determinant, you get the length of the first column.
That calculation is actually an example below in the pictures.
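The 3D case of that construction can be sketched in a few lines of NumPy; the helper vector and the use of cross products to manufacture the two perpendicular unit columns are my own choices, not necessarily the construction from the pictures.

```python
import numpy as np

# Put a 3D vector A in the first column, add two unit columns that are
# perpendicular to A and to each other; the determinant returns |A|.

A = np.array([1.0, 2.0, 2.0])          # |A| = 3

helper = np.array([1.0, 0.0, 0.0])     # any vector not parallel to A
c1 = np.cross(A, helper)
c1 = c1 / np.linalg.norm(c1)           # unit, perpendicular to A
c2 = np.cross(A, c1)
c2 = c2 / np.linalg.norm(c2)           # unit, perpendicular to A and c1

M = np.column_stack([A, c1, c2])
print(np.linalg.det(M), np.linalg.norm(A))   # both (approximately) 3.0
```

With this ordering of the columns the determinant even comes out positive, which is a small preview of the sign discussion in part 2.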

Well, you can argue that this is a horribly complicated way to calculate the length of a vector, but it works for all parallelepipeds in all dimensions. You can always make a non-square matrix, say an n×d matrix with n rows and d columns, square. Such an n×d matrix can always be viewed as some parallelepiped if it doesn't have too many columns. So d must never exceed n, because otherwise it is not a parallelepiped thing.

Orthogonal matrices. In linear algebra the name orthogonal matrix is a bit strange. It is more than the columns being orthogonal to each other; the columns must also be normalized. Ok ok, there are reasons that inside linear algebra it is named orthogonal: if you name it an orthonormal matrix it is clearer that the norms must be one, but then it is vaguer that the columns are perpendicular. So an ‘orthogonal normalized’ matrix would be a better name, but in itself that is a very strange term and people might think weird things about math people.
Anyway, apart from the historical development of linear algebra, I would like to introduce the concept of perpendicular matrices where the columns are not normalized but are perpendicular to each other. In this post we will always have some non-square matrix A and we add perpendicular columns until we have a square matrix.

Another thing I would like to remark is that I always prefer to give some good examples and try not to be too technical. So I give a detailed example of a five dimensional vector and how to make a 5×5 matrix from it whose determinant is the length of our starting 5D vector.
I hope that is much more readable compared to some highly technical writing that is hard to read in the first place, where the key ideas are hard to find because it is all so hardcore.

This small series of posts on the Pythagoras stuff was motivated by a pdf from Charles Frohman (you can find downloads in previous posts); he proves what he names the ‘full’ theorem of Pythagoras via calculating the determinant of A^tA (here A^t represents the transpose of A) in terms of a bunch of minors of A.
A disadvantage of my method is that it is not that transparent as to why we end up with that bunch of minors of A. On the other hand, the adding of perpendicular columns is just so cute from the mathematical point of view that it is good to compose this post about it.

The post is eight pictures long, so after a few days of thinking you can start to understand why this expansion of columns is, say, ‘mathematically beautiful’, where of course I will not define what ‘math beauty’ is because beauty is not a mathematical thing. Here we go:

With ‘outer product’ I mean the 3D cross product.

Inside linear algebra you could also name this the theorem of the marching minors. But hey, it is time to split and see you in the next post.