On the degree of expansion columns & can the expansion go wrong? (Pythagoras, matrix version.)

To be honest this post is not carefully thought through. I felt like starting to write, and at first I wanted to take it more in the direction of a proof based on the volumes of the parallelepiped and its expansion columns. But then I realized that my cute calculating scheme for turning an n×d matrix A into a square n×n matrix AP could go wrong. So I wanted to address that detail too, but I hadn't thought it out enough.

The answer to why it can go wrong is rather beautiful: take any n×d matrix (of course the number of columns cannot exceed n, because then it is not a proper d-dimensional parallelepiped), say a 10×3 matrix. The three columns span a parallelepiped in 10-dimensional real space.
The 'smallest' parallelepiped that still has a three-dimensional volume is one of the many minors possible in the 10×3 matrix. So by 'smallest' I mean the parallelepiped that uses the least number of coordinates in our 10-dimensional space.
Now in my calculating scheme, an algorithm if you want, I told you to start at the top of the matrix A and add more and more columns. But if A is made of just one 3×3 minor, say at the bottom of A, it is crystal clear my calculating scheme goes wrong, because it now produces only zero columns.

And if that happens, when in the end you take the determinant of the square matrix AP, you get zero, and of course that is wrong. These are exceptional cases, but they have to be addressed.


Of course there is no important reason for the calculation scheme to start at the top of the matrix; just start at the position of the lone 3×3 minor. In general: if you start with an n×d matrix, ensure your first expansion column is not a zero column. After that, the next expansions should all go fine.
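For readers who like to experiment, the scheme plus the safeguard can be sketched in a few lines of pure Python. The function names and the small 4×2 example are mine, not from the pictures; the safeguard simply tries every standard basis vector as raw material for the next expansion column and skips the ones whose residual is a zero column:

```python
import math

def det(m):
    # determinant via recursive Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def extend_to_square(cols, eps=1e-12):
    """Append normalized columns, each perpendicular to everything so far,
    until the list of columns is square (the 'expansion columns')."""
    n = len(cols[0])
    cols = [list(c) for c in cols]
    ortho = []                       # orthogonal basis of the span so far
    for c in cols:
        r = list(c)
        for b in ortho:
            f = dot(r, b) / dot(b, b)
            r = [x - f * y for x, y in zip(r, b)]
        ortho.append(r)
    while len(cols) < n:
        for k in range(n):           # try e_1, e_2, ... as raw material
            r = [1.0 if i == k else 0.0 for i in range(n)]
            for b in ortho:
                f = dot(r, b) / dot(b, b)
                r = [x - f * y for x, y in zip(r, b)]
            norm = math.sqrt(dot(r, r))
            if norm > eps:           # skip candidates that give a zero column
                unit = [x / norm for x in r]
                cols.append(unit)
                ortho.append(unit)
                break
    return cols

# a 4x2 example: two columns spanning a parallelogram in 4D
a, b = [1, 1, 1, 1], [0, 1, 1, 1]
cols = extend_to_square([a, b])
matrix = [[cols[j][i] for j in range(4)] for i in range(4)]
volume = abs(det(matrix))
print(round(volume ** 2, 9))   # -> 3.0, the Gram determinant of a and b
```

In this example the first candidate e1 happens to lie in the span of the two columns (it equals a minus b), so the guard actually triggers and the code moves on to the next candidate.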

This post is five pictures long. If you haven't mastered the calculation scheme for turning a non-square matrix into a square matrix, you should first look up the previous posts until you have a more or less clear mental picture of what is going on.

Ok, that was it for this post on the highly beautiful calculation scheme for the volumes of parallelepipeds in higher dimensions.

3 Video’s to kill the time & Unzicker’s horror on the quaternions…

To be honest I like the Unzicker guy; he is from Germany I believe and he always attacks the standard model of particle physics. According to him there are zillions of problems with the standard model, and likely he is right about that. But he fully buys the crap that electrons must be magnetic dipoles, without any experimental confirmation at all.
So posting a video of him talking weird stuff about electrons is not a way to ridicule him. On the contrary: precisely because he always tries to attack the ideas inside the standard model, he himself is a perfect example of why the physics community swallows all those weird explanations of electron spin.

Speaking for myself, I think that electrons don't have their spins 'up' or 'down'. I don't think that they are tiny magnets with two magnetic poles; rather, they are magnetic monopoles that come with only one magnetic charge… My estimate is that this magnetic charge is a permanent charge, meaning there is no such thing as a spin flip of an individual electron.

In the Unzicker video Alexander asks for help with differentiation on the quaternions or so. Well, there I am, having done my utmost best to craft all kinds of spaces where you can integrate and differentiate, stuff like 3D complex numbers, 4D complex numbers and so on, and along comes a weirdo asking about the quaternions… Differentiating on the quaternions is a true horror, and that is caused by the property that in general the quaternions do not commute. I wrote a one-picture-long explanation for that. The problem is that differentiating, say, the square function on the quaternions destroys information. That is why there is no so-called 'complex analysis on the quaternions'; it just doesn't exist.
Ok, let's go to the first video. It is not very good, because he constantly throws in a lot of terms like SO(2) and SO(3), but for an audience of physics people that is allowed of course.

Because it is still the year 2022, it is still one hundred years since the Stern-Gerlach experiment was done. The next short video is relatively good in its kind; there are a lot of videos out there about the SG experiment and most are worse. In this video from some German there is at least some more explanation, like why it is not the Lorentz force, because these are silver atoms. But as always, like all explanations out there, it misses exactly why electrons anti-align themselves with the applied external magnetic field.
For example, water molecules are tiny electric dipoles; if you apply an electric field to clean water, all these tiny electric dipoles align 100% with the electric field. So why do electrons not do that?

As always: electrons being magnetic monopoles is a far better explanation for what we observe. But all these physics people, one hundred percent of them, have no problem at all with the fact that there is no experimental evidence that electrons are indeed 'tiny magnets'. That is what I still don't understand: why don't they see that their official explanations are not very logical once you start thinking about them? Why this weird behavior?

Ok, let's dive into why differentiation on the quaternions is a total horror.

Hasta la vista baby!

The last video is a short interview with John Wheeler where he explains the concept of positrons being electrons that travel back in time. At some point John talks about an electron and a positron meeting and annihilating each other. Well, it has to be remarked that this doesn't always happen. They can scatter too, and why could that be? Well, it fits with my simple model of electrons being magnetic monopoles: positrons and electrons only kill each other if they also have opposite magnetic charges…

Ok, that was it for this post. Thanks for your attention.

On the sine of a matrix minor against its parent matrix.

A long long time ago you likely learned how to calculate the sine in a right triangle. And that was something like the length of the opposite side divided by the length of the hypotenuse. But now that we have those simple expressions for the volume of a non-square matrix, we can craft 'sine-like' quotients for the minors of a matrix against its parent matrix.
I took a simple 4×2 matrix, so 4 rows and 2 columns, wrote out all six 2×2 minors and defined this sine-like quotient for them. As far as I know this is one hundred percent useless knowledge, but it was fun to write anyway.
Likely also a long time ago you learned that if you square the sine and cosine of some angle, these squares add up to one. In this post I formulated it a little bit differently, because I want to make just one kind of sine-like quotient and not six that are hard to tell apart. Anyway, you can view these sine-like quotients as the shrinking factor you get if you project the parent matrix onto such a particular minor. With a projection you of course leave two rows out of your 4×2 parent matrix, or you make these rows zero; whichever you prefer.
The parent 4×2 matrix A we use is just a two-dimensional parallelogram that hangs in 4D space, so its 'volume' is just an area. I skipped the fact that this area is the square root of 500. I also skipped calculating the six determinants of the minors, squaring them and adding them up, so that we must get 500. But if you are new to this kind of matrix version of the good old theorem of Pythagoras, you definitely must do that in order to gain some insight and a bit of confidence in how it all works and hangs together.

But this post is not about that; it only revolves around making these sine-like quotients. And if you take these six quotients, square them and add them all up, the result is one.
Just like sin^2 + cos^2 = 1 on the real line.
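As a quick check you can compute these quotients yourself. Below is a minimal Python sketch; the two columns are my own pick, chosen so that the squared area is 500 like in the post (the matrix in the pictures may well be a different one):

```python
import math
from itertools import combinations

# the two columns of a 4x2 parent matrix; my own numbers,
# chosen so that the squared area comes out as 500
u = [1, 2, 3, 4]
v = [4, 3, 2, 1]

dot = lambda x, y: sum(a * b for a, b in zip(x, y))
area_sq = dot(u, u) * dot(v, v) - dot(u, v) ** 2   # Gram determinant
print(area_sq)                                     # -> 500

# the six 2x2 minors: pick two of the four rows
minors = [u[i] * v[j] - u[j] * v[i] for i, j in combinations(range(4), 2)]
print(sum(m * m for m in minors))                  # -> 500, Pythagoras again

# the sine-like quotients: a minor against the area of the parent matrix
sines = [m / math.sqrt(area_sq) for m in minors]
print(round(sum(s * s for s in sines), 12))        # -> 1.0
```

So the six squared quotients behave exactly like sin^2 + cos^2: they add up to one, because the sum of the squared minors is the squared area.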

Please notice that the way I define sine-like quotients in this post has nothing to do with taking the sine of a square matrix. That is a very different subject and not a 'high school definition' of the sine quotient.
This post is just three pictures long, here we go:

So after all these years with only a bunch of variables in most matrices I show you, finally a matrix with just integer numbers in it… Now you have a bit of proof that I can be as stupid as the average math professor…;)

But seriously: the tiny fact that all these squares of the six sines add up to one is an idea that is perfectly equivalent to the Pythagoras expression as a sum of squares.
Thanks for your attention.

A detailed sketch of the full theorem of Pythagoras (that matrix version). Part 2 of 2.

The reason I name these two posts a sketch of (a proof of) the full theorem of Pythagoras is that I want to leave out all the hardcore technical details. After all, it should be a bit readable, and this is not a hardcore technical report or so. Besides this, adding those extra matrix columns until you have a square matrix is so simple to understand: it is just not possible for this method not to return the volume of those parallelepipeds.
I added four more pictures to this sketch; that should cover more or less what I skipped in the previous post. For example, we start with a non-square matrix A and turn it into a square matrix M, and this matrix M always has a non-negative determinant. But in many introductory courses on linear algebra you are taught that if you swap two columns in a matrix, the determinant changes sign. So why does this not happen here? Well, if you swap two columns in the parallelepiped A, the newly added columns that make it a square matrix change too. So the determinant always keeps returning the positive value of the volume of any parallelepiped A. (I never mentioned that all columns of A must be linearly independent, but that is logical; we only look at stuff with a non-zero volume.)

Just an hour ago I found another pdf on the Pythagoras stuff, and the author Melvin Fitting has also found the extension of the cross product to higher dimensions. At the end of this post you can view his pdf.

Now what all these proofs of the diverse versions of the Pythagorean theorem have in common is that you have some object, say a simplex or a parallelepiped, and the proofs always need the technical details that come with such an object. But a few posts back, when I wrote on this for the first time, I remarked that those projections of a parallelogram in 3D space always make it shrink by a constant factor. See the post 'Is this the most simple proof ever?' for the details. And there it simply worked for all objects you have, as long as they are flat. Maybe in a next post we will work that out for the matrix version of the Pythagoras stuff.

Ok, four more pictures as a supplement to the first part. I am too lazy to repair it, but this is picture number 8 and not number 12:

Ok, let's hope the pdf is downloadable:

That was it for this post. Thanks for your attention.

A detailed sketch of the full theorem of Pythagoras (that matrix version). Part 1 of 2.

For me it was a strange experience to construct a square matrix where, if you take the determinant of the thing, the lower-dimensional volume of some other thing comes out. In this post I will calculate the length of a 3D vector using the determinant of a 3×3 matrix.
Now why was this a strange experience? Well, from day one in the lectures on linear algebra you are taught that the determinant of, say, a 3×3 matrix always returns the 3D volume of the parallelepiped that is spanned by the three columns of that matrix.
But it is all relatively easy to understand: suppose I have some vector A in three-dimensional space. I put this vector in the first column of a 3×3 matrix. After that I add two more columns that are both perpendicular to each other and to A. Then I normalize the two added columns to one. And if I now take the determinant, I get the length of the first column.
That calculation is actually an example below in the pictures.
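That recipe is also easy to try out in a few lines of Python (the vector and the helper names are mine): build the two perpendicular columns with cross products, normalize them, and the determinant returns the length.

```python
import math

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def det3(c1, c2, c3):
    # determinant of the 3x3 matrix with columns c1, c2, c3,
    # i.e. the scalar triple product c1 . (c2 x c3)
    w = cross(c2, c3)
    return sum(c1[i] * w[i] for i in range(3))

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

a = [2.0, 3.0, 6.0]               # its length is sqrt(4 + 9 + 36) = 7
# the helper vector may be anything that is not parallel to a
p = normalize(cross(a, [1.0, 0.0, 0.0]))   # perpendicular to a
q = normalize(cross(a, p))                 # perpendicular to a and p
print(round(abs(det3(a, p, q)), 9))        # -> 7.0
```

The absolute value is there because the two added columns may come out in either orientation; the determinant can pick up a minus sign but the length of course cannot.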

Well, you can argue that this is a horribly complicated way to calculate the length of a vector, but it works with all parallelepipeds in all dimensions. You can always make a non-square matrix, say an n×d matrix with n rows and d columns, square. Such an n×d matrix can always be viewed as some parallelepiped if it doesn't have too many columns. So d must never exceed n, because then it is not a parallelepiped thing.

Orthogonal matrices. In linear algebra the name orthogonal matrix is a bit strange. It is more than the columns being orthogonal to each other; the columns must also be normalized. Ok ok, there are reasons that inside linear algebra it is named orthogonal, because if you name it an orthonormal matrix it is now more clear that the norms must be one, but then it is more vague that the columns are perpendicular. So an 'orthogonalnormalized' matrix would be a better name, but in itself that is a very strange word and people might think weird things about math people.
Anyway, apart from the historical development of linear algebra, I would like to introduce the concept of perpendicular matrices, where the columns are not normalized but perpendicular to each other. In this post we will always have some non-square matrix A and we add perpendicular columns until we have a square matrix.

Another thing I would like to remark is that I always prefer to give some good examples and try not to be too technical. So I give a detailed example of a five-dimensional vector and how to make a 5×5 matrix from it whose determinant is the length of our starting 5D vector.
I hope that is much more readable compared to some highly technical writing that is hard to read in the first place, where the key ideas are hard to find because it is all so hardcore.

This small series of posts on the Pythagoras stuff was motivated by a pdf from Charles Frohman (you can find downloads in previous posts), in which he proves what he names the 'full' theorem of Pythagoras by calculating the determinant of A^tA (here A^t represents the transpose of A) in terms of a bunch of minors of A.
A disadvantage of my method is that it is not that transparent as to why we end up with that bunch of minors of A. On the other hand, adding perpendicular columns is just so cute from the mathematical point of view that it is good to compose this post about it.

The post is eight pictures long, so after a few days of thinking you can start to understand why this expansion of columns is, say, 'mathematically beautiful', where of course I will not define what 'math beauty' is because beauty is not a mathematical thing. Here we go:

With ‘outer product’ I mean the 3D cross product.

Inside linear algebra you could also name this the theorem of the marching minors. But hey, it is time to split; see you in the next post.

A visualization of the so called ‘full’ theorem of Pythagoras + a worked example in 4D space.

A few posts back I showed you that pdf written by Charles Frohman where he shows a bit of the diverse variants of the more general theorem of Pythagoras. At school you mostly learn only about the theorem for a triangle or a line segment and it never goes any further. But there is so much more; in the second half of this post I show you three vectors in 4D space that span a parallelepiped that is three-dimensional. From the volume of such a thing you can also craft some form of the Pythagorean theorem: that parallelepiped can be projected in four different ways, and the sum of the squares of the four volumes you get equals the square of the volume of the original parallelepiped.
I would like to remark I hate that word 'parallelepiped'; if, like me, you often work without any spell correction, it is always a horrible word…;)
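That claim about the four projections is easy to check numerically. In the small Python sketch below (the three vectors are my own example, not the ones from the post) the four projections are exactly the four 3×3 minors you get by dropping one row of the 4×3 matrix:

```python
def det3(m):
    # 3x3 determinant, expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# three vectors in 4D spanning a three-dimensional parallelepiped
a, b, c = [1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]
rows = [list(r) for r in zip(a, b, c)]     # the 4x3 matrix, row by row

# the four projections: drop one row each, take the 3x3 determinant
proj = [det3([rows[j] for j in range(4) if j != i]) for i in range(4)]
print(sum(d * d for d in proj))            # -> 4

# the squared volume via the Gram matrix det(A^t A), for comparison
dot = lambda u, v: sum(x * y for x, y in zip(u, v))
G = [[dot(x, y) for y in (a, b, c)] for x in (a, b, c)]
print(det3(G))                             # -> 4, the same number
```

So the four squared projected volumes add up to the squared volume of the original parallelepiped, just as the post states.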

Now my son came just walking by; he read the title of my post and remarked: it sounds so negative or sarcastic, this 'full theorem'. And no no no, I absolutely do not mean this in any negative way. On the contrary, I recommend you at least download Charles's pdf, because after all, it is better than what I wrote on the Pythagoras subject about 10 years ago.

But back to this post: what Charles names the full theorem of Pythagoras is likely that difficult-looking matrix expression, and from the outside it looks like you are in for a complicated ride if you want to prove that thing. The key observation is that all those minor matrices are actually projections of the n×k matrix you started with. So that is what the first part of this post is about.

The second part is about a weird thing I have heard more than once during my long lost student years: apart from the outer product of two vectors, we have nothing in say 4D space that gives a vector perpendicular to what you started with. I always assumed they were joking, because it is rather logical that if you want a unique one-dimensional normal vector in say 4D space, you must have a three-dimensional thing to start with. That is what we will do in the second part: given a triple of vectors (A, B, C) in four-dimensional space, we construct a normal vector M to it. This normal vector, after normalization of course, gives a handy way to calculate the volume of any of those parallelepiped things that hang out there.
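A sketch of that construction in Python (the names and the example vectors are mine): component i of the normal vector M is, up to sign, the 3×3 minor of the 4×3 matrix (A B C) that skips row i, just like a cofactor expansion of a determinant.

```python
import math

def det3(m):
    # 3x3 determinant, expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cross4(a, b, c):
    """Normal vector to three vectors in 4D: component i is (-1)^i
    times the 3x3 minor of the 4x3 matrix (a b c) that skips row i."""
    rows = [list(r) for r in zip(a, b, c)]
    return [(-1) ** i * det3([rows[j] for j in range(4) if j != i])
            for i in range(4)]

a = [1, 2, 0, 0]
b = [0, 1, 1, 0]
c = [0, 0, 1, 2]
m = cross4(a, b, c)
# m is perpendicular to all three input vectors
print([sum(x * y for x, y in zip(m, v)) for v in (a, b, c)])  # -> [0, 0, 0]
# and the length of m equals the 3D volume of the parallelepiped
print(round(math.sqrt(sum(x * x for x in m)), 9))             # -> 5.0
```

The perpendicularity is no accident: the inner product of M with, say, A is the determinant of the 4×4 matrix (A A B C), which has a repeated column and is therefore zero.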

Ok, let's go: six pictures long but easily readable, I hope.

All that is left is trying to find that link to Charles's pdf again.

That was it for this post. I hope you liked it, I surely liked the way you can calculate those paralellapipidedted kind of things. Thx for your attention and see you in the next post.

A simple theorem on the zeros of polynomials on the space of 3D complex numbers.

In this post we look in detail at a very simple yet important polynomial namely

p(X) = X (X – 1).

Why does it have four zeros in the space of 3D complex numbers? Well, if you solve for the zeros of p, so try to solve p(X) = 0, you are looking for all numbers such that X^2 = X.
These numbers are their own square; on the real line or on the complex plane there are only two numbers that are their own square, namely 0 and 1.
On the space of 3D complex numbers we also have an exponential circle, and the midpoint of that circle is the famous number alpha. It is a cakewalk to calculate that alpha is its own square, just like (1 – alpha).
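For readers who want to check this on a computer, here is a small sketch. I am assuming the multiplication rule j^3 = -1 for the 3D complex numbers and alpha = (1 - j + j^2)/3; if that does not exactly match the posts on the number alpha, treat it purely as an illustration of the idempotency calculation.

```python
from fractions import Fraction as F

def mul(x, y):
    # product of a1 + b1*j + c1*j^2 and a2 + b2*j + c2*j^2,
    # assuming the rule j^3 = -1 (so j^4 = -j)
    a1, b1, c1 = x
    a2, b2, c2 = y
    return (a1 * a2 - b1 * c2 - c1 * b2,
            a1 * b2 + b1 * a2 - c1 * c2,
            a1 * c2 + b1 * b2 + c1 * a2)

zero = (F(0), F(0), F(0))
one = (F(1), F(0), F(0))
alpha = (F(1, 3), F(-1, 3), F(1, 3))              # my reading of alpha
beta = tuple(o - a for o, a in zip(one, alpha))   # 1 - alpha

for x in (zero, one, alpha, beta):
    assert mul(x, x) == x            # all four are their own square
print("0, 1, alpha and 1 - alpha all solve X^2 = X")
```

Exact fractions are used so the check is free of floating point noise.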

This post is four pictures long in the size 550×825 pixels, so it is not such a long read this time. In case you are not familiar with this number alpha, use the search function on this website and search for the post 'Seven properties of the number alpha'. Of course, since it is math, you will also need a few days of thinking the stuff out; after all, the human brain is not very good at mathematics…
Well have fun reading it.

The last crazy calculation shows that a polynomial in its factor representation is not unique. Those zeros at zeta one and zeta two are clearly different from 0 and 1, but they give rise to the same polynomial.

At last I want to remark that, unlike on the complex plane, there is no clean-cut way to tell how many zeros a given polynomial will have. On the complex plane it is standard knowledge that an n-degree polynomial always has n roots (although these roots can all have the same value). But on the complex 3D numbers it is more like the situation on the real line. On the real line the polynomial p(x) = x^2 + 1 has no solution, just like it has no solution on the space of 3D complex numbers.
That was it for this post, thanks for your attention.

Is this the most simple proof for a more general version of the theorem of Pythagoras? The inner product proof.

Last week I started thinking a bit about that second example from the pdf of Charles Frohman; the part where he projected a parallelogram on the three coordinate planes. And he gave a short calculation or proof that the sum of squares of the three projected areas is the square of the area of the original object.

In my own pdf I did a similar calculation in three-dimensional space, but that was with a pyramid, or a simplex if you want. You can view that as three projections too, although at the time I just calculated the areas and derived that 3D version of Pythagoras.

Within the hour I had a proof that was so amazingly simple that at first I put it away, to wait for another day or for the box of old paper to be recycled. But later I realized you can do this simple proof in all dimensions, so although utterly simple it absolutely has some value.
The biggest disadvantage of proving more general versions of the theorem of Pythagoras with, say, things like a simplex is that it soon becomes rather technical. And that makes it hard to read; those math formulas become long and complex and it becomes harder to write it all out in a transparent manner. After all, you need the technicalities of your math object (say a simplex or a parallelogram) in order to show something is true for that mathematical object or shape.

The very simple proof just skips all that: it works for all shapes as long as they are flat. So it does not matter if in three-dimensional real space you do these projections for a triangle, a square, a circle, a circle with an elliptical hole in it, and so on and so on. So to focus the mind, you can think of 3D space with some plane in it, and on that plane some kind of shape with a finite two-dimensional area. If you project that on the three coordinate planes, that is the xy-plane, the yz-plane and the xz-plane, you get that Pythagoras kind of relation between the four areas.

I only wrote down the 3D version, but you can do this in all dimensions. The only thing you must take into account is that you make your projections along just one coordinate axis. So in seven-dimensional real space you will have 7 of these projections that are each 6-dimensional…
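For the parallelogram case in 3D the whole statement fits in a few lines of code (the example vectors are mine): each projected area is one component of the cross product, so the squares add up to the squared area almost by construction.

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1, 2, 0), (0, 1, 3)        # a flat parallelogram in 3D
n = cross(u, v)                    # the length of n is its area
area_sq = sum(x * x for x in n)

# project onto the yz-, xz- and xy-planes by dropping one coordinate;
# each projected area is a 2x2 determinant, i.e. one component of n
proj_sq = []
for drop in range(3):
    i, j = [k for k in range(3) if k != drop]
    proj_sq.append((u[i] * v[j] - u[j] * v[i]) ** 2)

print(proj_sq, sum(proj_sq) == area_sq)   # -> [36, 9, 1] True
```

For a general flat shape the same bookkeeping goes through, because the shape's area and all its projected areas shrink by the same cosine factors.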

This post is four pictures long. I did not include a picture explaining what those angles alpha and theta are inside one right triangle. Shame on me for being lazy. Have fun reading it.

So all in all we can conclude the following: you can have any shape with a finite area, and as long as it is flat it fits in a plane. And if that plane gets projected onto the three coordinate planes, the projected shapes will always obey the three-dimensional theorem of Pythagoras.

Ok, thanks for your attention, and although this inner product kind of proof is utterly simple, it still has some cute value to it.

Two pdf’s on more general versions of the theorem of Pythagoras.

A few months back I found a very good text on more general versions of the good old Pythagorean theorem. Since in the beginning of this text the author Charles Frohman does the same easy-to-understand calculations as I did a long time ago, I more or less trust the entire document. But I did not check the end with those exterior calculations; I don't know why, but I dislike stuff like the wedge product.
The second pdf is from myself; likely I wrote it in 2012, because a proof of a more general version of the theorem of Pythagoras was the first math text I wrote again after many years. After that, at the end of 2012, I began my investigations into the three-dimensional complex numbers again, and as such this website was needed in 2015.

Anyway, I selected three details from these two pdfs that I consider beautiful math ideas, where of course I skip a definition of what 'beautiful' is. After all, the property 'mathematically beautiful' is not a mathematical object but more a feeling in your brain.

Let me start with four pictures where I look into those three selected details; after that I will hang the two pdf texts into this post.

Below follow a few screenshots from the pdfs:

The first pdf is from Charles Frohman. Maybe you must download it first before you can read it; I should gain more experience with this, because the pdf format is such a hyper modern development…;)
The first text is from 2010:

At last my old text from 2012:

(Later I saw there were some old notes at the end of my old pdf; you can neglect those, they have nothing to do with the Pythagoras stuff.)

There is little use in comparing these texts. I only wanted to make a proof that uses induction, so I could prove the theorem in all dimensions given the fact that we have a proof (many proofs in fact) for the theorem of Pythagoras with a right triangle. Charles's text is broader, and its main piece is the proof for that determinant version of the theorem of Pythagoras.

At last a remark about the second detail of mathematical beauty: Charles gave the example of a parallelogram where the square of the area equals the sum of the squares of the three projections on the three coordinate planes. I think you can take any shape, a square or a circle, it does not matter. It only matters that it is a flat thing in 3D space. After I found that, within the hour I had a proof for the general setting of this problem in higher-dimensional real space; maybe this is for some future post.

For the time being let us split and go our own ways & thanks for your attention in reading this.

Two videos on electrons and the still missing magnetic monopole.

The first video is very simple, a bit at the high school plus level, but it is well made. The reason I post it is that it has a very good explanation of why electrons are viewed as point particles. I had never heard of this explanation, and it goes more or less like this:

If the electron had some kind of hard kernel, then if you shoot them fast enough into each other they would bounce differently.

This is based on the assumption that at low energies two colliding electrons will not touch each other. It seems that this kind of behaviour keeps going at high collision energies.
Another interesting detail is the question of the size of an electron. In this video a number like smaller than 10 to the minus 18 cm is mentioned. And I think that if it were true that electrons are 'tiny magnets', so bipolar in the magnetic sense, they could not be accelerated by magnetic fields in any significant manner.

I assume this is the diameter.

The physics professors think that a non-constant magnetic field can accelerate an electron. Non-constant can mean it varies over time, varies over space, or both. If you apply a magnetic field to such a bipolar electron, say the north pole of the electron is repelled by it, then the other side of the electron must feel an attractive force. The difference should account for the acceleration of the electron.
Let's do an easy calculation: using the radius being this 10^-18 cm, or 10^-20 m, the density of an electron is about 2.2 times 10^29 kg per cubic meter of 'electron stuff'. Suppose we have a ball-shaped electron with a volume of one cubic meter; thus it has a mass of 2.2 times 10^29 kg, and its radius is about 60 cm.
So the diameter of our superlarge electron is about 120 cm and it has this ridiculously huge mass. Do you think you can accelerate this thing with a magnetic field that has some nonzero gradient?
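A quick check of that arithmetic, using the standard electron mass of about 9.109 × 10^-31 kg (the mass value is my input, not from the video):

```python
import math

m_e = 9.109e-31              # electron mass in kg (standard value)
r = 1e-20                    # the post's size bound: 10^-18 cm = 10^-20 m
density = m_e / ((4 / 3) * math.pi * r ** 3)
print(f"density: {density:.1e} kg/m^3")      # -> about 2.2e+29

# radius of a ball made of one cubic meter of this 'electron stuff'
R = (3 / (4 * math.pi)) ** (1 / 3)
print(f"radius of a 1 m^3 ball: {R:.2f} m")  # -> 0.62 m, about 60 cm
```

So the numbers in the paragraph above do check out, taking the quoted size as a radius.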

There are so many problems with the model of the electron being a magnetic dipole. Why should electrons 'anti-align' themselves with an applied magnetic field? That is strange, because they gain potential energy with that. That is just as strange and crazy as the next example:
You have a bunch of stones; one by one you grab them and hold them still in place. You let them loose. Some fall to the ground, the others fly up.
This never happens, because nature has this tendency to lower potential energy.
Another problem is that it is known that the electron pair is magnetically neutral. The 'explanation' is that the two electrons have opposite spin and 'therefore' cancel each other out. That is a stupid explanation, because if it is true that the electron is a bipolar magnetic thing, it should be magnetically neutral to begin with.

The second video is from Brian Keating; Brian is an experimental physics guy. This is one of those 'Where are the magnetic monopoles' videos that people who like to demonstrate they are dumb post on Youtube and the likes. It makes me wonder: what the hell are they doing with our taxpayer money? The concept of a magnetic monopole is just plain fucking stupid: it is a particle with no electric charge but only one of the two possible magnetic charges.
Why is this fucking stupid? Just look at the electron: if their fairy tales are true, the electron is an electric monopole and a magnetic dipole. If I were to look for some dual version of an electron, and had drunk lots of beer, I would propose a particle that is an electric dipole but also a magnetic monopole.

You never hear those physics people talk about that; it is always that stupid talk of where are the magnetic monopoles, or that it is ok if there is just one magnetic monopole in every galaxy. What I consider the weirdest thing is this: if you advertise the electron as a magnetic dipole, should you not give a tiny bit of experimental validation for this? But no, Brian has no time for such considerations.

I should have included the text ‘This is fucking stupid’.

So where are all the magnetic monopoles? If my view on magnetism is correct they are in every electron pair that holds your body together.

May be in your body or your eyes?

Ok, let's leave this nonsense behind. Don't forget: people like this might be influential, but they are too stupid to understand even the smallest part of, say, three-dimensional complex numbers.
End of this post.