Category Archives: Pythagoras stuff

2 Vids: One good one on Pythagoras and a terrible one (on a problem that is only impossible if you choose your space so stupidly)…

This last week I have been stuck in writer's block. I started writing another post on the matrix version of Pythagoras and I just don't know how to proceed. On the one hand I want to show a cute so-called 'weird root formula', while on the other hand I want to avoid using that +/- scheme that is difficult to explain and has nothing to do with what I wanted to say.

Maybe I will put the stuff I don't want to talk about in a separate appendix or so. That sounded like a good idea two weeks ago, but I'm still not working on it…

But it is about time for a new post, and because my own Pythagoras stuff is stuck behind my writer's block, it helps that the Mathologer has a new video out about Pythagoras stuff done the Mathologer's way. If you have seen videos from the Mathologer before, you know he likes visual proofs to back up the math involved. The Mathologer works very differently from me: I got hooked on that matrix version earlier this year, while he does things like 'Trithagoras' in a very cute manner.

Just a picture to kill time.

The video is about half an hour long; just pick what you like or what you need and move on to other things. Digesting everything in a Mathologer video always takes more time than the video itself is long! So you are not looking at just another insignificant idiot like me…;)

Take your time, it’s worth it.

The next video is from Michael Penn. Now Michael has that typical American attitude of putting out one video a day. Often they are not bad if you take into consideration that this must be done every 24 hours. But his treatment of (x + y)^n = x^n + y^n is just horrible. If you look, like Michael, for solutions on the real line, it is easy to understand that this cannot work.
Yet last year all my counterexamples to the last theorem of Pierre de Fermat were based on this (x + y)^n = x^n + y^n equation.
Take for example the natural numbers modulo 35.
In this simple case we already have a cute counterexample to the last theorem of Pierre de Fermat, namely:
12^n ≡ 5^n + 7^n mod 35.
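If you want to check this yourself, a few lines of Python already do the job (this is just my own quick verification, it is not in the original post). The reason it works for every exponent n is simple: modulo 5 both sides reduce to 2^n, and modulo 7 both sides reduce to 5^n, so by the Chinese remainder theorem the equation holds modulo 35.

```python
# Quick verification that 12^n = 5^n + 7^n (mod 35) for a range of exponents n.
for n in range(1, 21):
    assert pow(12, n, 35) == (pow(5, n, 35) + pow(7, n, 35)) % 35
print("12^n = 5^n + 7^n (mod 35) holds for n = 1..20")
```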

Why did Michael choose the real numbers, such a stupid space for this equation, which is the basis of a lot of counterexamples to the last theorem of Pierre de Fermat? Maybe the speed of producing new videos is a factor here.

Ok, that was it for this post. See you around…

Calculating the determinant of a 4×4 matrix using 2×2 minors. What is the +/- pattern in this case?

I remember that in the past I tried a few times to write the determinant of, say, an n×n matrix in terms of determinants of blocks of that matrix. It always failed, but now that I understand how the matrix version of the theorem of Pythagoras goes, all of a sudden it is a piece of cake.
In this post I only give the example of a 4×4 matrix. If you take the first two columns of such a matrix, say AB, this is a 4×2 matrix with six 2×2 minors in it.
If we name the last two columns CD, then for every 2×2 minor in AB there is a corresponding complementary 2×2 minor in CD. For example, if we pick the upper left 2×2 minor of our matrix ABCD, its complement is the lower right 2×2 minor at the bottom of CD.
If you take the determinants of those minors, multiply them by the determinants of their complements, and add it all up with a suitable +/- pattern, voilà: that must be the determinant of the whole 4×4 matrix.

This method can more or less easily be expanded to larger matrices, but I think it is hard to prove the +/- pattern you need for the minors of larger matrices. Because I am such a dumb person I expected that half of my six 2×2 minors would pick up a minus sign and the other half a plus sign, just like when you develop a 4×4 determinant along a row or column of that matrix. I was wrong, it is a bit more subtle, once more confirming I am a very very dumb person.
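For readers who want to check the pattern numerically: in the convention of the standard generalized Laplace expansion along the first two columns, the signs for the six row pairs {1,2}, {1,3}, {1,4}, {2,3}, {2,4}, {3,4} come out as +, −, +, +, −, +, so four plus signs and only two minus signs. Below is a minimal Python sketch of that check; the example matrix is random, not one of the matrices from my pictures.

```python
# Check: det of a 4x4 matrix as a signed sum of products of 2x2 minors taken
# from the first two columns with their complementary minors in the last two.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
M = rng.integers(-5, 6, size=(4, 4)).astype(float)

total = 0.0
for r1, r2 in combinations(range(4), 2):               # rows of the minor in columns 0, 1
    comp = [r for r in range(4) if r not in (r1, r2)]  # rows of the complementary minor
    minor_AB = np.linalg.det(M[np.ix_([r1, r2], [0, 1])])
    minor_CD = np.linalg.det(M[np.ix_(comp, [2, 3])])
    sign = (-1) ** (r1 + r2 + 1)                       # gives the pattern +, -, +, +, -, +
    total += sign * minor_AB * minor_CD

print(total, np.linalg.det(M))                         # the two numbers should agree
```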

I skipped giving you the alternative way of calculating determinants: the determinant is also a sum of so-called signed permutations on the indices of the entries. If you have never seen that, I advise you to look it up on the internet.

Because I skipped existing knowledge that is already widely available, I was able to do the calculation in just four images! So it is a short post. Ok ok, I also left out the details of how I did it, because writing out a 4×4 determinant already gives 24 terms of four factors each. That's why this post is only four pictures long…

(I later replaced the above picture because it had a serious typo in it.)

If you want, you can also use that expression of the determinant as a sum of signed permutations. It is a very cute formula. Wiki title:
Leibniz formula for determinants.
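For completeness, here is a small sketch of that Leibniz formula in Python (my own illustration): it computes det(A) as the sum over all permutations σ of sgn(σ)·a_{1σ(1)}·…·a_{nσ(n)}, which for a 4×4 matrix is indeed 24 terms of four factors each.

```python
# The Leibniz formula: det(A) = sum over all permutations p of sign(p) * prod_i A[i, p(i)].
import numpy as np
from itertools import permutations

def perm_sign(p):
    # sign of a permutation, found by counting inversions
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    n = A.shape[0]
    return sum(perm_sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.array([[2., 1., 0., 3.],
              [1., 4., 2., 0.],
              [0., 2., 5., 1.],
              [3., 0., 1., 2.]])
print(leibniz_det(A), np.linalg.det(A))   # should agree up to rounding
```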
And a four-minute video; it starts a bit slow, but the guy manages to put in most of the important details in just four minutes:

Ok, that was it for this post. I filed it under the category 'Pythagoras stuff' because I tried similar stuff in the past, but only with the knowledge of the matrix version of the Pythagoras theorem does it all become a bit easier to do.

Thanks for your attention.

On the degree of expansion columns & can the expansion go wrong? (Pythagoras, matrix version.)

To be honest, this post is not carefully thought through. I felt like starting to write, and at first I wanted to steer it more in the direction of a proof based on the volumes of the parallelepiped and its expansion columns. But then I realized that my cute calculating scheme for turning an n×d matrix A into a square n×n matrix AP could go wrong. So I wanted to address that detail too, but I hadn't thought it out enough.

The answer to why it can go wrong is rather beautiful: take any n×d matrix (of course the number of columns cannot exceed n, because then it is not a proper d-dimensional parallelepiped), say a 10×3 matrix. The three columns span a parallelepiped in 10-dimensional real space.
The 'smallest' parallelepiped that still has a three-dimensional volume is one of the many minors possible in the 10×3 matrix. So by 'smallest' I mean the parallelepiped that uses the least number of coordinates in our 10-dimensional space.
Now in my calculating scheme, an algorithm if you want, I told you to start at the top of the matrix A and add more and more columns. But if A is made up of just one 3×3 minor, say at the bottom of A with zeros everywhere else, it is crystal clear that my calculating scheme goes wrong, because it now produces only zero columns.

And if that happens, when in the end you take the determinant of the square matrix AP, you get zero, and of course that is wrong. These are exceptional cases, but they have to be addressed.


Of course there is no important reason for the calculation scheme to start at the top of the matrix; just start at the position of the lone 3×3 minor. In general: if you start with an n×d matrix, ensure your first expansion column is not a zero column. After that, the next expansions should all go fine.
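To make the pitfall concrete, here is a tiny numerical check. It does not use my expansion scheme itself but the standard Gram-determinant formula vol(A) = √det(AᵀA), which is what the scheme should reproduce; the 10×3 example matrix is my own and has its only nonzero entries in one 3×3 minor at the bottom.

```python
# A 10x3 matrix whose only nonzero entries sit in a lone 3x3 minor at the bottom
# still spans a parallelepiped with a nonzero 3D volume, so any expansion scheme
# that returns zero for it has clearly gone wrong.
import numpy as np

A = np.zeros((10, 3))
A[7:10, :] = np.array([[2., 0., 1.],
                       [1., 3., 0.],
                       [0., 1., 2.]])            # the lone 3x3 minor at the bottom

vol = np.sqrt(np.linalg.det(A.T @ A))            # standard Gram-determinant volume
print(vol)                                       # equals |det| of the 3x3 block (13), not zero
```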

This post is five pictures long. If you haven't mastered the calculation scheme for turning a non-square matrix into a square matrix, you should first look up the previous posts until you have a more or less clear mental picture of what is going on.

Ok, that was it for this post on the highly beautiful calculation scheme for the volumes of parallelepipeds in higher dimensions.

On the sine of a matrix minor against its parent matrix.

A long long time ago you likely learned how to calculate the sine in a right triangle. That was something like the length of the opposite side divided by the length of the hypotenuse. But now that we have those simple expressions for the volume of a non-square matrix, we can craft 'sine-like' quotients for the minors of a matrix against its parent matrix.
I took a simple 4×2 matrix, so 4 rows and 2 columns, wrote out all six 2×2 minors and defined this sine-like quotient for them. As far as I know this is one hundred percent useless knowledge, but it was fun to write anyway.
Likely also a long time ago you learned that if you square the sine and cosine of some angle, these squares add up to one. In this post I formulated it a little bit differently, because I want to make just one kind of sine-like quotient and not six that are hard to tell apart. Anyway, you can view these sine-like quotients as the shrinking factor when you project the parent matrix onto such a particular minor. With a projection you of course leave two rows out of your 4×2 parent matrix, or you make those rows zero; it is just what you prefer.
The parent 4×2 matrix A we use is just a two-dimensional parallelogram that hangs in 4D space, so its "volume" is just an area. I skipped the fact that this area is the square root of 500. I also skipped calculating the six determinants of the minors, squaring them and adding them up so that we must get 500. But if you are new to this kind of matrix version of the good ol' theorem of Pythagoras, you definitely should do that in order to gain some insight and a bit of confidence in how it all works and hangs together.

But this post is not about that; it only revolves around making these sine-like quotients. And if you have these six quotients and you square them and add them all up, the result is one.
Just like sin^2 + cos^2 = 1 on the real line.
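For readers who want to see the numbers, here is a small sketch in Python. The 4×2 matrix below is a hypothetical example of my own (so not the matrix from the pictures whose squared area is 500), but the mechanics are the same: the squares of the six minor determinants add up to the squared area, and the squares of the six sine-like quotients add up to one.

```python
# Sine-like quotients of the six 2x2 minors of a 4x2 matrix against the parent area.
import numpy as np
from itertools import combinations

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.],
              [7., 8.]])                            # hypothetical example matrix

minor_dets = [np.linalg.det(A[list(rows), :]) for rows in combinations(range(4), 2)]
area = np.sqrt(np.linalg.det(A.T @ A))              # the 2D 'volume' of the parallelogram in 4D

print(sum(d ** 2 for d in minor_dets), area ** 2)   # matrix version of Pythagoras
sines = [d / area for d in minor_dets]              # one sine-like quotient per minor
print(sum(s ** 2 for s in sines))                   # adds up to 1, just like sin^2 + cos^2
```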

Please notice that the way I define sine-like quotients in this post has nothing to do with taking the sine of a square matrix. That is a very different subject and not a "high school definition" of the sine as a quotient.
This post is just three pictures long, here we go:

So after all these years with only a bunch of variables in most matrices I show you, finally a matrix with just integer numbers in it… Now you have a bit of proof that I can be as stupid as the average math professor…;)

But seriously: the tiny fact that all the squares of the six sines add up to one is an idea that is perfectly equivalent to the Pythagoras expression as a sum of squares.
Thanks for your attention.

A detailed sketch of the full theorem of Pythagoras (that matrix version). Part 2 of 2.

The reason I call these two posts a sketch of (a proof of) the full theorem of Pythagoras is that I want to leave out all the hardcore technical details. After all, it should be a bit readable, and this is not a hardcore technical report or so. Besides this, making those extra matrix columns until you have a square matrix is so simple to understand: it is just not possible that this method does not return the volume of those parallelepipeds.
I added four more pictures to this sketch; that should cover more or less what I skipped in the previous post. For example, we start with a non-square matrix A and turn it into a square matrix M, and this matrix M always has a non-negative determinant. But in many introductory courses on linear algebra you are taught that if you swap two columns in a matrix, the determinant changes sign. So why does this not happen here? Well, if you swap two columns in the parallelepiped A, the newly added columns that make it a square matrix change too. So the determinant always keeps returning the positive value of the volume of any parallelepiped A. (I never mentioned that all columns of A must be linearly independent, but that is logical: we only look at stuff with a non-zero volume.)
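As a quick numerical sanity check of the volume part of this claim (not my exact expansion scheme from the pictures, but a generic one): if you complete A with an orthonormal basis of its orthogonal complement, the absolute value of the determinant of the completed square matrix equals √det(AᵀA), and swapping two columns of A does not change that value.

```python
# |det| of a completed square matrix equals the volume sqrt(det(A^T A)),
# and it is unchanged when two columns of the parallelepiped A are swapped.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))                  # a random 5x3 'parallelepiped'

def completed_abs_det(A):
    n, d = A.shape
    U, _, _ = np.linalg.svd(A, full_matrices=True)
    Q = U[:, d:]                                 # orthonormal basis of the complement of col(A)
    return abs(np.linalg.det(np.hstack([A, Q])))

vol = np.sqrt(np.linalg.det(A.T @ A))
print(completed_abs_det(A), vol)                 # the same number
print(completed_abs_det(A[:, [1, 0, 2]]), vol)   # swap two columns of A: still the same volume
```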

Just an hour ago I found another pdf on the Pythagoras stuff; the author, Melvin Fitting, has also found the extension of the cross product to higher dimensions. At the end of this post you can view his pdf.

Now what all these proofs of the diverse versions of the Pythagorean theorem have in common is that you have some object, say a simplex or a parallelepiped, and the proofs always need the technical details that come with such an object. But a few posts back, when I wrote on it for the first time, I remarked that those projections of a parallelogram in 3D space always make it shrink by a constant factor. See the post 'Is this the most simple proof ever?' for the details. There it simply worked for all objects you have, as long as they are flat. Maybe in a next post we will work that out for the matrix version of the Pythagoras stuff.

Ok, four more pictures as a supplement to the first part. I am too lazy to repair it, but this is picture number 8 and not number 12:

Ok, let's hope the pdf is downloadable:

That was it for this post. Thanks for your attention.

A detailed sketch of the full theorem of Pythagoras (that matrix version). Part 1 of 2.

For me it was a strange experience to construct a square matrix such that, if you take the determinant of the thing, the lower-dimensional volume of some other thing comes out. In this post I will calculate the length of a 3D vector using the determinant of a 3×3 matrix.
Now why was this a strange experience? Well, from day one in the lectures on linear algebra you are taught that the determinant of, say, a 3×3 matrix always returns the 3D volume of the parallelepiped that is spanned by the three columns of that matrix.
But it is all relatively easy to understand: Suppose I have some vector A in three-dimensional space. I put this vector in the first column of a 3×3 matrix. After that I add two more columns that are both perpendicular to each other and to A. Then I normalize the two added columns to length one. And if I now take the determinant, I get the length of the first column.
That calculation is actually an example below in the pictures.
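For those who prefer code next to the pictures, here is a minimal sketch of that 3×3 example with a vector of my own choosing; the two extra columns are built with cross products, and the determinant returns the length of the first column (up to a sign that depends on the orientation of the two added columns).

```python
# Length of a 3D vector from the determinant of a 3x3 matrix: first column is the
# vector, the other two columns are unit vectors perpendicular to it and to each other.
import numpy as np

a = np.array([2., 3., 6.])                       # |a| = 7, a handy integer length

u = np.cross(a, np.array([1., 0., 0.]))          # some vector perpendicular to a
u = u / np.linalg.norm(u)
v = np.cross(a, u)                               # perpendicular to both a and u
v = v / np.linalg.norm(v)

M = np.column_stack([a, u, v])
print(abs(np.linalg.det(M)), np.linalg.norm(a))  # both print 7
```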

Well, you can argue that this is a horribly complicated way to calculate the length of a vector, but it works for all parallelepipeds in all dimensions. You can always make a non-square matrix, say an n×d matrix with n rows and d columns, square. Such an n×d matrix can always be viewed as some parallelepiped if it doesn't have too many columns. So d must never exceed n, because then it is not a parallelepiped thing.

Orthogonal matrices. In linear algebra the name orthogonal matrix is a bit strange. It is more than the columns being orthogonal to each other; the columns must also be normalized. Ok ok, there are reasons that inside linear algebra it is named orthogonal: if you name it an orthonormal matrix it is clearer that the norms must be one, but then it is more vague that the columns are perpendicular. So an 'orthogonal-normalized' matrix would be a better name, but in itself that is a very strange word and people might think weird things about math people.
Anyway, apart from the historical development of linear algebra, I would like to introduce the concept of perpendicular matrices, where the columns are not normalized but are perpendicular to each other. In this post we will always have some non-square matrix A and we add perpendicular columns until we have a square matrix.

Another thing I would like to remark is that I always prefer to give some good examples and try not to be too technical. So I give a detailed example of a five-dimensional vector and how to make a 5×5 matrix from it whose determinant is the length of our starting 5D vector.
I hope that is much more readable than some highly technical writing that is hard to read in the first place and where the key ideas are hard to find because it is all so hardcore.

This small series of posts on the Pythagoras stuff was motivated by a pdf from Charles Frohman (you can find downloads in previous posts), and he proves what he names the 'full' theorem of Pythagoras by calculating the determinant of A^tA (here A^t represents the transpose of A) in terms of a bunch of minors of A.
A disadvantage of my method is that it is not that transparent as to why we end up with that bunch of minors of A. On the other hand, the adding of perpendicular columns is just so cute from the mathematical point of view that it is good to compose this post about it.
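For readers who want a numeric handle on Frohman's identity, here is a small check (the 5×3 matrix is random, just my own example): det(AᵀA) equals the sum of the squares of all maximal d×d minors of A, which is the Cauchy–Binet formula sitting behind the 'full' theorem.

```python
# Check of det(A^t A) = sum of squares of all dxd minors of A (Cauchy-Binet).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
A = rng.integers(-3, 4, size=(5, 3)).astype(float)

gram = np.linalg.det(A.T @ A)
minors = sum(np.linalg.det(A[list(rows), :]) ** 2 for rows in combinations(range(5), 3))
print(gram, minors)                              # should agree up to rounding
```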

The post is eight pictures long, so after a few days of thinking you can start to understand why this expansion of columns is, say, 'mathematically beautiful', where of course I will not define what 'math beauty' is, because beauty is not a mathematical thing. Here we go:

With ‘outer product’ I mean the 3D cross product.

Inside linear algebra you could also name this the theorem of the marching minors. But hey, it is time to split; see you in the next post.

A visualization of the so-called 'full' theorem of Pythagoras + a worked example in 4D space.

A few posts back I showed you that pdf written by Charles Frohman where he shows a bit of the diverse variants of the more general theorem of Pythagoras there are. At school you mostly learn only the theorem for a triangle or a line segment and it never goes beyond that. But there is so much more; in the second half of this post I show you three vectors in 4D space that span a parallelepiped that is three-dimensional. From the volume of such a thing you can also craft some form of Pythagorean theorem: that parallelepiped can be projected in four different ways, and the sum of the squares of the four projected volumes equals the square of the volume of the original parallelepiped.
I would like to remark that I hate the word 'parallelepiped'; if you, like me, often work without any spell correction, this is always a horrible word…;)

Now my son just came walking by, read the title of my post and remarked: it sounds so negative or sarcastic, this 'full theorem'. And no no no, I absolutely do not mean this in any negative way. On the contrary, I recommend you at least download Charles his pdf because, after all, it is better than what I wrote on the Pythagoras subject about 10 years ago.

But back to this post: what Charles names the full theorem of Pythagoras is likely that difficult-looking matrix expression, and from the outside it looks like you are in a complicated space if you want to prove that thing. The key observation is that all those minor matrices are actually projections of the n×k matrix you started with. So that is what the first part of this post is about.

The second part is about a weird thing I heard more than once during my long-lost student years: beyond the outer product of two vectors, we supposedly have nothing in, say, 4D space that gives a vector perpendicular to what you started with. I always assumed they were joking, because it is rather logical that if you want a unique one-dimensional normal vector in, say, 4D space, you must have a 3-dimensional thing to start with. That is what we will do in the second part: given a triple of vectors (A, B, C) in four-dimensional space, we construct a normal vector M to it. This normal vector, after normalization of course, now gives a handy way to calculate the volume of any of those parallelepiped things that hang out there.
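To make that second part concrete before the pictures, here is a small numerical sketch with three vectors of my own choosing (so not the worked example from the pictures): component i of the normal vector is, up to an alternating sign, the 3×3 minor you get by deleting row i of the 4×3 matrix (A B C). The resulting vector is perpendicular to A, B and C, and the sum of the squares of those four minors equals the squared volume of the parallelepiped, which is exactly the 'four projections' version of Pythagoras.

```python
# A 4D 'cross product' of three vectors, built from the four 3x3 minors.
import numpy as np

A = np.array([1., 2., 0., 1.])
B = np.array([0., 1., 3., 2.])
C = np.array([2., 0., 1., 1.])
M3 = np.column_stack([A, B, C])                  # 4x3: a 3D parallelepiped hanging in 4D space

normal = np.array([(-1) ** i * np.linalg.det(np.delete(M3, i, axis=0)) for i in range(4)])

print(normal @ A, normal @ B, normal @ C)        # all (numerically) zero: M is a normal vector
vol_sq = np.linalg.det(M3.T @ M3)                # squared 3D volume of the parallelepiped
minors_sq = sum(np.linalg.det(np.delete(M3, i, axis=0)) ** 2 for i in range(4))
print(vol_sq, minors_sq)                         # Pythagoras for the four projections
```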

Ok, let's go: six pictures long, but easily readable I hope.

All that is left is finding that link to Charles his pdf again.

That was it for this post. I hope you liked it; I surely liked the way you can calculate those parallelepiped kind of things. Thanks for your attention and see you in the next post.

Is this the most simple proof for a more general version of the theorem of Pythagoras? The inner product proof.

Last week I started thinking a bit about that second example from the pdf of Charles Frohman, the part where he projects a parallelogram onto the three coordinate planes. He gave a short calculation, or proof, that the sum of the squares of the three projected areas is the square of the area of the original object.

In my own pdf I did a similar calculation in three-dimensional space, but that was with a pyramid, or a simplex if you want. You can view that as three projections too, although at the time I just calculated the areas and derived that 3D version of Pythagoras.

Within the hour I had a proof that was so amazingly simple that at first I laid it away, to wait for another day or for the box of old paper to be recycled. But later I realized you can do this simple proof in all dimensions, so although utterly simple it definitely has some value.
The biggest disadvantage of proving more general versions of the theorem of Pythagoras using, say, things like a simplex is that it soon becomes rather technical. And that makes it hard to read: the math formulas become long and complex, and it becomes harder to write it all out in a transparent manner. After all, you need the technicalities of your math object (say a simplex or a parallelogram) in order to show something is true for that mathematical object or shape.

The very simple proof just skips all that: it works for all shapes as long as they are flat. So it does not matter if in three-dimensional real space you do these projections for a triangle, a square, a circle, a circle with an elliptical hole in it, and so on and so on. To focus the mind you can think of 3D space with some plane in it, and on that plane some kind of shape with a finite two-dimensional area. If you project that onto the three coordinate planes, that is the xy-plane, the yz-plane and the xz-plane, you get that Pythagoras kind of relation between the four areas.

I only wrote down the 3D version, but you can do this in all dimensions. The only thing you must take into account is that you make your projections along just one coordinate axis. So in seven-dimensional real space you will have 7 of these projections that are each 6-dimensional…
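As a small numeric illustration of the 3D statement (with a parallelogram of my own choosing as the flat shape):

```python
# A flat shape (a parallelogram spanned by u and v) in 3D: the squares of its
# three projected areas onto the xy-, yz- and xz-planes add up to its squared area.
import numpy as np

u = np.array([1., 2., 2.])
v = np.array([3., 0., 1.])

area_sq = np.linalg.norm(np.cross(u, v)) ** 2    # squared area of the parallelogram

P = np.column_stack([u, v])
proj_sq = sum(np.linalg.det(P[[i, j], :]) ** 2 for i, j in [(0, 1), (1, 2), (0, 2)])

print(area_sq, proj_sq)                          # both equal 65: Pythagoras for areas
```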

This post is four pictures long; I did not include a picture explaining what those angles alpha and theta are inside one right triangle. Shame on me for being lazy. Have fun reading it.

So all in all we can conclude the following: you can have any shape with a finite area, and as long as it is flat it fits in a plane. And if that plane gets projected onto the three coordinate planes, the projected shapes will always obey the three-dimensional theorem of Pythagoras.

Ok, thanks for your attention and although this inner product kind of proof is utterly simple, it still has some cute value to it.

Two pdf’s on more general versions of the theorem of Pythagoras.

A few months back I found a very good text on more general versions of the good old Pythagorean theorem. Since in the beginning of this text the author, Charles Frohman, does the same easy-to-understand calculations as I did a long time ago, I more or less trust the entire document. But I did not check the end with those exterior-algebra calculations; I don't know why, but I dislike stuff like the wedge product.
The second pdf is from myself; I likely wrote it in 2012, because a proof of a more general version of the theorem of Pythagoras was the first math text I wrote again after many years. After that, at the end of 2012, I began my investigations into the three-dimensional complex numbers again, and as such this website was needed in 2015.

Anyway, I selected 3 details from these two pdfs that I consider beautiful math ideas, where of course I skip a definition of what 'beautiful' is. After all, the property 'mathematically beautiful' is not a mathematical object but more a feeling in your brain.

Let me start with four pictures where I look into those 3 selected details; after that I will hang the two pdf texts into this post.

Below follow a few screenshots from the pdfs:

The first pdf is from Charles Frohman. Maybe you must download it first before you can read it; I should gain more experience with this, because the pdf format is such a hyper-modern development…;)
The first text is from 2010:

At last my old text from 2012:

(Later I saw there were some old notes at the end of my old pdf; you can ignore those, they have nothing to do with the Pythagoras stuff.)

There is little use in comparing these texts; I only wanted to make a proof that uses natural induction, so I could prove the theorem in all dimensions given the fact that we have a proof (many proofs in fact) for the theorem of Pythagoras in a right triangle. Charles his text is broader, and the main piece is the proof of that determinant version of the theorem of Pythagoras.

Finally, a remark about the second detail of mathematical beauty: Charles gave the example of a parallelogram where the square of the area equals the sum of the squares of the areas of the three projections onto the three coordinate planes. I think you can take any shape, a square or a circle, it does not matter; it only matters that it is a flat thing in 3D space. After I realized that, within the hour I had a proof for the general setting of this problem in higher-dimensional real space; maybe this is for some future post.

For the time being let us split and go our own ways & thanks for your attention in reading this.

Impending Nobel prize & recycled Pythagoras theorem & its 'inverse'.

Tomorrow the new Nobel prize in physics comes out; actually it is already past midnight as I type these words, so it is actually today. But anyway. I am very curious whether this year, 2020, the Nobel prize in physics will once more go to what I call those 'electron idiots'. An electron idiot is a person that just keeps on telling you that electrons are magnetic dipoles because of something like the Pauli matrices. Maybe idiot is too harsh a word; I think that a lot of that kind of behavior, or of ideas that can't be true, simply stays inside science because people want to belong to a group. In this case, if you repeat the official wisdom of electron spin you simply show that you belong to the group of physics people. And because people want to belong to a particular group they often show conformist behavior; when it comes to that, there is very little difference between a science like physics and your run-of-the-mill religion.

In this post I would like to share a simple experiment that everybody can do; it does not blow off one of your arms, it is totally safe, and it shows that those Pauli matrices are a very weird pipe dream. Here we go:

The official explanation of the Stern-Gerlach experiment always contains the following: if electron spin is measured along a particular direction, say the vertical direction, and you later measure it again in a direction perpendicular to the vertical, it once more has a 50/50 probability. So if it is measured vertically and, say, it was spin up, and you after that measure it in, say, a horizontal manner, once more the beam should split according to the 50/50 rule.

Ok, the above sounds like high-IQ stuff based on lots of repeated laboratory experiments. Or not? And what is a measurement? A measurement is simply the application of a magnetic field while you look at what the electron does; does it go this way or that way?

Electron pairs are always made up of electrons having opposite spins; in chemistry a pair of equal spins is named a non-bonding or an anti-bonding pair. Chemical bonds based on electron pairs cannot form if the electrons have the same spin.

Now grab a strong magnet, say one of those strong neodymium magnets, and place it next to your arm. Quickly turn the magnet 90 degrees or turn your arm 90 degrees; what happens? Of course 'nothing happens', but if electron spin followed that 50/50 rule, then 50% of your electron pairs would become anti-bonding pairs. As such your flesh and bones would fly apart…

Now does that happen? Nope, njet & nada. As far as I know it has never been observed that even one electron pair became an anti-bonding pair by a simple change of some applied external magnetic field…

As far as I know the above is the easiest day-to-day experiment you can do in order to show that electrons simply do not change spin when a different magnetic field is applied…

I have been saying this for over five years, but as usual when it comes to university people there is not much of a response. In that regard physics is just like the science of math: it has lost the self-cleaning mechanisms that worked in the past, and now, in 2020 and beyond, those self-cleaning mechanisms do not work anymore. It is just nothing. It is just a bunch of people from blah blah land. So let's wait & see if one of those 'electron idiots' will get the Nobel prize tomorrow.

Waiting, just waiting. Will another electron idiot get it?

Luckily I have a brain of my own. I am not claiming I am very smart; ok, maybe compared to other humans I do well, but on the scale of things like understanding the universe I am rather humble. I know 24/7 that a human brain is a low-IQ thing, but just like for all other monkeys it is the only thing we have.

Very seldom does the human brain flare up with a more or less bright idea that simplifies a lot of stuff. A long time ago I wanted to understand the general theorem of Pythagoras; I knew of some kind of proof, but I did not understand that proof. It used matrices and indeed the proof worked toward an end conclusion, but it was not written down in a transparent way and I just could not grasp what the fundamental ideas were.

So I made a proof for myself; after all, inside math the general theorem of Pythagoras is more or less the most important theorem there is. I found a way to use natural induction. When using natural induction you must first prove that 'something' is true for some value of n, say n = 2 for the two-dimensional theorem of Pythagoras. You must also prove that if it holds for a particular value of n, it is also true for n + 1. That is a rather powerful way to prove that some kind of statement, like the general theorem of Pythagoras, holds for all n, that is, holds in all dimensions.

I crafted a few pictures about my old work, here they are.

It is that form of a normal vector that I am still proud of, many years later.
This is a basic step in the proof of the so-called 'inverse Pythagoras theorem'.
And the same two ‘math cubes’ but now with a black edge.

It is from March 2018 when I wrote down the ‘inverse’ theorem of Pythagoras:

And from March 2017 when I wrote the last piece into the general theorem of Pythagoras:

Ok, let me leave it at that, and in about 10 hours we can observe whether another 'electron idiot' will win the 2020 Nobel prize in the science of physics. Till a future post, my dear reader. Live well and think well.