The Royal Institution had a new video out from someone of that Australian group that wants to build a quantum computer based on electron spin. The official version of electron and nuclear spin is that it is a tiny magnet; that is what I call the "tiny magnet model". I think that is nonsense, because this tiny magnet model leads to dozens and dozens of problems that are just not logical if electrons are in fact tiny magnets. In recent years I have wondered more and more why those physics professors themselves don't see all those holes in their version of electron spin. It is not a secret that I think electrons are magnetic monopoles; as such they have a one pole magnetic charge, and until proven otherwise my understanding is that this charge is permanent. That means there are two kinds of electrons: one kind with, say, a north pole magnetic charge and one kind with a south pole magnetic charge.
When, about seven years ago, I came across the results of the Stern-Gerlach experiment from 1922, after a bit of thinking my estimation was that electrons are likely magnetic monopoles. For years I tried to shoot holes in that idea, but that always failed, and after a few years I accepted it.
In this post I want to look explicitly at why there are three spectral lines in what is mysteriously called a spin one particle. By a spin one particle, as shown in the video, they mean an atom with two unpaired electrons.
One of the many dozen things wrong with the tiny magnet model is as follows: in the Stern-Gerlach experiment a beam of silver atoms splits into two beams, and the explanation for the opposing accelerations is that an inhomogeneous magnetic field is used together with the very mysterious property of electrons "anti-aligning" themselves with this applied magnetic field. But in a lot of other things, say the energy levels of electrons in atoms under the Zeeman effect, you never see a gradient of the magnetic field but straight in your face the actual strength of the magnetic field. That is one of the many things that is not very logical.
Let's start with some validation that David Jamieson is a believer of the church of the tiny magnets:
If you accept that electrons are not tiny magnets, a lot of solar phenomena become easier to understand. If you have a cylindrically shaped portion of plasma that rotates around its central axis, it will spit out a lot of electrons, and because that column or cylinder is now very positively charged, the magnetic field it creates becomes much stronger. That is precisely what we observe with all those flares and stuff.
Well, for seven years in a row people like David have not been interested at all. So one way or the other you just cannot claim these people are scientists; in my view it is a bunch of weirdos married to some form of weird groupthink. The groupthink is that all these things must be tiny magnets; they have zero experimental proof for that, so these people are weird.
But let's not dive into politics and why we pay these weirdos a taxpayer funded salary; let's go into what they are good at: spectral analysis. Now sunspots are places with strong magnetic fields (rotating column or cylinder kind of stuff), and in the picture below they take a line over such a sunspot and look at the spectrum at a particular frequency. Note that a line here is the projection of a plane, so you can have many contributions to the end result. So why do we see three spectral lines?
Ok, what does David mean by a spin one particle? That's not a photon; he means an atom with two unpaired electrons. The light you see is from electrons jumping down in energy in those atoms (I don't know what element, what atom, it is). But the situation is easy to understand:
1) Some of those atoms have two unpaired north pole electrons, 2) some of those atoms have two unpaired south pole electrons, and 3) some of those atoms have one unpaired electron of each kind.
That would explain the two outer lines; the middle line must be caused by electron jumps where there is much less magnetic field.
Please note that the 'line' is actually a plane, so the electron emissions can come from any height.
The sun, my dear reader, is a complicated thing, but if people like David can't explain stuff like the corona temperature, why should you believe their version of electron spin? It is time to go to the video; at about 32 minutes in, the spin stuff starts:
That was it for this update. Thanks for your attention, think well and live well.
Last week, after seven years, I finally found out that there is indeed at least one repeated Stern-Gerlach experiment. It is well known in quantum mechanics that the Pauli matrices can be used to calculate the probabilities of finding electrons in a particular spin state. And in a repeated SG experiment, if you turn the magnetic field 90 degrees, the Pauli formalism says the beam is divided 50/50. If for example you first apply a vertical magnetic field and after that a horizontal magnetic field, you should get 50% of the electrons with spin left and 50% with spin right.
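To make that 50/50 claim concrete, here is a minimal Python sketch (my own illustration, not taken from any of those experiments). The spin-up state along the vertical axis is expanded in the eigenbasis of the horizontal Pauli matrix, and the squared overlaps give the predicted beam fractions:

```python
import math

# spin-up along the z axis, written in the standard basis
up_z = (1.0, 0.0)

# eigenvectors of sigma_x = [[0, 1], [1, 0]], i.e. spin right / spin left
right_x = (1 / math.sqrt(2),  1 / math.sqrt(2))
left_x  = (1 / math.sqrt(2), -1 / math.sqrt(2))

def prob(eigvec, state):
    # Born rule: the probability is the squared overlap (all entries are real here)
    amp = eigvec[0] * state[0] + eigvec[1] * state[1]
    return amp * amp

print(prob(right_x, up_z), prob(left_x, up_z))  # both approximately 0.5
```

So whatever you think of the tiny magnet model, the 50/50 prediction itself is just this two-line overlap computation.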
But when I tried searches on terms like "Experimental proof for the Pauli matrices" or just "Repeated Stern-Gerlach experiment", nothing serious ever popped up in the last seven years.
Seven years ago I arrived at the conclusion that it is impossible that electrons are "tiny magnets", or for that matter have a bipolar magnetic field. A lot of things can be explained much better and more logically compared to mystifications like the Pauli exclusion principle. If electrons are magnetic monopoles, then it is logical that if they form pairs they must have opposite magnetic charges. And with the electron pair we already have a detail where the usual model of electrons as "tiny magnets" fails: two macroscopic magnets attract only if their magnetic fields are aligned. If two macro magnets are anti-aligned, they repel. So how the hell is it possible that two electrons only form a pair if they have opposite spins, only if they anti-align? What I still don't understand is why people like Pauli, Einstein, Feynman etc. never remarked that it is nonsense to suppose that electrons are tiny magnets. Note there is zero experimental proof for the assumption that electrons are tiny magnets. They just projected the Gauss law for magnetism onto electrons without ever remarking that you must have some fucking experimental proof. In the next picture you can see the experimental setup: you see two Stern-Gerlach experiments, and in the middle is an inner rotation chamber where they try to flip the spin of the electrons.
So Einstein must have given this SG experiment a thought, and never realized the impossibility of the Gauss law for magnetism for electrons.
Last week I found a nice pdf on the Frisch-Segrè experiment, and I would like to quote a few hilarious things from it:
“The physical mechanism responsible for the alignment of the silver atoms remained and remains a mystery” and quoting Feynman, “… instead of trying to give you a theoretical explanation, we will just say that you are stuck with the result of this experiment … ”
This is also the first time that I see this 'problem' actually stated: how is it possible that a tiny thing like an electron anti-aligns its spin with the applied external magnetic field? That is very, very strange; for example water molecules are tiny electric dipoles, and if they meet an electric field the only thing they want to do is align themselves with that electric field. Why do electrons gain potential energy in a magnetic field?
To understand how crazy this is: if you go outside and throw away a bunch of rocks, does half of them fall to earth while the other half flies into space? Nope, in the end all rocks try to get to the state of minimal potential energy.
But if you view electrons as magnetic monopoles, this weird detail of climbing in potential energy isn't there any longer: an electron with, say, a north pole magnetic charge will always go from the north pole to the south pole of a macroscopic magnetic field, and vice versa for an electron with a south pole magnetic charge. The weird energy problem is gone. You can compare that to a bunch of electrons and protons entering an electric field: they feel opposite forces, and that is how they both lower their potential energy.
At last let me give you the pdf. This pdf is not very useful because it is written by one of those weirdos that keep on believing that electrons are tiny magnets…
Once more I want to remark that when you see a physics professor doing his or her blah blah blah thing on electron spin, they just don't have any serious experimental proof that electrons actually have two magnetic poles. Furthermore, none of them has a problem with that. So why are we funding these weirdos with taxpayer money?
Ok, that was it for this post. Thanks for your attention.
Of course you can't correct every typo you make. But this time it was in the heart of the main result of the previous post, so it must be corrected. In the previous post I showed you a way of calculating the determinant of a 4×4 matrix using 2×2 minors. As such I used things like a 'complementary minor' inside a square matrix. The typo is easy to understand: it must not be comp(| AB12 |) but this: | comp( AB12 )|. The vertical bars mean you must take the determinant of what's between them, so the typo is that I took the determinant too early. First you must find the complementary minor, and only after that take the determinant…
This correctional post is two pictures long; first I show you the faulty one and after that the correct one.
My guess is that most readers who tried to understand the previous update found out for themselves that it was faulty. But I could not just ignore it, because it was in the main result, although the main result is not earth shaking math. Anyway, it is what it is and now it's corrected.
I remember that in the past I tried a few times to write the determinant of, say, an nxn matrix in terms of determinants of blocks of that matrix. It always failed, but now that I understand how the matrix version of the theorem of Pythagoras goes, all of a sudden it is a piece of cake. In this post I only give an example with a 4×4 matrix. If you take the first two columns of such a matrix, say AB, this is a 4×2 matrix with six 2×2 minors in it. If we name the last two columns CD, then for every 2×2 minor in AB there is a corresponding complementary 2×2 minor in CD. For example, if we pick the upper left 2×2 minor in our matrix ABCD, its complement is the lower right 2×2 minor at the bottom of CD. If you take the determinants of those minors and multiply each by the determinant of its complement, add it all up with a suitable +/− pattern, and voila: that must be the determinant of the whole 4×4 matrix.
This method could more or less easily be expanded to larger matrices, but I think it is hard to prove the +/− pattern you need for the minors of larger matrices. Because I am such a dumb person I expected that half of my six 2×2 minors would pick up a minus sign and the other half a plus sign, just like when you develop a 4×4 determinant along a row or column of that matrix. I was wrong; it is a bit more subtle, once more confirming I am a very, very dumb person.
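For what it is worth, the sign pattern for the 4×4 case can be checked numerically. Here is a Python sketch of mine (the matrix is just a random integer example): with the row pairs 0-indexed, the sign for rows (i, j) is (−1)^(i+j+1), which gives the pattern + − + + − +, indeed four plus signs and two minus signs:

```python
from itertools import combinations

def det(m):
    # recursive cofactor expansion along the first column
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**i * m[i][0] * det([row[1:] for k, row in enumerate(m) if k != i])
               for i in range(len(m)))

def minor(m, rows, cols):
    return [[m[r][c] for c in cols] for r in rows]

M = [[2, 1, 3, 0],
     [1, 4, 1, 2],
     [0, 2, 5, 1],
     [3, 1, 0, 2]]   # just a random integer example

total = 0
for rows in combinations(range(4), 2):
    comp = tuple(r for r in range(4) if r not in rows)
    sign = (-1)**(rows[0] + rows[1] + 1)   # pattern over the six pairs: + - + + - +
    total += sign * det(minor(M, rows, (0, 1))) * det(minor(M, comp, (2, 3)))

assert total == det(M)
```

This is nothing deeper than the classical Laplace expansion along the first two columns, but it is nice to see the machine confirm the uneven sign pattern.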
I skipped giving you the alternative way of calculating determinants: the determinant is also a sum of so-called signed permutations on the indices of the entries. If you have never seen that, I advise you to look it up on the internet.
Because I skipped existing knowledge that is already widely available, I was able to do the calculation in just four images! So it is a short post. Ok ok, I also left out how I did it in detail, because writing out a 4×4 determinant already gives 24 terms of four factors each. That's why this post is only four pictures long…
If you want, you can also use the expression of the determinant as a sum of signed permutations. It is a very cute formula. Wiki title: Leibniz formula for determinants. And a four minute video; it starts a bit slow but the guy manages to put in most of the important details in just four minutes:
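Since the formula is so cute, here is a short Python sketch of it (my own illustration): sum over all permutations, with the sign read off from the inversion count, checked against a 3×3 determinant you can do by hand:

```python
from itertools import permutations

def leibniz_det(m):
    # Leibniz formula: sum of signed products over all permutations of the columns
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        # the sign of the permutation is (-1) to the number of inversions
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        product = 1
        for row in range(n):
            product *= m[row][perm[row]]
        total += (-1)**inversions * product
    return total

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(leibniz_det(M))  # -3, the same value the usual cofactor expansion gives
```

Of course this is hopeless for large matrices (n! terms), but for understanding where the 24 terms of a 4×4 determinant come from, it is perfect.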
Ok, that was it for this post. I filed it under the category 'Pythagoras stuff' because I tried similar stuff in the past, and only with the knowledge of the matrix version of the Pythagoras theorem does it all become a bit easier to do.
To be honest, this post is not carefully thought through. I felt like starting to write, and at first I wanted to write it more in the direction of a proof based on the volumes of the parallelepiped and its expansion columns. But then I realized that my cute calculating scheme of turning an nxd matrix A into a square nxn matrix AP could go wrong. So I wanted to address that detail too, but I hadn't thought it out enough.
The answer to why it can go wrong is rather beautiful: take any nxd matrix (of course the number of columns cannot exceed n, because otherwise it is not a proper d-dimensional parallelepiped), say a 10×3 matrix. The three columns span a parallelepiped in 10-dimensional real space. The 'smallest' parallelepiped that still has a three dimensional volume is one of the many minors possible in the 10×3 matrix; with 'smallest' I mean the parallelepiped that uses the least number of coordinates in our 10-dimensional space. Now in my calculating scheme, an algorithm if you want, I told you to start at the top of the matrix A and add more and more columns. But if A is made of just one 3×3 minor, say at the bottom of A, it is crystal clear my calculating scheme goes wrong, because it now produces only zero columns.
And if that happens, when in the end you take the determinant of the square matrix AP, you get zero, and of course that is wrong. These are exceptional cases, but it has to be addressed.
Of course there is no important reason for the calculation scheme to start at the top of the matrix; just start at the position of the lone 3×3 minor. In general: if you start with an nxd matrix, ensure your first expansion column is not a zero column. After that, the next expansions should all go fine.
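Here is my own Python rendering of that fix, as a sketch only: expansion columns are built by projecting candidate standard basis vectors away from the span so far, and any candidate that would give a (near) zero column is simply skipped instead of breaking the scheme. I normalize the added columns, so the determinant of the completed matrix directly returns the volume of the starting parallelepiped (the columns of the input are assumed linearly independent):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def det(m):
    # recursive cofactor expansion along the first column
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**i * m[i][0] * det([row[1:] for k, row in enumerate(m) if k != i])
               for i in range(len(m)))

def complete_to_square(cols, n):
    """Append normalized perpendicular columns until there are n columns."""
    ortho = []                       # orthonormal versions, used for projecting
    for c in cols:
        v = list(c)
        for u in ortho:
            p = dot(v, u)
            v = [a - p * b for a, b in zip(v, u)]
        norm = math.sqrt(dot(v, v))
        ortho.append([a / norm for a in v])
    result = [list(c) for c in cols]
    for k in range(n):               # try the standard basis vectors e_k in order
        if len(result) == n:
            break
        v = [1.0 if i == k else 0.0 for i in range(n)]
        for u in ortho:
            p = dot(v, u)
            v = [a - p * b for a, b in zip(v, u)]
        norm = math.sqrt(dot(v, v))
        if norm > 1e-9:              # skip candidates that would give a zero column
            u = [a / norm for a in v]
            result.append(u)
            ortho.append(u)
    return result

cols = [(2.0, 0.0, 0.0)]             # a length-2 vector; e_0 lies in its span
square = complete_to_square(cols, 3) # so e_0 is skipped, e_1 and e_2 are used
m = [[col[i] for col in square] for i in range(3)]
print(abs(det(m)))                   # 2.0, the length of the starting vector
```

The skipping step is exactly the "ensure your first expansion column is not a zero column" rule above, applied at every stage rather than only at the start.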
This post is five pictures long. If you haven't mastered the calculation scheme of how to turn a non-square matrix into a square matrix, you should first look up the previous posts until you have a more or less clear mental picture of what is going on.
Ok, that was it for this post on the highly beautiful calculation scheme for the volumes of parallelepipeds in higher dimensions.
To be honest I like the Unzicker guy; he is from Germany I believe, and he always attacks the standard model of particle physics. According to him there are zillions of problems with the standard model, and likely he is right about that. But he fully buys the idea that electrons must be magnetic dipoles, without any experimental confirmation at all. So posting a video of him talking weird stuff about electrons is not a way to ridicule him. On the contrary: precisely because he always tries to attack the ideas inside the standard model, he is in himself a perfect example of how the physics community swallows all those weird explanations of electron spin.
Speaking for myself, I don't think that electrons have their spins 'up' or 'down'. I don't think they are tiny magnets with two magnetic poles; rather, they are magnetic monopoles that come with only one magnetic charge… My estimate is that this magnetic charge is permanent, which means there is no such thing as a spin flip of an individual electron.
In the Unzicker video Alexander asks for help with differentiation on the quaternions or so. Well, here I have done my utmost best to craft all kinds of spaces where you can integrate and differentiate, stuff like 3D complex numbers, 4D complex numbers etc., and then a weirdo comes along asking about the quaternions… Differentiating on the quaternions is a true horror, and that is caused by the property that in general the quaternions do not commute. I wrote a one picture long explanation for that. The problem is that differentiation of, say, the square function on the quaternions destroys information. That is why there is no such thing as 'complex analysis on the quaternions'; it just doesn't exist. Ok, let's go to the first video. It is not that very good because he constantly throws in a lot of terms like SO2 and SO3, but for an audience of physics people that is allowed of course.
Because it is still the year 2022, it is still one hundred years since the Stern-Gerlach experiment was done. The next short video is relatively good of its kind; there are a lot of videos out there about the SG experiment and most are worse. In this video from some German there is at least some more explanation, like why it is not the Lorentz force, because these are neutral silver atoms. But as always, all explanations out there miss why exactly electrons anti-align themselves with the applied external magnetic field. For example, water molecules are tiny electric dipoles; if you apply an electric field to clean water, all these tiny electric dipoles align 100% with the electric field. So why do electrons not do that?
As always: electrons being magnetic monopoles is a far better explanation for what we observe. But all these physics people, one hundred percent of them, have no problem at all with the fact that there is no experimental evidence that electrons are indeed 'tiny magnets'. That is what I still don't understand: why don't they see that their official explanations are not very logical once you start thinking about them? Why this weird behavior?
Ok, let's dig into why differentiation on the quaternions is a total horror.
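The one picture long explanation is below; the same point can also be made as a small Python sketch of my own. Because quaternion multiplication does not commute, the difference f(q + h) − f(q) for f(q) = q² equals qh + hq + h², which cannot be rewritten as (2q)h; worse, for some directions h the first order part cancels completely, so information about h is destroyed:

```python
def qmul(p, q):
    # Hamilton product of two quaternions (a, b, c, d) = a + b*i + c*j + d*k
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))   # (0, 0, 0, 1)  = k
print(qmul(j, i))   # (0, 0, 0, -1) = -k, so i and j do not commute

# first order part of f(q) = q^2 at q = i, in the direction h = j:
q, h = i, j
first_order = tuple(x + y for x, y in zip(qmul(q, h), qmul(h, q)))  # qh + hq
print(first_order)  # (0, 0, 0, 0): k and -k cancel, the direction j is invisible
```

So at q = i the difference quotient simply cannot see the direction j at first order; that is the information destruction in a nutshell.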
The last video is a short interview with John Wheeler where he explains the concept of positrons being electrons that travel back in time. At some point John talks about an electron and a positron meeting and annihilating each other. Well, it has to be remarked that this doesn't always happen. They can scatter too, and why could that be? Well, it fits with my simple model of electrons being magnetic monopoles: positrons and electrons only kill each other if they also have the opposite magnetic charge…
Ok, that was it for this post. Thanks for your attention.
A long, long time ago you likely learned how to calculate the sine in a right triangle. That was something like the length of the opposite side divided by the length of the hypotenuse. But now that we have those simple expressions for the volume of a non-square matrix, we can craft 'sine like' quotients for the minors of a matrix against its parent matrix. I took a simple 4×2 matrix, so 4 rows and 2 columns, wrote out all six 2×2 minors and defined this sine like quotient for them. As far as I know this is one hundred percent useless knowledge, but it was fun to write anyway. Likely also a long time ago you learned that the squares of the sine and cosine of some angle add up to one. In this post I formulated it a little bit differently, because I want to make just one kind of sine like quotient and not six that are hard to tell apart. Anyway, you can view these sine like quotients as the shrinking factor when you project the parent matrix onto such a particular minor. With a projection you of course leave two rows out of your 4×2 parent matrix, or you make those rows zero; it is just what you want. The parent 4×2 matrix A we use is just a two dimensional parallelogram that hangs in 4D space, so its "volume" is just an area. I skipped the fact that this area is the square root of 500. I also skipped calculating the six determinants of the minors, squaring them and adding them up so that we must get 500. But if you are new to this kind of matrix version of the good ol' theorem of Pythagoras, you definitely must do that in order to gain some insight and a bit of confidence in how it all works and hangs together.
But this post is not about that; it only revolves around making these sine like quotients. And if you take these six quotients, square them and add them all up, the result is one. Just like sin^2 + cos^2 = 1 on the real line.
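A minimal Python sketch of the construction (with a hypothetical 4×2 matrix of my own, not the one from the pictures below):

```python
import math
from itertools import combinations

A = [[1, 2],
     [3, 4],
     [5, 6],
     [7, 8]]   # a hypothetical 4x2 example

def det2(r1, r2):
    return r1[0] * r2[1] - r1[1] * r2[0]

# the six 2x2 minors, one for every pair of rows
minors = [det2(A[i], A[j]) for i, j in combinations(range(4), 2)]

# matrix Pythagoras: the squared area is the sum of the squared minor determinants
area = math.sqrt(sum(d * d for d in minors))

# the sine like quotients: the shrink factor of the projection onto each minor
sines = [d / area for d in minors]
print(sum(s * s for s in sines))   # 1.0 (up to rounding)
```

Squaring and adding the quotients just divides the sum of squared minors by the squared area, so the result is one by construction; that is the whole point of the post in three lines of code.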
Please notice that the way I define sine like quotients in this post has nothing to do with taking the sine of a square matrix. That is a very different subject and not a "high school definition" of the sine quotient. This post is just three pictures long, here we go:
So after all these years of only a bunch of variables in most matrices I show you, finally a matrix with just integer numbers in it… Now you have a bit of proof that I can be as stupid as the average math professor…;)
But seriously: the tiny fact that the squares of these six sines add up to one is an idea that is perfectly equivalent to the Pythagoras expression as a sum of squares. Thanks for your attention.
The reason I call these two posts a sketch of (a proof of) the full theorem of Pythagoras is that I want to leave out all the hardcore technical details. After all, it should be a bit readable, and this is not a hardcore technical report or so. Besides this, making those extra matrix columns until you have a square matrix is so simple to understand: it is just not possible that this method does not return the volume of those parallelepipeds. I added four more pictures to this sketch; that should cover more or less what I skipped in the previous post. For example, we start with a non-square matrix A and turn it into a square matrix M, and this matrix M always has a non-negative determinant. But in many introductory courses on linear algebra you are taught that if you swap two columns in a matrix, the determinant changes sign. So why does this not happen here? Well, if you swap two columns in the parallelepiped A, the newly added columns that make it a square matrix change too. So the determinant always keeps on returning the positive value of the volume of any parallelepiped A. (I never mentioned that all columns of A must be linearly independent, but that is logical; we only look at stuff with a non-zero volume.)
Just an hour ago I found another pdf on the Pythagoras stuff; the author, Melvin Fitting, has also found the extension of the cross product to higher dimensions. At the end of this post you can view his pdf.
Now what all these proofs of the diverse versions of the Pythagorean theorem have in common is that you have some object, say a simplex or a parallelepiped, and the proofs always need the technical details that come with such an object. But a few posts back, when I wrote on it for the first time, I remarked that those projections of a parallelogram in 3D space always make it shrink by a constant factor. See the post 'Is this the most simple proof ever?' for the details. And there it simply worked for all objects you have, as long as they are flat. Maybe in a next post we will work that out for the matrix version of the Pythagoras stuff.
Ok, four more pictures as a supplement to the first part. I am too lazy to repair it, but this is picture number 8 and not number 12:
For me it was a strange experience to construct a square matrix where, if you take the determinant of the thing, the lower dimensional volume of some other thing comes out. In this post I will calculate the length of a 3D vector using the determinant of a 3×3 matrix. Now why was this a strange experience? Well, from day one in the lectures on linear algebra you are taught that the determinant of, say, a 3×3 matrix always returns the 3D volume of the parallelepiped that is spanned by the three columns of that matrix. But it is all relatively easy to understand: suppose I have some vector A in three dimensional space. I put this vector in the first column of a 3×3 matrix. After that I add two more columns that are both perpendicular to each other and to A. Then I normalize the two added columns to one. And if I now take the determinant, you get the length of the first column. That calculation is actually one of the examples below in the pictures.
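Here is that construction as a Python sketch with numbers of my own choosing (not the example from the pictures): the vector (2, 3, 6) has length 7, and after adding two normalized perpendicular columns the determinant returns exactly that length:

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def det3_cols(c1, c2, c3):
    # determinant of the 3x3 matrix with columns c1, c2, c3
    return (c1[0]*(c2[1]*c3[2] - c2[2]*c3[1])
          - c2[0]*(c1[1]*c3[2] - c1[2]*c3[1])
          + c3[0]*(c1[1]*c2[2] - c1[2]*c2[1]))

def normalize(v):
    n = math.sqrt(sum(a * a for a in v))
    return tuple(a / n for a in v)

v = (2.0, 3.0, 6.0)               # length sqrt(4 + 9 + 36) = 7
p = normalize((-3.0, 2.0, 0.0))   # perpendicular to v by construction
q = normalize(cross(v, p))        # perpendicular to both v and p

print(abs(det3_cols(v, p, q)))    # approximately 7.0
```

Because the two added columns are orthonormal and perpendicular to v, the parallelepiped is just the vector v extruded by a unit square, so its 3D volume equals the length of v.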
Well, you can argue that this is a horribly complicated way to calculate the length of a vector, but it works with all parallelepipeds in all dimensions. You can always make a non-square matrix, say an nxd matrix with n rows and d columns, square. Such an nxd matrix can always be viewed as some parallelepiped if it doesn't have too many columns; d must never exceed n, because that would not be a parallelepiped thing.
Orthogonal matrices. In linear algebra the name orthogonal matrix is a bit strange. It is more than the columns being orthogonal to each other; the columns must also be normalized. Ok ok, there are reasons that inside linear algebra it is named orthogonal, because if you name it an orthonormal matrix it is more clear that the norms must be one, but then it is more vague that the columns are perpendicular. So an 'orthogonalnormalized' matrix would be a better name, but in itself that is a very strange word and people might think weird things about math people. Anyway, apart from the historical development of linear algebra, I would like to introduce the concept of perpendicular matrices, where the columns are not normalized but are perpendicular to each other. In this post we will always have some non-square matrix A, and we add perpendicular columns until we have a square matrix.
Another thing I would like to remark is that I always prefer to give some good examples and try not to be too technical. So I give a detailed example of a five dimensional vector and how to make a 5×5 matrix from it whose determinant is the length of our starting 5D vector. I hope that is much more readable compared to some highly technical writing that is hard to read in the first place and where the key ideas are hard to find because it is all so hardcore.
This small series of posts on the Pythagoras stuff was motivated by a pdf from Charles Frohman (you can find downloads in previous posts), in which he proves what he names the 'full' theorem of Pythagoras via calculating the determinant of A^tA (here A^t represents the transpose of A) in terms of a bunch of minors of A. A disadvantage of my method is that it is not as transparent as to why we end up with that bunch of minors of A. On the other hand, the adding of perpendicular columns is just so cute from the mathematical point of view that it is good to compose this post about it.
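That identity can at least be checked numerically. A hypothetical Python sketch with a 5×3 matrix of my own: det(A^tA) equals the sum of the squares of all ten 3×3 minors of A (in the linear algebra literature this identity is known as the Cauchy–Binet formula):

```python
from itertools import combinations

def det3(m):
    # determinant of a 3x3 matrix given as a list of rows
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

A = [[1, 0, 2],
     [0, 1, 1],
     [3, 1, 0],
     [1, 2, 1],
     [0, 0, 4]]   # a hypothetical 5x3 example

# the Gram matrix A^t A (3x3)
G = [[sum(A[k][i] * A[k][j] for k in range(5)) for j in range(3)]
     for i in range(3)]

gram_det = det3(G)
minor_sum = sum(det3([A[i], A[j], A[k]])**2
                for i, j, k in combinations(range(5), 3))

print(gram_det == minor_sum)   # True: both are the squared 3D volume
```

Everything here is integer arithmetic, so the two sides agree exactly, not just up to rounding.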
The post is eight pictures long, so after a few days of thinking you can start to understand why this expansion of columns is, say, 'mathematically beautiful', where of course I will not define what 'math beauty' is, because beauty is not a mathematical thing. Here we go:
Inside linear algebra you could also name this the theorem of the marching minors. But hey, it is time to split; see you in the next post.
A few posts back I showed you that pdf written by Charles Frohman where he shows a bit of the diverse variants of the more general theorem of Pythagoras. At school you mostly learn only the theorem for a triangle or a line segment, and it never goes any further. But there is so much more; in the second half of this post I show you three vectors in 4D space that span a parallelepiped that is three dimensional. From the volume of such a thing you can also craft some form of the Pythagorean theorem: that parallelepiped can be projected in four different ways, and the sum of the squares of the four volumes you get equals the square of the volume of the original parallelepiped. I would like to remark that I hate the word 'parallelepiped'; if you, like me, often work without any spell correction, it is always a horrible word…;)
Now my son came walking by; he read the title of my post and remarked: it sounds so negative or sarcastic, this 'full theorem'. And no no no, I absolutely do not mean this in any negative way. On the contrary, I recommend you at least download Charles's pdf, because after all, it is better than what I wrote on the Pythagoras subject about 10 years ago.
But back to this post: what Charles names the full theorem of Pythagoras is likely that difficult looking matrix expression, and from the outside it looks like you are in for a complicated ride if you want to prove that thing. The key observation is that all those minor matrices are actually projections of the n x k matrix you started with. So that is what the first part of this post is about.
The second part is about a weird thing I heard more than once during my long lost student years: outside of the outer product of two vectors, we have nothing in say 4D space that gives a vector perpendicular to what you started with. I always assumed they were joking, because it is rather logical that if you want a unique one-dimensional normal vector in 4D space, you must start with a three-dimensional thing. That is what we will do in the second part: given a triple of vectors (A, B, C) in four dimensional space, we construct a normal vector M to it. This normal vector, after normalization of course, gives a handy way to calculate the volume of any of those parallelepiped things that hang out there.
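A small Python sketch of that construction, with hypothetical vectors of my own: the i-th coordinate of the normal M is, up to an alternating sign, the 3×3 minor you get by deleting row i of the 4×3 matrix with columns A, B, C:

```python
import math

def det3(m):
    # determinant of a 3x3 matrix given as a list of rows
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def normal4(a, b, c):
    # generalized cross product in 4D: signed 3x3 minors of the 4x3 matrix (a b c)
    rows = [[a[i], b[i], c[i]] for i in range(4)]
    return tuple((-1)**i * det3([rows[k] for k in range(4) if k != i])
                 for i in range(4))

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = (1, 2, 0, 1)
b = (0, 1, 1, 0)
c = (2, 0, 1, 1)
m = normal4(a, b, c)

print(dot(m, a), dot(m, b), dot(m, c))   # 0 0 0: perpendicular to all three

# the length of m is the 3D volume of the parallelepiped spanned by a, b, c
volume = math.sqrt(dot(m, m))
print(volume)   # 6.0 for this example
```

The perpendicularity is automatic: the dot product of M with any of the three vectors is the determinant of a 4×4 matrix with a repeated column, which is zero.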
Ok, let's go: six pictures long but easily readable, I hope.
All that is left is trying to find that link to Charles's pdf again.