Showing posts with label Physics. Show all posts

Tuesday, March 10, 2020

Why the Theory of the Multiverse is Unscientific

Note: I wrote this in another place in response to someone's question. We were discussing this video about the theory of multiverses that was posted to YouTube a few days ago.


Sean Carroll explains that there are two possibilities, either the branching is infinite or finite (I think that covers just about everything).

With infinite branching there would have to be an infinite amount of energy and time, because with an infinite amount of branching drawing from a finite pool of energy, at some point the energy would be exhausted. So there would have to be an infinite amount of energy. I heard Sean Carroll make exactly this argument at a talk a few years ago.

By his own admission, if there were an infinite amount of energy and time then ALL possible universes will be seen. This includes our current universe, complete with 13+ billion years of existence, AND it would mean that an exact copy of this universe exists except in that universe Harry Potter, Hogwarts, and magic are all real and part of the universe. There also exists a universe that consists of ONLY the room you are currently in, complete with computer/phone/tablet that give the appearance of you interacting with the outside world, but the outside world doesn't exist. And whenever you leave the room that universe will cease to exist.

Again these are all arguments that Sean Carroll makes himself.

But this means that there exists a universe like our own where there is no such thing as the multiverse and it behaves as if there were only one timeline and absolutely no branching. There would also exist a universe that looks exactly like ours but the multiverse exists just as he describes it.

But how do we know which one we are in? Because if we go looking for evidence and don't find it then we don't know if we are in the universe with no multiverses, or if we could be in a universe where it is impossible to detect the multiverse. Either way we can never know until we find evidence to conclusively show it one way or the other. But that evidence does not exist. So by his own logic, if the branching is infinite, and there is an infinite amount of energy, then we can never know if multiverses don't exist, or if we just haven't seen them yet. Either way the universe remains the same and the concept of multiverses means nothing.

Next, the second possibility he brings up is a finite amount of branching. This solves the infinite energy problem, but without an infinite amount of energy there is only one universe, even if locally it functions like a multiverse. (By local, that can mean just on earth, or within the visible universe 13+ billion light-years away. On these scales the room around you and all the galaxies 13+ billion light-years away are all considered local.) A locally branching multiverse would go against our current understanding of physics, but it would still be possible if and only if the things making it possible are beyond our current ability to understand, calculate, or observe.

In the video he admits this (starting at time 14:40) where he says "but the details hinge on quantum gravity, cosmology, the theory of everything, and all that stuff." He is essentially saying that there exists something that we don't know about right now that makes the multiverse work. This is essentially a scientific variation of the God of the Gaps argument. It comes down to "there is no other way for this to work, so there is something, we don't know what it is, that makes it work." You can call it quantum gravity, cosmology, the theory of everything, God, Bob your neighbor, magic, a lazy dog, or anything you want it doesn't matter. It simply is a "thing" that makes it possible for his idea to be correct.

But again, we have nothing that specifically points to a multiverse, so it doesn't matter what you call the thing that makes it possible, because in the end it is something undefined to support something unproven. You could just as easily say, "The magic of Harry Potter is real but the details hinge on quantum gravity, cosmology, the theory of everything, and all that stuff" and be just as scientifically valid. Which means not at all.

"But! There is MATH behind it!"
That's nice. You can put math behind any idea. It doesn't make it real.

There is no evidence that points us specifically towards a multiverse. There is no physical motivation other than to resolve a paradox that we made for ourselves. The paradox does not come from the universe. It comes from how we think about the universe. We do not resolve a paradox that we made ourselves by insisting that the universe change to fit our ideas. Our ideas must change to fit reality.

Sunday, December 29, 2019

Objectivity, Quantum Mechanics, and Bad Logic

Recently I read a paper where some physicists were testing interesting repercussions of quantum mechanics. Their work made eye-grabbing headlines such as Objective Reality Doesn't Exist, Quantum Experiment Shows. That's quite a bold claim considering the long history in philosophy specifically on the question of subjectivity vs. objectivity. Seeing them confidently dismiss objectivity I knew I had to see why they were so confident of their conclusions. Unfortunately the physicists walked naively into a well known philosophical topic, like a knitting club into a Black Sabbath concert.

So let us take a look at what led them to their conclusion that objective reality does not exist. This may get a little technical, but stay with me.

They designed an experiment that could specifically test objectivity (O), locality (L), and free will (F). They point out that previous proofs have established L and F, and LF together, so if they test OLF then the experiment can establish or undermine the idea of objectivity (O).

In their paper the physicists defined objectivity as the existence of "observer-independent facts, stating that a record or piece of information obtained from a measurement should be a fact of the world that all observers can agree on—and that such facts take definite values even if not all are “co-measured”." Thus they are testing whether a measurement creates an objective fact for everyone or if the result depends on the observer.

Locality means each measurement must happen in such a way that the result only depends on local factors and not on any other measurement in the experiment. (In technical language the future light cone of one measurement cannot be in the past light cone of another measurement.)

The last assumption is free will. The people in the experiment must be free to make their choice and not have their choice predetermined in any way, either by the first measurement or any outside factors.

In the experiment they have someone (who they call Alice) who is given a choice. In a lab Alice has a friend making measurements of individual photons. The friend can turn on a detector, thus choosing to measure any photons that come through, or turn it off, thus choosing not to make any measurements.

Alice can check what her friend measured, but cannot know beforehand if her friend even had photons to measure. So Alice might check on her friend and find they didn't make any measurements because no photons passed through the lab.

Alternatively Alice can check to see if a photon went through the lab and that her friend could have possibly measured it. But Alice cannot check if her friend actually measured it and check if a photon went through the lab (in technical language Alice can measure whether her friend is entangled with a photon or not).

So there are four possibilities. The friend can choose to measure and Alice looks and confirms that her friend made a measurement. The friend can choose not to measure and Alice looks and confirms that no measurement was made. The friend can choose to measure and Alice doesn't look but establishes that her friend is entangled with a photon that could have possibly been measured but doesn't know if her friend measured it. Or the friend chooses not to measure and Alice doesn't look but checks if her friend is entangled with a photon.
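The four branches described above can be enumerated in a few lines. This is just an illustrative sketch; the labels are my own shorthand, not terminology from the paper.

```python
from itertools import product

# Enumerate the four branches of the thought experiment:
# the friend either measures the photon or not, and Alice either
# looks at the friend's result directly or only tests for entanglement.
friend_choices = ["measures the photon", "does not measure"]
alice_choices = ["looks at the friend's result", "only tests for entanglement"]

branches = list(product(friend_choices, alice_choices))
for friend, alice in branches:
    print(f"friend {friend} / Alice {alice}")

print(len(branches))  # 4 combinations in total
```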

In the actual experiment the physicists didn't have someone checking in on their friend. Instead "Alice" was a series of polarization filters and a detector, and the friend was a separate set of filters and a detector.

To make sure they were measuring what they thought they were measuring, the physicists set up their experiment such that there were two sets of detectors, "Alice" and "Bob", each with their own "friend". The photons measured by the friends were entangled, which means what one measures will be the complement of the other. This symmetry allows for easier checking for statistically significant results.

What the physicists found after running their experiment for a few months is that collectively their assumptions were being violated. That is, OLF all together were not consistent with what was being measured. Because both L and F, and LF had been proven through other proofs that meant that O was the weak link.

If OLF all together was true then regardless of whether Alice or Bob checked if their friends made a measurement, or checked if they could have made a measurement, then there should have been confirmation that a measurement was made either way. If Alice and Bob never checked directly on their friends but only measured whether they had an opportunity to make a measurement then they could know just from that whether their friends made a measurement.

But their results showed that neither Alice nor Bob could tell if their friend had made a measurement without actually looking at their friend. They could not infer from the mere possibility of a measurement that a measurement had been made.

This sets up a paradox. The friends can make a measurement and know definitively what state the photon is in, but for Alice and Bob it is as if a measurement had never been made. That is, there is no way for Alice and Bob to know that a measurement had been made without actually measuring their friends.

The "facts" created by their friends (i.e. what they measured) cannot transfer to any other observer without a real transfer of data. If a measurement is made, that measurement does not somehow change the fabric of reality such that we can know that something was measured, even if we don't know what the result was. That information only stays with the first observer and does not transfer in any way to any other observer without the second observer observing the first. It is like there is no master list of all interactions and measurements in the universe that someone could hypothetically look at.

But the major assumption is that once something has been measured, that fact is established and everyone in the universe should, in principle, be able to agree with it. But Alice and Bob cannot establish if a measurement even took place, thus for them that fact does not exist. The interpretation given by the physicists is that two realities exist concurrently. One where a measurement was made, and one where it was not made. This, they conclude, shows that what is "reality" depends on what measurements were made by a subject. Hence everything is subjective and there is no objective reality.

Now the physicists do not make their boldest claims in their paper. They keep it strictly technical and straightforward. I can find nothing to disagree with in their paper. I may not know the technicalities of their setup, but their experiment is not something so unknown that it makes their work suspicious. It is an experiment that would probably be talked about in an undergrad physics class. So their setup is standard and well known. Their methods are standard and well known.

I find nothing wrong with their conclusions in their paper. But like I mentioned at the start, they walked naively into a well known philosophical topic, like a knitting club into a Black Sabbath concert.

In their writings outside their paper they make conclusions that are not logically backed up by their research, nor even backed up by logic at all. Their experiment does show something interesting: that there cannot be a "master list" of the states of all particles in the universe. That is, when an observer makes a measurement, that does not change the universe in such a way that anyone else can know that a measurement was made, let alone know what the value was.

This shows that all observers are independent subjects. Each of our observations are our own. But this does NOT disprove objective reality.

In setting up the experiment, even just as a thought experiment, the physicists had to assume "objective reality". They first had to have "observers" that could be established as observers. They had to have photons and everyone agree, objectively, as to what a photon was, and how to measure it. All of these are objective and are necessary for us to even have the concept of the subjective.

David Hackett Fischer, a famous historian, wrote an entire book berating historians for their use of egregious logical fallacies. Even though his book was directed at those of his own profession, its concepts apply to all areas of academic study. It should perhaps be required reading for anyone getting a PhD. In his book he takes a moment to comment on "subjectivity" vs. "objectivity".
"'Subjective' is a correlative term which cannot be meaningful unless its opposite is also meaningful. To say that all knowledge is subjective is like saying that all things are short. Nothing can be short, unless something is tall. So, also, no knowledge can be subjective unless some knowledge is objective." -- Historians' Fallacies by David Hackett Fischer, footnote 4, pp. 42-43.
Essentially these physicists, though very gifted, stepped out of their field of study and made a freshman level mistake of logic. In their hyperbole they jumped to an illogical conclusion. As one of my philosophy professors might say, "They abused the fundamental definitions of the words such that the words had no meaning." They showed that subjective knowledge is a thing and then extended that knowledge to encompass all knowledge. They fell victim to my favorite logical fallacy.

Without thinking about it they set up an objective experiment to show that objective reality does not exist.


Normally this is the point where someone would say, "Don't step out of your own field!" but I think that is also a fallacy. Instead I say, "Before jumping to conclusions try to think critically about your conclusions to see if they make sense. If you think you have come to some major conclusion that entirely overthrows everything we know, 99.998% of the time you messed up somewhere."

Tuesday, October 31, 2017

Why the Neutron Star Collision was an Important Observation

I am going to take a moment to actually talk about astrophysics (I know, it's a shock! To actually talk about what I do).

Back on August 17th the LIGO gravitational wave observatory detected gravitational waves from the collision of two neutron stars. This was quickly followed by the detection of a gamma ray burst by the Fermi space telescope, and then a host of other observations from other telescopes. This event quickly became the most heavily observed single event in astronomy. There are several good general reviews of what happened that are very accessible to the average reader (NY Times, NPR, Veritasium, and one more in depth from Phys.org, there is also a whole webpage about the detection with links to many papers).

So I won't go over the basics because you can get that from other sources, but I will talk about some of the more technical implications to the detection.

First, gold. Gold is very important in astronomy because it is very heavy and hard to make. Usually astronomers ignore the different types of elements; we are usually only concerned about hydrogen and helium. There is a joke in astronomy that the astronomer's periodic table of elements is the simplest one since we only have three elements: hydrogen, helium, and "metals". We refer to everything that is not hydrogen or helium as "metals" (and that includes decidedly non-metallic elements like nitrogen, oxygen, carbon, and neon). It keeps things simple.
But when it comes to metals we are concerned with what we call metallicity, that is the relative amount of metals compared to hydrogen and helium. Because hydrogen and helium make up a combined 98% of the mass of the universe, on a cosmological scale everything else is just a rounding error. But on smaller scales (small, as in the size of a cluster of galaxies) the amount of metals becomes important. Except for a tiny amount of lithium, everything that is not hydrogen and helium was produced inside stars, one way or another.

When a star goes through its life cycle it will return a significant amount of mass back into the interstellar medium in the form of stellar winds. For a large star with an initial mass of 10-20 times the mass of the sun, the star may return 80-90% of its mass to the interstellar medium in the form of stellar winds, or a nova or even a supernova. So while a star may start out as almost entirely hydrogen and helium when it forms, the gas that returns to the interstellar medium will be slightly enriched with metals, that is, the metallicity will go up. This enriched gas that has been returned to the interstellar medium will go on to form a second generation of stars, which will still be almost entirely hydrogen and helium, but now with a tiny fraction more of metals. The process will repeat, and each time it does the gas will become more enriched with metals. In order to have enough metals that rocky planets such as the earth can form the gas must go through at least 20 star formation and enrichment cycles. To date, the highest metallicity ever observed in a star is about three times the metallicity of the sun.
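The enrichment cycle described above can be sketched as a toy model. The per-cycle yield below is a made-up constant chosen purely for illustration; real yields depend on the stellar population and generally decline as gas is locked up in long-lived stars.

```python
# Toy chemical-enrichment model (illustrative only).
# Metallicity is tracked in parts per 100,000 by mass to keep the
# arithmetic exact; the per-cycle yield is an assumed constant.
yield_per_cycle = 70      # metals added per cycle (hypothetical yield)
z_solar = 1400            # ~1.4% metals by mass, roughly solar, same units

z, cycles = 0, 0
while z < z_solar:        # each generation of stars enriches the gas a little
    z += yield_per_cycle
    cycles += 1

print(cycles)             # 20 cycles at this assumed yield
```

With this assumed yield the gas needs 20 generations of star formation to reach roughly solar metallicity, in line with the "at least 20 cycles" figure above.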
Because of the way the binding energy per nucleon works, only elements up to iron can be produced in the conventional way inside of stars. Anything heavier than iron needs to be produced in another way because fusing past iron is an endothermic reaction and requires huge amounts of energy. Some heavy elements are produced in supernovas but there is a subtle problem with that. While there certainly is enough energy in a supernova explosion to produce the heavier elements, most of the mass that is blown off in a supernova is hydrogen. It would take an immense amount of energy and a string of complex, and highly improbable, reactions to convert that much hydrogen into elements as heavy as gold and lead.

In nuclear physics there are two processes which can produce heavier elements, named the r-process and the s-process (unimaginatively the r and s stand for rapid and slow respectively). The s-process takes less energy and a much lower neutron flux, and can happen over long time scales. In the s-process heavy elements are built up slowly one neutron at a time, and allows for neutrons to decay into protons.

The r-process requires huge amounts of energy, and a truly astronomical neutron flux. A parent element is bombarded with a huge number of neutrons to make an extremely unstable isotope. The only thing keeping it from decaying into smaller elements is the rate at which neutrons are bombarding the nucleus. While a supernova has enough energy for the r-process, there is a distinct lack of neutrons to achieve the neutron flux necessary for the r-process to take place. It does happen, just not at a high enough rate to explain the amount of gold, lead, uranium, and other really heavy elements we observe in the universe. So while normal stellar processes can explain the carbon, oxygen, and nitrogen we see, and novas and supernovas can explain the amount of aluminum, iron, nickel, and zinc we see, neither of those can explain the amount of gold, silver, lead, and uranium we see.

This is where merging neutron stars come in. In the collision there certainly is enough energy for the nucleosynthesis to take place, and because there are two massive sources of neutrons being ripped apart, the problem of meeting the minimum neutron flux is solved. But up until now we had no hard confirmation of neutron star mergers, much less evidence of r-process production of heavy elements. It has been suspected for years, but only with the LIGO detection and the followup observations of the merger remnant has this been confirmed. With the detection of r-process reactions in the remnant of the merger we can now conclude that almost all of the gold, uranium and other very heavy elements come from neutron star mergers.

Below is an updated periodic table of elements that shows where each element comes from. Some come from more than one source, but you can see just how many elements were detected in the neutron star merger. The purple shows elements from neutron star mergers. It is much more than gold. This is why the detected merger was so important. It showed us where many of the heavy elements came from like gold, silver, lead, platinum, iodine, bismuth, tin, uranium, and many more.
From Wikipedia. You can see a larger version here.
For my own research this does not change what I am doing. While most very heavy elements come from neutron star mergers, most metals come from normal stellar processes that are already accounted for in my models. The detection of the merger does not change the overall metallicity, but it does slightly change the relative ratios of the different metals. But this change is not significant enough to impact what I do. The overall metallicity is extremely important, but the heavy elements are still too rare to make any difference. This could be more relevant to those who work on rocky planet formation, and also nucleosynthesis in interstellar space. But my work does not get down to that level of refinement. I work on fairly large objects where individual stars, and even supernovas, are below the level of my resolution. So while it is exciting, it does not affect my work directly, but at some point someone may provide a slight modification to some of the models that I use that may change a few of the minor outputs.

Tuesday, September 10, 2013

Science Problems in the Kolob Theorem

A number of people have asked me to expound upon my first and second reviews of the book The Kolob Theorem (KT). I am hesitant to do this because there are many things in the KT that are obviously incorrect to me, but for those who do not spend a good portion of their time studying and learning astronomy then these errors are not so obvious. Thus it may take a bit of explaining, but if you are interested then read on.

[Again a strong note: I will not speak about the theological implications of the Kolob Theorem. This is only to point out that the book uses some very shaky science to establish its claims. I will again point out that this book was not written by an astronomer. There is a reason why no LDS astronomer has written a book like this, and that is because we recognize that we do not know enough about God, or the universe for a book like this to be written.]

The main text of the KT starts on page 24 (at least in the version that I have access to, linked above). There are a few pages of introductory material before, with some pictures and I will get to those, but my analysis starts on page 25.

I will start near the bottom with the quote by Fred Hoyle. First off, Fred Hoyle was a well known astronomer in his day (he is credited with inventing the phrase "the big bang") but between the publication of his book, Frontiers of Astronomy, in 1955 and the publication of the KT in 2005, our understanding of astronomy changed more in those 50 years than in the previous 200 years. Yes it really has changed that much. So to rely on an astronomy textbook published in 1955 to establish a speculative theory in 2005 automatically places the KT on shaky ground.

Quoting Fred Hoyle, Dr. Hilton states:
"The stars in the elliptical galaxies and the stars in the nuclei of the spirals are old stars like the stars in the globular clusters. In contrast, the highly luminous blue giants and super giants are young stars. Young stars are found only in the arms of the spirals."
Our theory would require such a distinction, for the stars in the nucleus must be of a celestial type created first and those of the outer regions of a terrestrial or telestial type and created later.
So the structure that Dr. Hilton sets up for his first corollary, that is central to his entire theory, requires older stars created first to be in the center of galaxy with progressively younger stars as you move out from the center. While it is true that there are many old stars in the center of the galaxy, there are also many old (and in some cases older) stars out in the disk of the galaxy away from the center. The question of where stars form, and how many and how fast they form is still a major area of research. But to illustrate the point here are two pictures of galaxies that are actively forming stars in their center regions.
NGC 3079: The center of the galaxy is an active star forming region. The star formation is actually so strong that it is pushing gas from the center out of the disk of the galaxy. Image credit: NASA and G. Cecil (UNC, Chapel Hill).
M 82: This is actually a composite of images taken from three different telescopes. The green-yellow is from the visible light, the blue is X-rays, the red is infrared. The plane of the galaxy goes from bottom left to top right, but the bright red and blue that is perpendicular to it is hot gas that has been blown out of the galaxy from recent star formation in the center. Image credit: NASA/JPL-Caltech/STScI/CXC/UofA/ESA/AURA/JHU.
As can be seen from the above images there is star formation (and A LOT of it) that happens in the center of the galaxy. Despite what Fred Hoyle states, young stars are not only found in the spiral arms of galaxies. There are plenty of young stars there, but there are even more young stars in the center of galaxies; it's just that they are packed closer together and are mixed in with many more old stars. As a matter of fact the oldest stars that we can track are found in globular clusters (such as M 80), which are most definitely not in the galactic center.

Now on to page 26! (Yes, I have only covered one page.)

Dr. Hilton quotes astronomer Joseph Ashbrook to make the case that there is a dense cluster of old stars in the center of the Milky Way. He states:
"The core of the Milky Way Galaxy would also possess a tightly packed system of ancient, huge stars in the very heart of the galaxy".
Two things here, he quotes Ashbrook, who may have been a great astronomer (and I can detect nothing wrong with anything Dr. Ashbrook says), but the referenced paper comes from 1968, and the title of the paper refers to Andromeda as a "nebula", not a galaxy. I will get to that in a moment. But the main problem here is that Dr. Hilton is confusing the fact that there is a high amount of stellar mass in the center of the galaxy with there being very massive ("huge") stars in the center of the galaxy. At about this point I probably lost about 98% of my readers and your eyes are glazing over. Stay with me.

So what is the difference between a high amount of stellar mass and a high number of massive stars? Let me explain it like this. Consider two groups of people, groups A and B. In group A the total mass of the group is 20,000 lbs. In group B the total mass is 15,000 lbs. Which group is "bigger"? It depends on what you mean by "bigger". It turns out that group A is a group of 300 elementary school students, and group B is an NFL football team. Which group is "bigger" now that you know that? Overall the school kids are "more massive" than the football players, but taken individually the football players are 2-8 times more massive than the children. So just because there is a lot of mass in a group of people doesn't mean that the individual people are massive. It means that there could be a lot of them.
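The arithmetic behind that comparison, spelled out (the roster size of 53 for the football team is my own assumption, added for illustration):

```python
# Total mass vs. average individual mass for the two groups.
# The NFL roster size of 53 is an assumed number for illustration.
group_a_total_lbs = 20000   # 300 elementary school students
group_a_count = 300
group_b_total_lbs = 15000   # one NFL football team
group_b_count = 53

avg_a = group_a_total_lbs / group_a_count   # average student
avg_b = group_b_total_lbs / group_b_count   # average player

print(f"average student: {avg_a:.0f} lbs, average player: {avg_b:.0f} lbs")
print(f"a player is about {avg_b / avg_a:.1f}x more massive than a student")
```

Group A is "bigger" in total while each member of group B is "bigger" individually, which is exactly the distinction between a lot of stellar mass in the galactic center and individually massive stars.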

The same thing with stars. Just because there is a lot of stellar mass (Dr. Hilton quotes the figure of 10% of the total mass of the galaxy) in the center of the Milky Way, doesn't mean that the individual stars are "huge". As a matter of fact, having "huge" stars would actually be detrimental to his theory, because it turns out that the youngest, most recently formed stars are the most massive, while the oldest, slowest burning stars are the smallest. It seems counterintuitive, but this is precisely the type of mistake that Dr. Hilton makes again and again that undermines his theory. So the "ancient" stars cannot be "huge". In fact the oldest stars would probably be about the same size as our sun, just a lot older.

Next Dr. Hilton moves into a black hole and never makes it out. He gives a definition of a black hole as:
"A black hole is defined as a compact energy source of enormous strength of the order of a billion solar masses".
How can I explain how this definition sounds to a professional astronomer? Assume you had just finished reading an article about an election in England and read that a new prime minister had been appointed. You turn to me and ask, "What does that mean, 'to be appointed prime minister'?" And I respond, "That is when the Pope comes and crowns the prime minister and puts him on the throne of England." There happens to be a British citizen who overhears this who promptly goes into convulsions and runs screaming from the room, yelling something about "Ignorant Americans". As ridiculous as my statement that a prime minister is "appointed" by being crowned by the Pope is, Dr. Hilton's definition of a black hole is just as ridiculous. It's the kind of thing that keeps astronomy professors up at night fearing that their students may go out into the world and give definitions like that. If you want to know what a black hole is try Wikipedia.

When it comes to black holes there are two types: stellar black holes, with masses from a few to a few tens of times the mass of the sun, and super massive black holes, with masses ranging from 1,000,000 to 10,000,000,000 times the mass of the sun (1e6-1e10 Msun). Stellar mass black holes are all over the place, while super massive black holes are much rarer. It is assumed that at the heart of every galaxy, dwarf galaxy, galaxy remnant, compact dwarf galaxy, and ultra compact dwarf galaxy is a super massive black hole. Dr. Hilton later wonders if it is possible that there is a super massive black hole at the center of the Milky Way. Well he doesn't have to wonder since we have already found it! In fact we found it in 1974! (For someone who uses out of date materials he sure missed this one.)

Here is a plot of the orbits of the stars surrounding the Milky Way's central black hole (known as Sagittarius A*):
Source: Wikipedia.
So continuing on, Dr. Hilton tries to tie in the motion of the stars around the central black hole to "rotation" (a key word from the Book of Abraham, he is trying so hard to make the connection, but this isn't going to do it despite his best efforts). He states:
"One measurement of the radial velocities near the nucleus of Galaxy M 84, in the area of Virgo, shows a speed of rotation of 400 kilometers per second at a distance of only 25 light years from the center."
Wow! 400 km/s, that sounds fast! For comparison the sun is doing a positively leisurely 220 km/s in its gentle stroll around the Milky Way. But just a second, where did this 400 km/s number come from? These velocities were measured in M84 (also known as NGC 4374) which is about 60 million light years away, i.e. too far to resolve individual stars. So this velocity is more likely a velocity dispersion, that is, the spread of velocities averaged over many thousands of stars; it is not the actual velocity of the stars. The max velocity of an individual star is about half of that, so about 200 km/s, which is about how fast the sun is moving. He tries to make something of this much later (chapter 5), but the motions of stars get very complex, and I'm not going to get into that. Let's just say that you have to distinguish between the motion of individual stars and the motion of the overall galaxy, and sometimes that can be a very tricky thing. Think of the difference between the motion of individual water molecules and the motion of ocean waves. They are not necessarily the same thing.
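The difference between a bulk velocity and a velocity dispersion can be seen in a quick simulation. All the numbers here are invented for illustration; the point is only that the dispersion is a statistic of the whole population, not the speed of any individual star.

```python
import random

random.seed(0)

# A toy "galaxy": one bulk (systemic) velocity shared by all stars, plus a
# random spread of individual velocities around it. Numbers are made up.
bulk_velocity = 1000.0                            # km/s, the galaxy as a whole
stars = [bulk_velocity + random.gauss(0, 200)     # ~200 km/s spread per star
         for _ in range(10000)]

mean = sum(stars) / len(stars)
dispersion = (sum((v - mean) ** 2 for v in stars) / len(stars)) ** 0.5

# The mean recovers the bulk motion (roughly 1000 km/s) and the dispersion
# recovers the spread (roughly 200 km/s).
print(round(mean), round(dispersion))
```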

On page 26 he mentions "Galaxy 87"; I assume he means M87, or Messier 87. Messier was an astronomer who spent his time looking at the stars and made a catalog of all the interesting things he saw in his telescope that weren't stars or planets. First published in 1771, his list of objects that kept interfering with his hunt for comets eventually grew to 110 "Messier objects". Little did he know that he had made one of the most important astronomical object catalogs, one that would define observational astronomy for the next 200 years.

Wow, we are only 3 pages into the text and I already want to quit. I'll mention one more thing.

On page 27 he mentions a star 3000 times the size of the sun. He uses the word size, but fails to notice the fine distinctions he just ran roughshod over. The "3000 times the size of the sun" here obviously refers to physical size, meaning radius, and not mass. There are no stars out there with a mass 3000 times the mass of the sun. Some astrophysicists think that you can get up to 70 or 80 times the mass of the sun, but generally the upper cutoff is about 40 times the mass of the sun, and those stars are very few and far between. So a star that is "3000 times the size of the sun" must be 3000 times larger in physical size, or radius. With stars, it is very tricky to match physical size with mass. They don't always correlate the way you would think. As I mentioned previously, this is precisely the type of mistake that Dr. Hilton makes again and again, and it undermines his theory. You see, black holes are the smallest things out there. The vast majority of them are smaller than the earth, smaller even than Pluto. It's just that they have a lot of mass in a very small space.

OK to finish off I'll just leave my remaining notes in their raw format. I only got to page 33 (starting on 24) before I gave up and decided that if I went on this post would be way too long.

p. 28 J. Reuben Clark quote (the concept of a galaxy has changed since then; other galaxies were still known as extragalactic nebulae until the mid 1950's, and there are even a few references to them in the 1960's; the concept of a galaxy was not pinned down until the 1960's.)

p. 29 A galaxy is self gravitating. It's a concept that has its finer issues.

p. 33 Fred Hoyle again; yes there is dust in the center! Star formation! Lots of it. You need dust to form stars. No dust, no new stars, it's that simple. Wherever there is dust there are stars forming. Wherever stars are forming there is dust. Dust in the galaxy is a very complex issue. It is nowhere near as simple as he makes it out to be. There are entire books written on dust in the Interstellar Medium.

Andromeda--How the picture was made--mention false coloring
http://apod.nasa.gov/apod/ap040718.html
http://www.robgendlerastropics.com/M31Page.html
http://www.robgendlerastropics.com/M31Pagegrey.html

Link to false coloring of images.
http://hubblesite.org/gallery/behind_the_pictures/meaning_of_color/

In the end the science issues are so dense and numerous that it is impossible to extract them from the book and from his theory. The only thing to do is to scrap the whole thing and do something else.

References

Joseph Ashbrook, "The Nucleus of the Andromeda Nebula," Sky and Telescope, February 1968.
Bok and Bok, The Milky Way, 5th Edition, Harvard University Press, Cambridge, MA, 1981.
Fred Hoyle, Frontiers of Astronomy, Harper & Brothers, New York, 1955.

Sunday, March 3, 2013

Yes I have been neglecting my blog

It is true, I have been neglecting my blog. I have plans for brief reviews of ancient astronomy in the Book of Abraham, more posts about names in the Book of Mormon, and ancient rituals of conversion detailed in the Book of Mormon. But right now my time has been consumed with getting my code to work for my research. The other day I cleaned up two files that I had been working on for some time and removed all the lines of code that I didn't need. Altogether I think I removed over 4,000 lines of code. It took me a few days to work through it all and determine what I needed and what I didn't.

In the meantime I uploaded a video of one of my simulations that I did as a proof of concept. I did this simulation to show my advisor that my fixes to the code are working and that more time should be devoted to the approach that I have been taking. He was impressed.



Here is the description that goes along with the video:

This is a short video that I made of a simulation that was done as a proof of concept. It shows a fractal distribution of density in the center of a star forming galaxy. This simulation is done using the MHD code Athena. The box size covers 1 kpc in each dimension and has a grid size of 128 in each dimension. The star forming region injects mass and energy into the center region of the simulation.

The proof of concept is for a flux fixer that allows the code to handle high kinetic energy situations where the total kinetic energy is a significant fraction of the total energy.
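To give a feel for why kinetic-energy-dominated flows are numerically delicate, here is a toy illustration (this is only a sketch of the general issue, not Athena's actual scheme or my fix): in a total-energy code the internal energy is recovered by subtracting the kinetic energy from the conserved total, and when the kinetic energy makes up nearly all of the total, that subtraction is at the mercy of floating point round-off.

```python
def internal_energy(e_total, rho, v):
    """Recover internal energy density from conserved quantities."""
    e_kinetic = 0.5 * rho * v**2
    return e_total - e_kinetic

rho = 1.0        # density (arbitrary code units)
v = 100.0        # bulk speed
e_true = 1e-10   # true internal energy: negligible next to kinetic

e_total = 0.5 * rho * v**2 + e_true
recovered = internal_energy(e_total, rho, v)
fraction = (0.5 * rho * v**2) / e_total

print(f"kinetic fraction of total energy: {fraction:.14f}")
print(f"recovered internal energy:        {recovered:.3e}")
```

Here the recovery still works, but push the kinetic fraction a little closer to one and the subtraction returns round-off garbage or a negative internal energy, which is the situation a flux fix has to handle.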

The visualization was done using Paraview.

Monday, December 17, 2012

Misconceptions of Misconceptions of Physics

Finally I am posting something about physics! Don't leave just yet. I will try to keep it on a general level.

On YouTube there is a channel that I like to watch called MinutePhysics. Normally the short videos are pretty good and the channel creator does a good job at explaining some common (and some uncommon) physics in a short and intuitive way. So I was rather surprised when he posted a video about common misconceptions in physics that itself perpetuated common misconceptions in physics. Here's the video for you to watch so I can refer to it.


There are two things that are problematic in this video that I want to address. I will give a short explanation here and then a longer explanation further down.

  1. Teaching Newtonian gravity is not lying. He is trying to make the point that light, even though it is massless, is still affected by gravity, which Newtonian gravity does not predict. True, but he makes his point by saying that teaching Newtonian gravity is lying to students. Newtonian gravity is still alive and well and is fundamental to almost all undergraduate and even graduate (and post graduate) areas of study. The idea that teaching Newtonian gravity is wrong is a big misconception, and this video simply perpetuates it.
  2. Just because you have an equation that you can stick numbers into, and a calculator to crunch it out to an arbitrary number of digits of precision, does not mean that the result has physical meaning. I have to fight this misconception every semester with almost all of my students. It is harder to fight this misconception than it is to fight the "misconception" of a Galilean vs. a Lorentz transformation.

1. Teaching Newtonian gravity is not lying.
The misconception that Newtonian gravity is fundamentally wrong, and therefore useless, is so prevalent that when mostly well informed individuals ask me about my research they are shocked to learn that I still use Newtonian gravity. They usually say something along the lines of, "I remember learning about Newton in high school/college, but you are probably way beyond that." They would be even more shocked to learn that most of the cutting edge research in physics uses Newtonian gravity and not relativity. It seems like every semester I have at least one or two students who express the idea that everything undergirding Newtonian gravity is wrong, and that therefore all the collective wisdom, intuition, insight and knowledge of people who have used Newtonian gravity, or even Newtonian physics in general, is somehow invalid.

2. An equation and a calculator do not make reality.
Every semester I have to fight a major misconception with my students. I don't mean the pre-meds who take the introductory physics classes, or the "I don't know what I'm doing with my life, but I have to take this class to get some sort of degree" students. I mean physics majors who are in their senior year and who have been through many physics classes already. I have to fight the misconception that just because the students have an equation, and a calculator or computer that can evaluate it to an arbitrary number of digits, the result, to that precision, has meaning for the real world. This is a misconception that physicists of all stripes have to fight every day. And unfortunately this short video perpetuates this myth.

Let's take the sheep example. He says that if you have a train going 2 mph, and a sheep on the train moving forward at 2 mph with respect to the train, then
2 mph + 2 mph = 4 mph
which he promptly declares to be false. He then proceeds to give a short explanation of how to add velocities in special relativity and produces the equation for doing so (for those who want to know, he is merely pointing out the difference between a Galilean and a Lorentz transformation: one assumes light has no speed limit and the other does. But by his own definition what he presents is also false, since a Lorentz transformation is also incomplete, so he merely traded one misconception for another. Fail.).

But, according to him, if we want to be honest we have to use the special relativistic equation and see that the sheep is only moving 3.999999999999999964 mph. That is a difference of 0.000000000000000036 mph. The problem is, how did he measure that? No really! That is a perfectly valid question in physics; I am not just trying to ask a trite, funny question. If he claims that the sheep is actually moving 0.000000000000000036 mph slower than it should because of special relativistic effects then he will have to actually measure that. The problem is (as many, many, many, many of my professors over the years have pointed out) that the sheep is made up of atoms. You can't calculate something, get a result, and say, "This is how the world works," while ignoring the fact that everything is made up of real matter. You can't separate out that fact or you will end up in trouble.

To give you an idea of why this is problematic let's take our result, the difference of 0.000000000000000036 mph, and see what it means. Suppose the sheep keeps walking on the train for one hour; what difference in distance traveled would that velocity difference produce?
0.000000000000000036 mph x .44704 (m/s)/mph = 1.61e-17 m/s
(that's meters per second instead of miles per hour)
1.61e-17 m/s * 3600 s = 5.8e-14 m
So if you let the sheep walk on the train and let the train go for one hour, then the difference you would expect between a relativistic and a non-relativistic calculation would be 5.8e-14 m, or about 60 femtometers. To give you an idea of how small this is, that is about 4 times larger than the nucleus of a uranium atom. Not 4 times larger than a uranium atom, 4 times larger than the nucleus, which is very, very, very small. This distance is still about 3000 times smaller than the radius of an atom.
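That arithmetic is easy to verify. One amusing wrinkle: the correction is so small that ordinary double precision floating point rounds it away completely (1 + 9e-18 evaluates to exactly 1.0), so this sketch uses exact rational arithmetic to get at it:

```python
from fractions import Fraction

MPH_TO_MS = Fraction(44704, 100000)    # exact: 1 mph = 0.44704 m/s
C = Fraction(299_792_458) / MPH_TO_MS  # speed of light in mph, exact

u = v = Fraction(2)                    # sheep and train speeds, in mph

galilean = u + v
lorentz = (u + v) / (1 + u * v / C**2)  # relativistic velocity addition

diff_mph = galilean - lorentz
diff_m = diff_mph * MPH_TO_MS * 3600    # gap built up over one hour, meters

print(f"difference: {float(diff_mph):.2e} mph")
print(f"separation after one hour: {float(diff_m):.2e} m")
```

The difference comes out around 3.6e-17 mph, and the one-hour gap around 5.7e-14 m, a few tens of femtometers, matching the numbers above.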

So is it wrong to use Galilean transformations and Newton's laws? No. If you can find me a wooden meter stick that has tick marks that go down into the femtometer range then you could say that Newton was wrong. But if you can't actually measure that accurately then it is wrong to say that the standard way we think about adding velocities is wrong. Just because someone came up with an equation, and you can stick the numbers into a calculator and get a result, does not mean that it has any real world interpretation.

Now, as a physicist I am well aware of relativity, but this is an abuse of it. To say that Newton (and Galileo) were wrong because they didn't have access to a meter stick which measured femtometers is itself wrong. To ignore real world considerations and then call people who have to (and had to) deal with those real world considerations wrong is to ignore something fundamental about physics, namely that we live in a real, physical universe. And you can't ignore that fact. Even when teaching relativity.

[PS: If you want to see another example of abuse of equations, consider "Why Pigs Don't Diffract Through Doorways".]

Sunday, October 2, 2011

Quick thought on particle-wave duality

Just a quick thought on particle-wave duality today.

So how should we think about fundamental particles? Are they points in space or are they waves of probability? One person I read recently said that it just depends on the type of experiment. If you want a wave then make a wave experiment, but if you want a particle then make a particle experiment. This may seem like a simple explanation except for the fact that in some cases they behave like particles in wave experiments and also behave like waves in particle experiments.

So how do I think of particles in the particle-wave duality debate? The way I see it, they behave like waves when they travel, but they behave like particles when they interact with other particles. So when I explain it to students I say, "It travels like a wave, but interacts like a particle."

As a side note, this is an interesting way of looking at it since it would seem that particles travel like they have mass, but when they interact, they interact like they have no mass. Hmmm... so the wave nature of particles give them mass? There's an interesting thought.

Wednesday, December 30, 2009

Optical Absorption

For one of the classes that I teach, the students use a device called a spectrophotometer. Basically this device works by shining a light through a sample and measuring the amount of light that gets through the material. This device is more than a fancy light meter, which simply measures intensity. It can break down the transmitted light into specific wavelengths and thus measure the amount of light transmitted at each wavelength. By measuring the amount of light transmitted we can figure out the absorption, which can tell us something about the specific properties of the material.

In this case my students are trying to determine the band gap energy for different semiconductors and also whether or not the band gaps are direct or indirect. I plan on posting about this lab at some point, but for now I just wanted to mention one thing that I measured. While I was in the lab one day I put my glasses in the detector and measured the absorption of the plastic lenses. The results I got are shown in the graph below (click on it to view it bigger).
The wavelengths that I measured extended from 800 nm (infrared) to 200 nm (ultraviolet). In the graph I only show down to 400 nm because below that my glasses absorbed too much light and the detector could not get accurate readings. Still you can see a sharp increase in the absorption as you approach 400 nm. Other than the region below 400 nm my glasses did not absorb much light. This can be seen by the low measurements that are all below 0.05. To give you an idea of what this means, all throughout the visible spectrum (~400-720 nm, except for the tiny bit close to 400 nm) my glasses absorbed less than about 10% of the light. An absorption rating of 1 would mean that an object absorbs 90% of the light, a rating of 2 would mean it absorbs 99%, a rating of 3, 99.9%, and so on. Typically an absorption rating of 7 (99.99999% absorbed, and yes that can be measured) or higher is considered fully opaque.
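For reference, the conversion between the absorption rating (absorbance) and the fraction of light that gets through is T = 10^(-A). A short sketch:

```python
def transmitted_fraction(absorbance):
    """Fraction of light transmitted for a given (base-10) absorbance."""
    return 10 ** (-absorbance)

def absorbed_percent(absorbance):
    """Percent of the incident light that does not get through."""
    return 100 * (1 - transmitted_fraction(absorbance))

for a in (0.05, 1, 2, 3, 7):
    print(f"A = {a:>4}: {absorbed_percent(a):.5f}% absorbed")
```

An absorbance of 0.05 corresponds to about 89% of the light getting through, which is why readings below 0.05 mean the lenses are essentially transparent.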

As you might notice, the absorption of the plastic in my glasses is not constant. There are small bumps or waves in the graph. These bumps are a result of the thickness of my glasses and some slight internal reflections inside the material. These bumps are useful because from their size and shape we can figure out the index of refraction of the material that my glasses are made from. That would be useful information if I were the one designing the glasses, since the index of refraction determines the corrective ability of the lenses and the curvature needed to make them work. Unfortunately I did not put my glasses into the detector straight, which meant that the path of the beam was not normal (perpendicular) to the surface of the lens. This makes the measurement more difficult (i.e. it would take more work than I'm willing to put into it), so I was not able to figure out the index of refraction for my glasses.

Sunday, March 22, 2009

Self Correcting Theories

Previously I have commented on differences between Physics and Philosophy. Recently I noticed another example of the difference between the two: Physics is self-correcting, while Philosophy is not. To show this I will use an experience I had recently while grading homework for a class that I am a TA for.

The homework problem involved finding the velocity of a bicyclist as he pedaled along. The students were given a constant power output for the rider and then had to find the rider's terminal velocity. They went through a process where they solved some basic equations, then added in air resistance and solved the more complex equation numerically (which means they wrote a computer program to solve it).

One of my students solved their equation and found the terminal velocity and then displayed a graph of their velocity versus time, or how their velocity changes over time:

If you look at this graph and understand what it is showing, you will realize that the student made a mistake. To understand the mistake, allow me to interpret this graph physically. At the beginning of the ride the biker starts at rest (the extreme left of the graph, where t = 0 and v = 0). They start to speed up (velocity is positive) and they continue to ride faster and faster. At some point their speed maxes out and they begin to slow down. Up to this point everything is fairly normal for anyone riding a bike, but what happens next according to the graph is not. The rider continues to pedal at the same rate as before, but because of air resistance they slow down more and more (I must point out that there is no wind) until after about 27 seconds they begin to travel backwards! And this happens simply because they are riding through air, with no wind! Obviously there was a problem with their equation, and upon inspecting it I quickly found the error: they had forgotten one variable in a single term of their equation, and that made all the difference.
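For comparison, here is a minimal sketch of the kind of numerical solution the assignment called for (the rider's numbers here are invented, not the actual homework values). With constant power P against quadratic air drag, m dv/dt = P/v - ½ρC_dA v², and the velocity should rise monotonically toward the terminal velocity v_t = (2P/(ρ C_d A))^(1/3), never turning negative:

```python
# Invented but plausible numbers for a rider on flat ground, no wind.
P = 400.0      # rider power output, W
m = 80.0       # rider plus bike mass, kg
rho = 1.2      # air density, kg/m^3
CdA = 0.35     # drag coefficient times frontal area, m^2

v_terminal = (2 * P / (rho * CdA)) ** (1 / 3)

# Forward-Euler integration of m dv/dt = P/v - 0.5*rho*CdA*v^2.
dt = 0.01
v = 0.1        # start nearly at rest (P/v blows up at exactly v = 0)
for _ in range(int(120 / dt)):           # two minutes of riding
    a = (P / v - 0.5 * rho * CdA * v**2) / m
    v += a * dt

print(f"terminal velocity: {v_terminal:.2f} m/s")
print(f"velocity after two minutes: {v:.2f} m/s")
```

The integrated velocity climbs to the terminal value and stays there; dropping a variable from one term, as the student did, is exactly the kind of change that can flip the balance between terms and send the solution backwards.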

My point with this is that when a mistake was made we could look at the results and compare them with reality to see if they made sense. In other words we had a check, or test, to make sure we had done it correctly and that our theory was correct. In the case of Physics, no matter what the theory is, the science itself has a self correcting mechanism built in. Whenever a mistake is made something can be done to test it and to correct the mistake. The critical test comes in comparing our calculated results to the physical world. All our theories and calculations mean nothing if they cannot predict what is observed in the physical world. It does not matter how "elegant" a solution is; it is of no worth if it does not correctly demonstrate some physical principle.

Philosophy on the other hand does not have this ability. As a matter of fact, the vast majority of Philosophy openly denies and/or questions the validity of the very thing that can show whether or not an idea is correct (such as Descartes' method of doubt and Kant's noumenon). This means that Philosophy, as a whole, does not have any mechanism to check and to self-correct incorrect theories. With no way to check whether or not something is correct or true, it is no wonder that there is so much confusion in Philosophy. If the same were true of physics we would live expecting people to ride backwards on their bikes, rocks to fall up, and electricity to flow into wall sockets. We would lose all sense of order and normalcy in the world. If you wonder why Philosophy is so hard to understand, or why it returns ideas that are inconsistent with experience, it is because it has divorced itself from the very thing that would give validity to its ideas.

It is like quitting your job and then wondering why you are poor.

Sunday, October 19, 2008

My Plan

I thought that I should lay out my plan for research in the near future. Part of my research for my senior thesis included looking into a way of explaining flat rotation curves without invoking dark matter as the source of the gravitational potential. I was looking into a claim that by using GR the flat rotation curves could be reproduced without including dark matter as a correcting term. While the particular method I looked into had some problems, there was one paper I stumbled across showing that using GR you can get a slight correction to the rotation curves before dark matter needs to be included.

In conjunction with this, there was a paper published by several astronomers giving a correction to the amount of normal baryonic matter present in a galaxy. They did this using surveys comparing the number of face on galaxies to the number of edge on galaxies and saw that there was a discrepancy: there were more face on than edge on. Thus they concluded that there must be more dust in galactic halos than previously thought. This adds another correction to the mass, and to the assumed amount of dark matter, in a galaxy.

So my plan is to take all these corrections, add them up, and see if they amount to a significant correction to the amount of dark matter that must be present in a galaxy. If this holds then it will also correct the amount of normal and dark matter in galaxy clusters, which in turn would ripple through everything that depends on those mass estimates.
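To show what the baseline problem looks like, here is a toy calculation (illustrative numbers only, not a fit to any real galaxy): for the visible matter alone, the circular velocity v = sqrt(G M(&lt;r)/r) should fall off roughly as 1/sqrt(r) outside the luminous disk, whereas observed rotation curves stay roughly flat at a couple hundred km/s.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # one kiloparsec, m

# Toy model: lump the visible matter into 1e11 solar masses enclosed
# well inside the radii sampled below.
M_visible = 1e11 * M_SUN

def v_circular(r_kpc, mass):
    """Circular velocity (km/s) at radius r_kpc for enclosed mass `mass`."""
    r = r_kpc * KPC
    return math.sqrt(G * mass / r) / 1000.0

for r in (5, 10, 20, 40):
    print(f"r = {r:2d} kpc: v = {v_circular(r, M_visible):5.1f} km/s")
```

The predicted curve drops by a factor of sqrt(8) between 5 kpc and 40 kpc, from roughly 290 km/s down to about 100 km/s, while real curves stay nearly flat; that gap is what dark matter, or any correction to it, has to account for.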

There is also a theory out there that attempts to explain dark energy in a new way. Essentially the argument goes that we might actually live in a "cosmic bubble" of low density space. If this were true then it would create the illusion of an accelerating universe. So it would be possible to explain what we see in terms of what we already know instead of making an appeal to some unknown type of energy. The idea that led to the introduction of dark energy was the Copernican idea that the universe is for the most part homogeneous, that is, where we live does not have any unique properties that make it different from the rest of the universe. This idea continues on with the statement that the universe is homogeneous on a large scale.

But what if there are some local inhomogeneities? This leads me to another guiding principle: always try to explain what we see in terms of what we know, without making an appeal to some unknown stuff with unknown properties. What we know is stronger than what we do not know. A known is (almost) always a better explanation than an unknown. So if we make an assumption that leads us to conclude that there is something unknown governing the universe and determining what we see, then perhaps we need to rethink that assumption, or how we are applying it.

So I plan to investigate these things and see if I can't make sense of what we see in terms of what we already know without making an appeal to things unknown.