Sunday, February 18, 2018

The Fundamentals of Philosophy

This is by no means a comprehensive introduction to philosophy, but it covers the basics. It is not what you would get by taking an intro philosophy course, mostly because no philosophy course typically opens with an overview like this. These topics would be covered, but never in a simple, systematic way.

If physics is the study of how things move, and how the universe works, then philosophy is the study of how we think, and how we view the universe.

There are three main branches of philosophy: Metaphysics, Epistemology, and Ethics.

Metaphysics deals with how we fundamentally understand how the universe works, and what makes up the universe. This sets what we consider to be "allowable". This includes things like whether matter is made up of atoms, strings, the four elements, or plum pudding. But it also includes how we view consciousness, the mind, and how we think.

If you want to know the metaphysics of a person then ask them to define, or describe consciousness. The answer they give will not tell you anything about what consciousness actually is, but it will teach you about their metaphysics.

Metaphysics can be broken down into several (sometimes non-exclusive) broad categories. Dualism is the idea that there are two (or dual) components to reality: the material, or physical, world, and the world of "the mind," or spirit, or rational thought. Monism is the idea that there is only one nature and that both matter and the mind derive from the same source. Materialism is the idea that everything is the result of the fundamental laws of physics and the interactions of particles. Materialists deny that "the mind" is a separate thing apart from the firing of neurons in the brain. Materialists are by definition monists, but not all monists are materialists. One example of non-materialist monists is Mormons. Classical Christianity, Islam, and a few other worldviews are fundamentally dualist.

Epistemology deals with how we know, and how we come to know about the world. Perhaps Professor Truman G. Madsen, who spent five decades dealing with philosophical questions, put it best when he said, "There are really only five main modes that have been appealed to in all the traditions, philosophical or religious: an appeal to reason, an appeal to sense experience, to pragmatic trial and error, to authority—the word of the experts—and, finally, to something a bit ambiguous called 'intuition.'"

Science falls squarely under the umbrella of epistemology. Any discussion about what science fundamentally is ultimately rests on an endorsement of a particular epistemology, and nothing else. On a fundamental level, science does not have a preferred metaphysics* or ethics.

Logic is a subset of epistemology, and is not synonymous with it.

Ethics deals with what we value. Your ethics determines how you interact with other people and animals, and occasionally things. This area of philosophy is usually the messiest and most contentious.

Ethics is strongly related to Aesthetics, since what we value is generally what we find beautiful, and what we enjoy is what we value.

A huge portion of religion deals with ethical questions.

These three, Metaphysics, Epistemology, and Ethics, are all related to each other, mutually supportive, and occasionally at odds with one another. That is, our metaphysics determines our epistemology and ethics, while our epistemology informs our metaphysics and ethics, and our ethics reveals our metaphysics and epistemology. One cannot have a particular metaphysics without a corresponding epistemology and ethics, because once one is set the others will automatically be defined.

The short descriptions I have given above are by no means exhaustive, nor are the examples I gave the only ones. The key is to know that there are these three parts to philosophy, and that they are interconnected, related, codependent, reinforcing, and co-determining. They are also by no means static. The particular metaphysics, epistemology, and ethics of a person will definitely change over time.

Also it is possible, and very likely, for someone to have a particular metaphysics, epistemology, or ethics and not be able to explain or articulate it, any more than most people could give a complete breakdown and accounting of their diet, including any and all nutrients. It is also possible for the particular implementation of one of the three to be incompatible with the others (people who smoke may also exercise).

But generally the position of any one of the three will determine the other two. The interrelationships are complex and usually take a great deal of effort to understand.

Most changes in someone's philosophy are subtle and almost imperceptible, but if there is a major shift in one of the three then that will precipitate a reevaluation of the other two.

Doing philosophy correctly can help uncover your own particular metaphysics, epistemology, and ethics. It can show how the particular implementations may be incompatible. For example, if you really believe that everyone is created by God (metaphysics), then that should determine how you treat them (ethics).

We may not realize it but our ethics (and by extension our metaphysics and epistemology) are revealed by our aesthetics. Think about what movies, TV shows, books, stories, blogs, or news articles you like to consume. The kinds of entertainment we like, or the fictional characters we identify with, act as a litmus test for our ethics.

What art is hanging on your wall? Is it realistic, like photographs or hyperrealistic paintings? Or is it abstract? What is the subject matter? All these things can reveal how you fundamentally view the world, and how you think about knowing the universe.

Just as asking how one views consciousness will reveal their metaphysics, what one surrounds themselves with (their aesthetics) reveals their ethics, and their ethics is codependent on their metaphysics and epistemology.

*I stated that science does not have a preferred metaphysics. That is not entirely true. Because science, as an epistemology, requires a corresponding metaphysics and ethics. It's just that the metaphysical and ethical demands of pure science are minimal. Most pronouncements regarding what we "should do" because of science, actually have nothing to do with science as an epistemology. When people make an appeal to "Science", or Science™, they are always, without realizing it, bringing a particular metaphysics and ethics along with them. Their assertions don't actually have much to do with the epistemological method known as science.

Sunday, January 14, 2018

Extreme Skepticism is Not Scientific

Many years ago I was in a research group meeting where we were discussing some astrophysics related idea. One of the other graduate students, referencing a particular paper under discussion, made the comment that some feature observed by astronomers is "apparently" caused by a certain type of star. My PhD advisor stopped the grad student right there and asked, "Apparently? What else could it be? There is nothing else that it could be."

He then went on to make the point that in science we are taught to doubt established explanations, but only if we have a reason to doubt them and have an alternate explanation. In this case he explained that expressing skepticism of the commonly accepted explanation was not warranted because we did not have an alternate explanation. The standard explanation did not have any "apparent" problems; it fit with everything else we know about astronomy, stars, and galaxies. So the impulse to maintain a skeptical attitude was not helpful unless we were willing to provide an alternate explanation. Science was about increasing our understanding, and skepticism for skepticism's sake does not do that. He told us that if we are going to doubt the established explanation, even by throwing in a seemingly innocuous "apparently", then we should have a better, alternate explanation.

So how does this fit with the popular conception of science? Typically science is portrayed as constantly asking questions, doubting previous conclusions, and maintaining a skeptical attitude. As one person put it, "science without doubt isn't science at all."

It is easy to find a plethora of quotes about how science doesn't go anywhere without people doubting, asking questions, and throwing out old ideas. Famous science communicators will proudly proclaim that all the old ideas we once thought to be true have now been shown to be false, and we may eventually overturn everything we now think to be true.

In science classes we emphasize the importance of asking questions, being critical, demanding rigor, and not accepting an explanation "just because". But is that how actual scientists do science? We may say that it is, but when it comes down to it scientists never actually "question everything". They only question one thing at a time, and even then they don't throw it out; they look for an explanation within established parameters. Even Thomas Kuhn's paradigm shifters did not "question everything" and throw out all "false ideas of the past." They worked within a larger epistemological approach that had established norms and rules that they did not try to undermine.

What gets lost when popular science communicators tell the stories of Galileo, Newton, and Einstein is that they weren't right because they questioned fundamental assumptions. They were right because their explanations were better than the alternatives.

Galileo wasn't right because he questioned the established science of the day. He was right because his explanation fit with what others took the time and effort to measure and observe. In some cases Galileo wasn't even "right" until hundreds of years later.

Einstein wasn't right because he "thought outside the box" and questioned the established wisdom. He was right because hundreds of other physicists conducted experiments to check if his theories fit the data better than other possibilities. Some of these tests were at first inconclusive, and had to be redesigned to make the necessary measurements.

When it comes down to it, always questioning things, and never accepting explanations and answers really isn't science. It's just ignorance. Maintaining a constant stream of skepticism is not conducive to science. Offering alternate explanations is. Just doubting is not the stuff of science. You must have a reason to doubt. The received wisdom, or standard explanation must fail in some way. Science happens not when we try to break things, but when we try to fix things that we find to be broken.

Sunday, December 17, 2017

My Favorite Logical Fallacy: The Suppressed Correlative

In a post that I wrote last year I noted how some paradoxes could be resolved if you just considered how there has been a subtle shift in the definition of a word. In the case of the heap paradox the word heap is inherently vague and has no exact number. But the paradox is created when the definition of heap is given an exact number, that is, the word is redefined to include additional information that was not present in the original definition. With the heap paradox the definition is narrowed, perhaps unknown to everyone involved, over the course of the discussion, thereby creating the paradox. It is precisely a paradox because the key word, term, phrase, or idea is modified without the knowledge of those involved.

Redefining words is not necessarily a problem. It is only a problem if confusion and misunderstanding result from the redefinition, or if by redefining the word something in our understanding is lost. The redefinition of words should only help increase understanding, not destroy it.

So if the heap paradox relies on narrowing the definition of a vague word, what about the opposite, extending or broadening the definition? The opposite falls under a group of fallacies related to what is called the correlative. For every defined word there is a correlative, or everything that is not covered by the definition of that word.

For example, the word cat refers to a type of four-legged, furry animal that eats meat. When I use the word cat what I mean by that is generally understood. This includes house cats, mountain lions, tigers, lions, lynxes, panthers, and all kinds of cats. Despite the word cat being well defined there is inherent fuzziness to what constitutes a cat. For example, should a civet fall under the definition of a cat?
A civet. Image from Wikipedia.
What about a mongoose?
Pictures of mongooses. Image from Wikipedia.
At this point it is stretching the definition of the word cat. There definitely are things that are cats and there are things that are not cats, like dogs, horses, rocks, and rivers. Between the two, cats and not-cats, there is a grey area where it is debatable whether or not the word cat applies. The ambiguity at the edge of the definition is not a problem, that is just the nature of language, but it is very clear that there are cats and not-cats, even if the dividing line is not always clear.

With the word cat there is a specific definition that is understood by everyone. Thus there are things that are cats. The correlative to cats are not-cats, or everything that is not a cat. If we take the definition of the word cat and broaden it so that it includes all four-legged, furry creatures then it includes things such as civets and mongooses, but also dogs, cows, and horses. By making the definition too broad we can include not-cats in the definition of cat. This is referred to as suppressing the correlative.

We can change the definition, or make it so broad that it begins to include things that should be part of the correlative. If taken too far we can shrink, or suppress the correlative to the point that the original definition becomes meaningless. Or in other cases we make the definition so broad that it subsumes the definition of another word, such as extending the definition of cat until it is practically synonymous with the word mammal. So by suppressing the correlative we include things in the definition that are not supposed to be there, and in some cases the definition is extended to the point that we already have a word for the broader definition.
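
To make the structure explicit, here is a toy sketch in Python (the animal lists are arbitrary examples, not a real taxonomy). A definition acts like a predicate over a universe of things, and its correlative is everything the predicate excludes; broaden the predicate far enough and the correlative vanishes:

```python
# Toy illustration: a definition is a predicate over a universe of things,
# and its correlative is everything the predicate excludes.
universe = {"house cat", "lion", "lynx", "civet", "dog", "cow", "horse"}

def is_cat(x):
    """The ordinary definition of 'cat'."""
    return x in {"house cat", "lion", "lynx"}

def is_cat_broadened(x):
    """'Any four-legged, furry animal' -- the stretched definition."""
    return x in universe  # every animal in this universe now qualifies

def correlative(predicate):
    """Everything in the universe that the definition does NOT cover."""
    return {x for x in universe if not predicate(x)}

print(correlative(is_cat))            # {'civet', 'dog', 'cow', 'horse'}: 'cat' still excludes things
print(correlative(is_cat_broadened))  # set(): the correlative is suppressed; 'cat' just means 'mammal'
```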

Talking about cats and not-cats may seem a little ridiculous, and just a bit too theoretical. So are there real examples of someone suppressing the correlative? Yes! This is a real thing. People do it all the time. You would be surprised at how often it comes up. When I talk about the definition of the word cat and extending it so that it includes things like dogs and sheep it is tempting to say that no one would do anything so ridiculous. But they do. They do it all the time.

Real example #1 of suppressing the correlative: Bad definitions of socialism.

Recently I was having a discussion with one of my students and he casually stated that when it comes down to it any form of taxation and government spending is really just socialism. This idea has even been summarized in a meme that made the rounds last year.
A good example of suppressing the correlative. Relies on a bad definition of socialism.
According to this, anything done by the government is a form of socialism. There are many people who would severely object to suppressing the correlative and calling a dog a cat, but would not realize that the above meme commits the exact same fallacy.

There are types of governments that are socialist, and there are governments that are not-socialist. What the above meme does, and what my student did, was to take the definition of socialism and stretch it beyond its original definition until the concept of not-socialism doesn't exist. Socialism by definition is when the means of production, distribution, and exchange are controlled by the government in a democratic system. This definition puts boundaries on what socialism is and is not. Under this standard definition of socialism fire departments, public schools, and highways do not fall under the definition of socialism (social security is in that grey area).

If we extend the definition of socialism to include these things then we erase the distinction between socialist and non-socialist governments. When my student took the definition of socialism so far that he defined it as all taxing and spending, the definition became entirely useless, because we already have a word for that: government. Because socialism is a form of government, there are governments that are not-socialist. If you extend the definition of socialism to essentially mean government, then you have included not-socialism in the definition of socialism, and have suppressed the correlative.

Real example #2: Taxation is theft.

Among strict libertarians there is a saying that "taxation is theft". Without going into too much detail, this sentiment relies on the fallacy of suppressing the correlative. It takes the definition of theft and broadens it in such a way that it is rendered useless. Whatever their objections are to taxes and government in general, I would recommend that libertarians stay away from this particular logical fallacy. It never helps your cause to use particularly bad logical fallacies, because the people you will recruit to your cause will be those who do not mind, or do not notice, that they are using logical fallacies. It does not make for a rational movement.

Real example #3: Fake news.

Real fake news (yes that is a thing) is a real problem. But a certain politician and his copycats have taken to calling everything they don't like fake news. This was easy to do because fake news had a very vague definition to begin with, so unlike something like the definition of cat, it was very prone to being redefined. Unfortunately they have redefined it in such a way that they include not-fake news under the umbrella of fake news. And based on things I have seen shared on Facebook people actually believe this fallacy. See my comment about rational movements above.

Real example #4: A random discussion on Facebook about "Science".

Recently I came across this random discussion on Facebook about what constitutes science. One of the people involved (who I will refer to as Charles) made the error of suppressing the correlative and got called out on it (by someone I will call Daniel). What I find utterly fascinating about this discussion is just how beautifully "Charles" suppresses the correlative and is blissfully unaware of just how far he has gone. There was a lot more going on in the discussion but I will pick out the important parts below:
Charles: What's the alternative to science in obtaining knowledge? If someone believes there are better (or even alternate) methods for obtaining knowledge, the last thing I would say is "no there aren't". I would simply ask, "What are they?" 
Daniel: Personal acquaintance is a pretty good alternative. So is historical research. Mathematics isn't bad, either. Hearing stories. Listening to music. Reading novels. None of this is science. 
Charles: these to me are all science. 
Daniel: Then all true knowledge really IS science, just as -- if we define all sports as baseball -- all athletic activity is . . . cue drumroll . . . BASEBALL! (QED.) 
Charles: To me science is simply methodological measurement. The lack of methodological control in one's measurement makes something less scientific, but it also makes it less reliable.... "What was the size of the Roman army in 100BC?" is just as much of a scientific (measurable) question as "How are you feeling today?" 
Daniel: To suggest that Roman history is "science" is to broaden the meaning of the term "science" in a very unhelpful way. "Methodological measurement" has little or nothing to do with historiography, art criticism, understanding poetry, human acquaintance, music appreciation, or any other of scores of distinct fields of human knowledge. I'm reminded of Moliere's "Bourgeois Gentilhomme," who was astonished to discover that, all his life, he had been speaking PROSE! Most people, watching a "Jason Bourne" movie or reading a Charles Dickens novel, would be astonished -- and quite properly so -- to be told that, in doing so, they were doing "science." 
Charles: As I already mentioned, to the degree that someone is not being methodological with their categorization of information, they're NOT doing science. But I also fail to see how the lack of methodology in their information categorization can rightly be called knowledge.... Certainly there are other human activities that are not science. Without question.... Science is (and can only be) the only reliable method for obtaining knowledge. When we talk about music appreciation, it's either any random opinion being equal to any other, or it's based on a methodological categorization of information pertaining to music. I'm assuming people who earn degrees in this would argue true musical "knowledge" looks more like the latter. 
Daniel: Just as, in my view, you stretch the meaning of "science" so far as to compromise its usefulness, your overly broad definition of "measurement" threatens to make the term useless. Anyway, in the same spirit, I might judge the essence of science to be close and disciplined observation -- rather like art appreciation. Thus, I think I'll argue that all true knowledge is art appreciation.
The essence of the fallacy, as pointed out by "Daniel", is that it takes a definition and stretches it to the point that it is no longer useful. The word was defined originally because there was a need to distinguish between A and not-A. When that necessary distinction gets washed out, resulting in confusion or a loss of understanding, it is a fallacy.

As I mentioned in a previous post, the purpose of recognizing logical fallacies is not so that you can go out and point them out to those who employ them, but rather so that you do not fall into that trap. Once you learn how to recognize this fault in thinking found in other people's arguments, you may be able to find it in your own thinking. Some other common places this pops up: the definition of faith, the definition of religion (especially those who try to call atheism a religion), and the definition of "theory".

PS: My observation that it is not a good idea to point out logical fallacies directly to other people (i.e. "Charles") is still true. I tested it, again, and yes I got the same result as before.

Sunday, November 12, 2017

Parables and Big Fish: Rereading Jonah

In Church lessons when we talk about the parable of the Good Samaritan the discussion centers on what we learn from it, and how it applies to our lives. Sometimes the discussion centers on why Jesus chose a Samaritan to be the protagonist in his parable, but the question of historical or factual accuracy never comes up. In talking about the parable we do not ask if there really was a historical Samaritan who stopped to help a man who was left for dead on the side of the road. Neither do we argue that a Samaritan would never actually stop to help a Jew, nor do we question Jesus for having the priest and the Levite walk by without stopping. Those questions in the story do not distract us from the point of the parable which is that we must treat everyone, even people we may not like, as our neighbor.

We do not mistake the parable for an actual story that must be analyzed for its historicity or whether or not the characters were based on real people. Even though the story is not historical we do not consider it to be untrue. We recognize the purpose of the story is not to convey history but to teach a moral.

This sets the parable of the Good Samaritan apart from some of the other stories in the New Testament. For example the story of Jesus’ baptism is not presented as a story with a moral, but as a historical event. With this story it is appropriate to discuss where exactly it took place, even to point out that it happened because there was much water there. For the story of Christ’s baptism it is appropriate, and probably necessary, to consider the historical context, while the parable of the Good Samaritan can be told independent of the historical context.

In an interview with LDS Perspectives Podcast Benjamin Spackman talked about the concept of genre in the Bible. He made the point that the Bible is a collection of many different stories, prophecies, teachings, laws, sermons, and histories. In essence it is a mix of many different genres and while it may be easy to separate some of the different genres, sometimes we can mistake the genre of a particular book or passage in the Bible and that can lead us to misunderstand the Bible.

If we were to focus our discussion of the Good Samaritan on whether or not it was historically accurate we would miss the point, that it is a parable, or a morality tale. If we were to talk about the baptism of Jesus as only an inspirational metaphor then we would be missing the obvious indicators of it as a historical event.

While some things in the Bible are clearly labeled as a parable or a prophecy or history, there are some things that are not clearly labeled. It is these things that can sometimes cause confusion. If we treat something as literal history when it is a parable, teaching tool, or a social commentary then we run the risk of looking beyond the mark and lose the intent of what we find in the Bible. If we make this mistake then we will go looking for historical events that never happened. We might get caught up in a pointless debate about whether or not there actually were any Samaritans who traveled on the road from Jerusalem to Jericho, and miss the point entirely.

While it may seem silly to debate the historicity of the Good Samaritan, there are other stories in the Bible that were written to teach a moral and provide social commentary, not to be literal history, but that are unfortunately interpreted as history. One such story is the story of Jonah.

For many members, discussions about Jonah center on analyzing his motivations and actions as a real man, as well as whether or not someone could actually survive for three days in the belly of a whale. That is, the central concern that we have when we discuss Jonah is the historicity of the story. Sometimes we are more concerned with confirming the literal fulfillment of an apparent miracle than we are with learning the central message of the story. While Jonah was a real person, the actual book of Jonah never presents itself as a literal history, and there are some subtle things about it that set it apart from all the other writings of the prophets.

To give Jonah a little perspective we have to realize that Jonah, the historical man, lived less than 50 years before the Northern Kingdom of Israel was destroyed by Assyria, whose capital city was Nineveh. The book of Jonah was not written by Jonah, and was most likely written after Israel was destroyed by armies from Nineveh. So whoever wrote the book of Jonah was making a somewhat ironic point by having Jonah go to Nineveh. In the story everyone, including the pagan sailors and all the illiterate citizens of Nineveh, obeyed God's commands. Everyone, that is, except the Israelite. The one who is supposed to be the most faithful and chosen of God is consistently less faithful than the illiterate (i.e., those who do not read the scriptures) and superstitious sailors and citizens.

These things, and a few others, mark the story of Jonah as a parable or a social commentary. It is not trying to pass itself off as literal history. For some this would seem to undermine the story of Jonah, but recognizing the genre of the Book of Jonah no more undermines it than recognizing the story of the Good Samaritan as a parable destroys its lessons and power to teach. But by understanding it for what it is, we can get over the big fish and understand the message of Jonah.

Sunday, November 5, 2017

Sci-Fi Sanity Check

A friend wrote me an email a few days ago asking for a sci-fi sanity check. He had been reading a series of sci-fi books where some interesting physics was used to destroy a hostile alien race. He was wondering if the methods used were credible and could actually be used in a hypothetical space battle. Below are his questions followed by my responses.

Question 1:

"First, they had a fleet of ships fire nuclear weapons while travelling close to the speed of light towards the battle. The idea was that the wavelength of the energy from the blast would experience an intense doppler effect, and hit the enemies at an incredibly high frequency. This gave the weapons far more devastating effects than would have otherwise been possible."

Response 1:

This question is one that I looked at and said, "Oh, there is an easy answer to that." But the more I thought about it the more complex it became. So I went and asked a real nuclear physicist in my department, and after we both thought about it for a while we concluded that the issue is irrelevant anyway, though there are some interesting physics questions underneath that made us scratch our heads, none of which would make a better weapon.

The first problem is a misconception of where most of the energy in a nuclear blast goes. When an atom bomb goes boom it does release a significant amount of gamma radiation. That is just something that happens. When the uranium or plutonium fissions it releases gamma rays, which are very energetic as far as electromagnetic radiation goes, and very dangerous, but the vast majority of the energy is actually carried away by the fission products. That is, the daughter isotopes of the nuclear reaction carry most of the energy in the form of kinetic energy. The gamma radiation will fry you by ionizing the atoms in your body, but the thing that actually creates the blast, the thing that will literally blow you to smithereens, is the huge number of fission fragments carrying enormous kinetic energies.

The gamma radiation will only carry away something like 10% of the total energy from a nuclear blast; the rest is in the kinetic energy of the atoms after they split apart.

So if you accelerated the bomb to high speeds the only part of the blast that would be doppler shifted would be the radiation. The particles that make up the most dangerous part of the nuclear weapon would not be doppler shifted. So the radiation (gamma rays) from a nuclear weapon that has been accelerated to near the speed of light would get collimated, doppler shifted, and would be more energetic in the direction of motion, but you would have to be going at something like 99.9998% the speed of light before the doppler shift would make the radiation that much more dangerous than it already was. For example, if the bomb were traveling at 90% the speed of light it would only raise the energy of the gamma radiation by a factor of about 4. To make a significant difference you would literally need to be going 99.9998% the speed of light. At that speed the energy of the photons would be shifted by a factor of 1000, but only in an extremely narrow beam directly in front of the blast. A deviation of as little as 0.5 degrees would cut the doppler shift by more than a factor of 10, leaving an overall boost of less than 100. So aiming would have to be extremely precise, which means the detonation would have to occur right on target or any doppler advantage would be lost.
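
If you want to check these numbers yourself, here is a minimal sketch in Python using the standard relativistic doppler formula (doppler_factor is just an illustrative helper name; the speeds are the ones quoted above):

```python
import math

def doppler_factor(beta, theta_deg=0.0):
    """Relativistic doppler factor f_observed / f_emitted for a source
    moving at speed beta (in units of c), seen at an angle theta_deg
    (in degrees) off the direction of motion."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    theta = math.radians(theta_deg)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

print(doppler_factor(0.9))            # ~4.4: only a factor-of-4 boost at 90% c
print(doppler_factor(0.999998))       # ~1000: the head-on shift quoted above
print(doppler_factor(0.999998, 0.5))  # ~50: half a degree off-axis and most of the boost is gone
```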

But the main issue with this scenario, and the thing that makes everything I discussed above pointless, is that at relativistic speeds the kinetic energy far exceeds any possible yield from the atom bomb. For every kilogram of plutonium there is a theoretical total yield of about 20 kilotons of TNT, which comes to about 8x10^13 joules of energy. A kilogram of lead moving at 10% the speed of light has a kinetic energy of about 5x10^14 joules, or five to six times as much energy as you would get from the atom bomb.

If you take that up to 90% the speed of light, 1 kg of lead would have a kinetic energy of about 1x10^17 joules, or about 20 megatons of TNT, which is about the yield of the largest hydrogen bomb the US ever tested. At relativistic speeds the case that holds the bomb would carry orders of magnitude more kinetic energy than anything the atom bomb itself could produce. So accelerating an atom bomb to relativistic speeds in order to take advantage of the doppler effect is kind of like strapping a stick of dynamite to the front of a semi truck traveling at 100 mph. It's not the dynamite that will kill you.
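
The kinetic energy comparison is easy to verify with the relativistic formula KE = (gamma - 1)mc^2. A minimal sketch, with rounded constants and an illustrative kinetic_energy helper:

```python
import math

C = 2.998e8        # speed of light in m/s
KT_TNT = 4.184e12  # joules per kiloton of TNT

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy, KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

bomb_yield = 20 * KT_TNT  # ~8e13 J, the theoretical yield of 1 kg of plutonium

print(kinetic_energy(1.0, 0.1) / bomb_yield)      # ~5.4: 1 kg of lead at 10% of c already beats the bomb
print(kinetic_energy(1.0, 0.9) / (1e3 * KT_TNT))  # ~28 megatons of TNT for 1 kg of lead at 90% of c
```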

The key is that at relativistic speeds everything has such high kinetic energy that normal stuff like atom bombs are tiny in comparison. Just getting a hunk of metal up to relativistic speeds would make it much more dangerous than any atom bomb.

Question 2:

"The second thing they did was accelerate a barren planet to a significant fraction of light speed (I recognize there are issues with that too, but they never tried to give a scientific explanation for doing that) and send it through the star where their adversaries lived. The result of the high speed mass applying high pressure and force as it passed through was to cause an increase of fusion (because of the mass pushing stellar material together really hard) which released a tremendous burst of additional energy, causing it to become a supernova."

...Yes? It is conceivable. The star would have to be pretty big to begin with, but in order to get a planet to do that it would need to be going really, really, really, really fast. Like 99.9998% the speed of light. In order to get the level of pressure needed to make that happen you would either need a really big planet (basically another star) or an earth-sized planet traveling at 99.9998% the speed of light.

But then we run into the same problem as before. At that speed the planet would have a HUGE amount of kinetic energy. We are talking about 10^44 joules of kinetic energy. To put that in perspective, that is the same amount of energy as a type Ia supernova. So yes, crashing a planet into a star at 99.9998% the speed of light would probably cause the star to undergo a massive amount of fusion, setting off a supernova. But in order to do that the planet would need to have kinetic energy equivalent to a supernova to begin with. It's kind of like dropping an atom bomb on an atom bomb in the hope of getting the second atom bomb to go off. If you got the planet going that fast, hitting a star with it would be pointless since just about anything you hit with it would release enough energy to create a supernova-sized explosion.
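
Here is the same back-of-the-envelope check for the planet, as a minimal sketch (using Earth's mass, the speed quoted above, and the usual ~10^44 J ballpark for a type Ia supernova):

```python
import math

C = 2.998e8        # speed of light in m/s
M_EARTH = 5.97e24  # mass of the earth in kg

beta = 0.999998
gamma = 1.0 / math.sqrt(1.0 - beta**2)  # ~500
ke = (gamma - 1.0) * M_EARTH * C**2     # relativistic KE = (gamma - 1) * m * c^2

print(f"gamma ~ {gamma:.0f}, KE ~ {ke:.1e} J")  # ~2.7e44 J: supernova territory
```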

If your goal is to obliterate an enemy planet with a supernova-sized blast, and if you could get an earth-sized planet up to 99.9998% the speed of light, you wouldn't have to aim it at the star in the hope of setting off a chain reaction that would fuse all the hydrogen in the star. Just have it hit anything, a planet or a star, within a relatively short distance, say 3-4 light years, and that will release enough energy to make a supernova equivalent explosion and cook the alien planet. If your goal is to kill your enemy with an atom bomb, and you have an atom bomb, then just drop your bomb. Don't go for Pinky and the Brain levels of complexity and drop it on another bomb hoping to set it off.

Tuesday, October 31, 2017

Why the Neutron Star Collision was an Important Observation

I am going to take a moment to actually talk about astrophysics (I know, it's a shock! To actually talk about what I do).

Back on August 17th the LIGO gravitational wave observatory detected gravitational waves from the collision of two neutron stars. This was quickly followed by the detection of a gamma ray burst by the Fermi space telescope, and then a host of other observations from other telescopes. This event quickly became the most heavily observed single event in astronomy. There are several good general reviews of what happened that are very accessible to the average reader (NY Times, NPR, Veritasium, and one that is more in depth; there is also a whole webpage about the detection with links to many papers).

So I won't go over the basics because you can get that from other sources, but I will talk about some of the more technical implications to the detection.

First, gold. Gold is very important in astronomy because it is very heavy and hard to make. Usually astronomers ignore the different types of elements; we are usually only concerned about hydrogen and helium. There is a joke in astronomy that the astronomer's periodic table of elements is the simplest one since we only have three elements: hydrogen, helium, and "metals". We refer to everything that is not hydrogen or helium as metals (and that includes decidedly non-metallic elements like nitrogen, oxygen, carbon, and neon). It keeps things simple.

But when it comes to metals we are concerned with what we call metallicity, that is, the relative amount of metals compared to hydrogen and helium. Because hydrogen and helium make up a combined 98% of the mass of the universe, on a cosmological scale everything else is just a rounding error. But on smaller scales (small, as in the size of a cluster of galaxies) the amount of metals becomes important. Except for a tiny amount of lithium, everything that is not hydrogen and helium was produced inside stars, one way or another.

When a star goes through its life cycle it will return a significant amount of mass back into the interstellar medium in the form of stellar winds. For a large star with an initial mass of 10-20 times the mass of the sun, the star may return 80-90% of its mass to the interstellar medium in the form of stellar winds, a nova, or even a supernova. So while a star may start out as almost entirely hydrogen and helium when it forms, the gas that returns to the interstellar medium will be slightly enriched with metals, that is, the metallicity will go up. This enriched gas will go on to form a second generation of stars, which will still be almost entirely hydrogen and helium, but now with a tiny fraction more of metals. The process will repeat, and each time it does the gas will become more enriched with metals. In order to have enough metals that rocky planets such as the earth can form, the gas must go through at least 20 star formation and enrichment cycles. To date, the highest metallicity ever observed in a star is about three times the metallicity of the sun.
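
To make the cycle concrete, here is a deliberately crude toy model in Python. The per-cycle yield is a number made up purely for illustration; real enrichment depends on the mix of stellar masses in each generation:

```python
# Toy model of repeated star formation and enrichment cycles.
metallicity = 0.0        # mass fraction of "metals" in the gas (Z)
yield_per_cycle = 0.001  # hypothetical enrichment per generation of stars

for cycle in range(1, 21):
    metallicity += yield_per_cycle
    print(f"cycle {cycle:2d}: Z = {metallicity:.3f}")
# After 20 cycles Z reaches ~0.02, around the classic solar value,
# roughly the enrichment needed before rocky planets can form.
```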

Because of something called the binding energy per nucleon, only elements up to iron can be produced in the conventional way inside of stars. Anything heavier than iron needs to be produced in another way, because fusion past iron is an endothermic reaction and requires huge amounts of energy. Some heavy elements are produced in supernovas, but there is a subtle problem with that. While there certainly is enough energy in a supernova explosion to produce the heavier elements, most of the mass that is blown off in a supernova is hydrogen. It would take an immense amount of energy and a string of complex, highly improbable reactions to convert that much hydrogen into elements as heavy as gold and lead.

In nuclear physics there are two processes which can produce heavier elements, named the r-process and the s-process (unimaginatively, the r and s stand for rapid and slow respectively). The s-process takes less energy and a much lower neutron flux, and can happen over long time scales. In the s-process heavy elements are built up slowly, one neutron at a time, allowing time for neutrons to decay into protons.

The r-process requires huge amounts of energy, and a truly astronomical neutron flux. A parent element is bombarded with a huge number of neutrons to make an extremely unstable isotope. The only thing keeping it from decaying into smaller elements is the rate at which neutrons are bombarding the nucleus. While a supernova has enough energy for the r-process, there is a distinct lack of neutrons to achieve the neutron flux necessary for the r-process to take place. It does happen, just not at a high enough rate to explain the amount of gold, lead, uranium, and other really heavy elements we observe in the universe. So while normal stellar processes can explain the carbon, oxygen, and nitrogen we see, and novas and supernovas can explain the amount of aluminum, iron, nickel, and zinc we see, neither of those can explain the amount of gold, silver, lead, and uranium we see.

This is where merging neutron stars come in. In the collision there certainly is enough energy for the nucleosynthesis to take place, and because there are two massive sources of neutrons being ripped apart, the problem of meeting the minimum neutron flux is solved. But up until now we had no hard confirmation of neutron star mergers, much less evidence of r-process production of heavy elements. It has been suspected for years, but only with the LIGO detection and the followup observations of the merger remnant has this been confirmed. With the detection of r-process reactions in the remnant of the merger we can now conclude that almost all of the gold, uranium, and other very heavy elements come from neutron star mergers.

Below is an updated periodic table of elements that shows where each element comes from. Some come from more than one source, but you can see just how many elements were detected in the neutron star merger. The purple shows elements from neutron star mergers. It is much more than gold. This is why the detected merger was so important. It showed us where many of the heavy elements, like gold, silver, lead, platinum, iodine, bismuth, tin, uranium, and many more, came from.
From Wikipedia. You can see a larger version here.
For my own research this does not change what I am doing. While most very heavy elements come from neutron star mergers, most metals come from normal stellar processes that are already accounted for in my models. The detection of the merger does not change the overall metallicity, but it does slightly change the relative ratios of the different metals. But this change is not significant enough to impact what I do. The overall metallicity is extremely important, but the heavy elements are still too rare to make any difference. This could be more relevant to those who work on rocky planet formation, and also on nucleosynthesis in interstellar space. But my work does not get down to that level of refinement. I work on fairly large objects where individual stars, and even supernovas, are below the level of my resolution. So while it is exciting, it does not affect my work directly, but at some point someone may provide a slight modification to some of the models that I use that may change a few of the minor outputs.

Thursday, October 12, 2017

Cheating at Connect the Dots: Failing to see the whole picture of Mormonism

Recently I was reading an article by a historian of Mormon history and it got me thinking about playing connect the dots. You see, playing connect the dots is easy... if you ignore most of the dots. Consider the image below. It contains a number of dots of different colors. The question is, can you see the star in the dots?

How about now? Can you see the star? All I have done is emphasize a few important dots, and now all I have to do is connect the dots.

With the dots now connected the star is easy to see. Sure, the dots aren't perfectly aligned with the star, but that's only because I didn't take the time to fully flesh out the star. With a little more work the misalignment could be fixed and the minor discrepancies could be smoothed over. Only those who are extremely nitpicky will complain about the misalignment, and that only distracts from my point that there is a red star in there.

So why am I talking about a manufactured star and connecting the dots? Historians, by definition, do not have all the facts, nor were they impartial observers of the totality of events. It is the job of a historian to take as much information as possible and attempt to build a coherent image of the past, or at least of a single person or event. Historians effectively play an extremely complicated game of connect the dots.

Their work is very similar to mine: I cannot go and directly observe stars and galaxies close up, nor can I see how they behave over millions and billions of years, so I am reduced to playing a very complicated game of connect the dots.

But in these games there are certain rules. It is easy to connect the dots of history into just about any natural progression of events after the fact. But in doing so we have to be careful not to ignore those things that undermine the point we are trying to make.

For example, I could use the Declaration of Independence, Shays' Rebellion, the Whiskey Rebellion, the Civil War, and the modern Tea Party to make the case that the founding principle of the United States is rebellion and distrust of government. I could probably write a fairly convincing argument, backed up with quotes from Thomas Jefferson, centered on the idea that to be American is to rebel.

But in order to make this argument I would have to ignore many other important founding principles of American society, such as representative democracy, constitutional government, separation of powers, personal liberties, and English common law. A similar argument could be made regarding many other aspects of American society, showing a coherent progression of history culminating in the latest manifestation of some defining characteristic. In fact most modern political positions attempt to do just that by tying current debates to what are assumed to be foundational principles.

Whenever we do this we run the risk of taking a few minor points of history and trying to superimpose a particular overall shape or interpretation to it. This is even more tempting when a few of the points are not just minor, but seemingly dominate the historical map (just like in the second image above). If I were to make the case that rebellion runs deep in American society I could make that argument quite persuasively using well known events such as the Revolutionary War, but to say that the United States is founded on rebellion ignores the fact that the United States was not organized from rebellion, but under a constitution formed out of democratic compromise. And it ignores the fact that the vast majority of American history did not involve rebellion, but democratic debate and the application of constitutional law.

So when I recently read an article by Grant Shreve entitled How Mormons Have Made Religion Out of Doubt, I thought about how he was connecting the dots. In his article Shreve makes the case that "The Church of Jesus Christ of Latter-day Saints was founded on the idea of an evolving scriptural canon." This is certainly true, and it is a large and important point in LDS thought, but he connects this central idea to other contemporary and historical events in such a way that he is effectively playing connect the dots while blatantly ignoring other large and just as important parts of LDS thought.

Shreve uses the story of James Colin Brewster to try to show that "If the church seems to have lost its way, a new revelation may be just around the bend," and that "The legacy of James Colin Brewster and numerous other Mormon dissidents lives on in the Remnant, a diffuse international movement of disaffected Mormons." This Remnant organization, which Shreve sees as a natural product of LDS thought, "answers the doubts many Mormons harbor by offering more revelations and additional scriptures."

Thus Shreve takes the idea of an open canon and makes the case that these historical and contemporary dissident movements are simply a natural extension of LDS thought. But in making this argument Shreve ignores other points just as integral to LDS thought, points such as priesthood authority and keys, and church hierarchy. He does briefly acknowledge these things, but they are not presented as foundational and fundamental to LDS thought.

While Shreve uses the concept of the open canon to make his point, he completely ignores the actual content of that new canon. In the same way, someone arguing that the United States was founded on rebellion would have to ignore the fact that the United States is actually founded on constitutional democracy. They would have to ignore the bulk of US history and law.

The concepts of priesthood authority and church hierarchy are acknowledged by Shreve, but he views them not as foundational but as later additions by which Joseph Smith "consolidated revelatory power in himself and a select cadre of elites." Furthermore, the structure of the church is only described in negative terms such as "elaborate", "complex", and made of "byzantine bureaucracies" which "disrupted the mystical and communal experiences of Mormonism’s salad days."

Thus the point that Shreve is trying to make is that the impulse to have new mystical revelations is the natural state of Mormonism, with new prophets popping up "if the church seems to have lost its way" to provide new books of revelation. Based on this reasoning, the Remnant movement is truly following the principle started by Joseph Smith, and the elaborate, complex, and byzantine bureaucracies of the modern LDS church only serve to stifle the true expression of Mormonism.

Unfortunately this game of connecting the dots blatantly ignores much of the rest of the content of Mormonism. The "consolidation of revelatory power" was not a late development imposed by Joseph Smith, but rather was an early and foundational principle. If we ignore this fact, we fail to see the whole picture of Mormonism.