Saturday, March 31, 2018

A Fundamental Disconnect in What is Important

I spend a lot of time reading a variety of views on the Church and Mormonism in general. Every so often some online community of saints or former saints who are critical of the Church whips itself into a moral panic. They talk about all the problems with the Church and bring out a laundry list of things that must be talked about, or things that every Church member should know. Whenever I spend too much time listening to the tinkling cymbals and sounding brass, and then turn to what Church leaders are actually concerned about and talking about, I notice a fundamental disconnect between what the Church's critics think is important and what Church leaders think is important.
I was reminded of this when a visiting General Authority spoke in my ward a few months ago, and again last General Conference, and again today.

Monday, March 26, 2018

Brigham Young and the Aliens on the Sun

The title for this post is something you would expect from a cheap pulp science fiction novel rather than a serious discussion about religion. But it was a topic that I was reminded of recently. While this is by no means a common criticism of Brigham Young, or of the Church in general, it is something that is occasionally brought up in mocking comments online.

Usually these comments take the form, "Brigham Young believed people lived on the sun! That's ridiculous! It's proof he was an idiot/fool/ignorant and we can't trust anything he said."

Did Brigham Young actually believe that people lived on the sun? And if he did, what are we to think of that based on our current understanding of the universe?

So did Brigham Young believe that people lived on the sun? On July 24th, 1869 Brigham Young was speaking at a Church conference, and as part of his unprepared remarks he made the following statement:
"So it is with regard to the inhabitants of the sun. Do you think it is inhabited? I rather think it is. Do you think there is any life there? No question of it; it was not made in vain."
That seems to make the case very straightforward. It's unambiguous. He said he believed it. We know from modern science that people do not live on the sun. Brigham Young was mistaken. Case closed.

Well that was a short blog post.

Except, as my favorite comedy troupe once said, context is everything.

If we are to consider the context we must first look at the general scientific context in which Brigham Young made that statement, and second consider the context within the rest of his discourse. Both contexts are very illuminating.

So what was the scientific context at the time? If you went to an "expert" or an "authority" in the mid-1800s and asked them if the sun, moon, and all the planets were inhabited, what answer would you get?

The idea that the planets of the solar system are inhabited, referred to as the "plurality of worlds", suddenly gained widespread attention among astronomers and philosophers in the early 1700's. It had been discussed before then, but only as a theological question. With the spread of Newtonian ideas, questions about the exact nature of other worlds suddenly became very important.

Previously the realm of the sun, moon, and stars was considered to be entirely distinct from that of the earth. The laws of nature were different "up there". The stuff that made up the heavens was just different from the stuff that made up the sphere of the earth. But the Newtonian revolution introduced the idea that the same laws that governed the earth also governed the heavens. That radically altered the way people perceived the sun, moon, planets, and stars.

New and improved telescopes helped us understand that the planets were spheres just like the earth. This precipitated the idea that not only were the sun, moon, and planets governed by the same laws, but in many ways they were just like the earth. This meant that they had oceans, seas, continents, plants, animals, and intelligent beings, just like the earth.

These ideas were speculative and did not appear in any major textbook on astronomy of the time, but they were discussed among scholars and other well-read men, much in the same way wormholes and parallel universes are viewed today: not strictly scientific, and absent from the major textbooks, yet while some scholars think they do not exist, others speculate that they are possible, and they pop up regularly in popular literature and news.

While the concept of plurality of worlds (i.e. the sun, moon, and planets inhabited by intelligent beings) was never a major point of scientific inquiry it did show up in the writings of some influential individuals.

On April 25, 1756 John Adams wrote in his diary,
"Astronomers tell us, with good Reason, that not only all the Planets and Satellites in our Solar System, but all the unnumbered Worlds that revolve round the fixt Starrs are inhabited, as well as this Globe of Earth."
In the late 1700's the idea of the plurality of worlds was considered to be so well established by reason that Thomas Paine, writing in The Age of Reason, used the idea to criticize Christianity, saying,
"to believe that God created a plurality of worlds, at least as numerous as what we call stars, renders the Christian system of faith at once little and ridiculous, and scatters it in the mind like feathers in the air."
In 1795 the famous astronomer William Herschel wrote:
"The sun, viewed in this light appears to be nothing else than a very eminent, large, and lucid planet, evidently the first, or in strictness of speaking, the only primary one of our system.... Its similarity to the other globes of the solar system with regard to its solidity, its atmosphere, and its diversified surface; the rotation upon its axis, and the fall of heavy bodies, leads us on to suppose that it is most probably also inhabited, like the rest of the planets, by beings whose organs are adapted to the peculiar circumstances of that vast globe. Whatever fanciful poets might say, in making the sun the abode of blessed spirits, or angry moralists devise, in pointing it out as a fit place for the punishment of the wicked, it does not appear that they had any other foundation for their assertions than mere opinion and vague surmise; but now I think myself authorized, upon astronomical principles, to propose the sun as an inhabitable world, and am persuaded that the foregoing observations, with the conclusions I have drawn from them, are fully sufficient to answer every objection that may be made against it." (Emphasis in original.)
Here we see an interesting argument from Herschel. He states that because the sun behaves just like the other planets, it must be inhabited just like the other planets. This argument, he believes, is so obvious that it is "fully sufficient to answer every objection that may be made against it." This is a good example of a logical conclusion stemming from the Newtonian revolution.

A central assumption of the new wave of science was that the same things we observe on earth can be observed in the heavens. Because the sun had many of the same characteristics as the earth it was natural to assume that the sun was inhabited just like the earth.

This idea was taken a step further in 1837 by a popularizer of science named Thomas Dick, who used the population density of England to estimate the populations of the Earth, the moon, the sun, and the planets along with all of their moons, and even Saturn's rings. His calculations assumed that the sun and all the planets and moons, including Saturn's rings, had continents and oceans. Again this was based on the assumption that because the heavens were like the earth, all things we observe on earth were in the heavens.
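To see just how mechanical this kind of reasoning was, here is a minimal sketch of Dick's method in Python. The method really was just surface area times population density; the density figure below is the one usually attributed to him, so treat the result as illustrative rather than as his exact published number.

```python
import math

# A sketch of the style of Dick's calculation: surface area times population
# density. The 280 persons/sq. mile density is the figure usually attributed
# to him; the result is illustrative, not his exact published number.

DENSITY = 280.0                # persons per square mile (England, per Dick)
SUN_RADIUS_MILES = 432_000     # the sun's radius in miles

# Treat the sun as a solid globe with a habitable surface, as Dick did.
sun_surface = 4.0 * math.pi * SUN_RADIUS_MILES**2
print(f"Estimated solar population: {DENSITY * sun_surface:,.0f}")
# -> roughly 6.6e14, i.e. hundreds of trillions of inhabitants on the sun.
```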

But how could these learned men think this? Didn't they know that the sun was a giant ball of hot gas, quite incapable of supporting life? Didn't they know that the moon had no atmosphere?

It is easy for us to look back with hundreds of years of scientific observations and incredible advancements in telescope technology and say, "Obviously...." But back then it was not so obvious. It had only been a few hundred years since astronomers realized that there were mountains on the moon. And the discovery of mountains on the moon confirmed the growing assumption that the heavens were just like the Earth. Hence the "seas" (maria) on the moon. It was not until the end of the 1800's that telescopes were advanced enough to cast serious doubt on the existence of oceans, and even an atmosphere, on the moon.

But what about the sun? Why would astronomers think that the sun could be inhabited?

While astronomers certainly worked out that any life on the sun would be subject to intense bright light, this was right when the modern study of spectroscopy was beginning. Absorption lines were first discovered in 1814, but it was not until 1859 that these absorption lines were first associated with specific chemical elements. That is, it wasn't until 1859 that scientists could even discuss, scientifically, the composition of the sun. It was at the same time, and mostly by the same scientists, that we began to understand the concept of blackbody radiation. This allowed us to measure the temperature of the sun, and is to this day the exact same method we use to measure the temperature of all stars.
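As a minimal modern illustration of that blackbody method (standard textbook physics, nothing specific to the 1859 work): Wien's displacement law converts the wavelength at which a star's spectrum peaks into a surface temperature.

```python
# Wien's displacement law: T = b / lambda_peak. This is the modern, textbook
# version of the blackbody method described above; the peak wavelengths below
# are approximate published values.

WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-kelvins

def blackbody_temperature(peak_wavelength_m):
    """Surface temperature of a star whose spectrum peaks at the given wavelength."""
    return WIEN_B / peak_wavelength_m

print(f"Sun (peak ~500 nm): {blackbody_temperature(500e-9):,.0f} K")         # ~5,800 K
print(f"Betelgeuse (peak ~855 nm): {blackbody_temperature(855e-9):,.0f} K")  # ~3,400 K
```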

Up until then, observation had only confirmed the Newtonian idea that the heavens were just like the Earth. And while spectroscopy at first added even more evidence to the list of reasons why the heavens were just like the Earth, it was the beginning of the end for that assumption. Unfortunately for Herschel, his argument was not the final word, and his reasoning could not "answer every objection that may be made against it."

Over the next 100 years we made many more discoveries that greatly undermined the idea of the plurality of worlds. The first to go was the idea that the sun and moon were inhabited and then later the dream of men from Jupiter and Saturn. The Martians and Venusians were the last to go. But before they did they inspired a generation of science fiction writers.

While we look back on science fiction from the very late 1800's and the 1900's as quaint, simplistic, and "obviously wrong," we forget that those authors were not writing fantasy. They were writing science in a fictional setting. They were dealing with the possible, and not with the imaginary.

In 1952 the scientist and author Isaac Asimov began a series of books about a character named David Starr, who had many adventures all over the solar system. In the books he travels to Mars and meets Martians, and travels to the oceans of Venus. Later, when the books were collected into an anthology in the 1970's, Asimov wrote an introduction in which he apologized for getting the science wrong. It was only after our probes to Mars, Venus, and the other planets that the idea of (current) life on the other planets of the solar system died out as a matter of science.

That these old conceptions of life on the moon, the planets, and the sun were wrong is only obvious now, and as such they are frequently put on the same shelf as fantasy. But they were once a matter of science.

While these ideas never rose to the level of "settled science" they were a matter of discussion and debate, and at the end of the 1800's they captured the imagination of a generation of authors. But with more knowledge a great portion of the speculation passed from science into fantasy.

So how does this affect our view of the past? What about those who thought that the sun, moon, and planets were inhabited just like the Earth? Does this misconception invalidate everything else they said and did?

Does Uranus somehow cease to exist, and do telescopes cease to function, because William Herschel thought the sun was inhabited? Is all past and future work of the Royal Astronomical Society invalidated because their first president once had an idea that he thought was sound, and that was later shown to be implausible? Are the Declaration of Independence and US Constitution invalid because John Adams thought the moon was inhabited? Is Washington DC somehow not the capital of the US because John Adams heard the arguments of the astronomers of his day and was convinced by them? Just because someone thought something was common sense, and it later turned out not to be so, does that upend all things that we know?

A similar set of questions can be asked about Brigham Young. Was he somehow ignorant to accept the statements of astronomers of his day? Does the fact that he did not know everything somehow invalidate his work to lead the Church? Does God cease to exist because a prophet was free to exercise his own mind in trying to understand the universe? Do we expect God to remove the agency of His servants just to spare the bruised egos of a few doubting critics 150 years after the fact?

In our approach to what we know and what we do not, and how we grow in our knowledge and understanding, we must remain humble. Certain things, like the Constitution and the science of astronomy, were not invalidated because someone connected to them once thought something that later proved to be wrong, nor are the organizations those men helped found called into question because they operated on incomplete knowledge.

This realization, that men of science can be mistaken, is especially relevant because it was precisely the point of Brigham Young's talk back in 1869 when he mentioned the men in the sun. If you read it and consider the context you will find very little to criticize, and perhaps more to think about.

Sunday, February 18, 2018

The Fundamentals of Philosophy

This is by no means a comprehensive introduction to philosophy, but it contains the basics. This is not what you would get by taking an intro philosophy course, mostly because at no point in a philosophy course would you typically get an introduction like this. These topics would be covered, but never in a simple, systematic way.

If physics is the study of how things move, and how the universe works, then philosophy is the study of how we think, and how we view the universe.

There are three main branches of philosophy: Metaphysics, Epistemology, and Ethics.

Metaphysics deals with how we fundamentally understand how the universe works, and what makes up the universe. This sets what we consider to be "allowable". This includes things like whether matter is made up of atoms, strings, the four elements, or plum pudding. But it also includes how we view consciousness, the mind, and how we think.

If you want to know the metaphysics of a person then ask them to define, or describe consciousness. The answer they give will not tell you anything about what consciousness actually is, but it will teach you about their metaphysics.

Metaphysics can be broken down into several (sometimes non-exclusive) broad categories. Dualism is the idea that there are two (dual) components to reality: the material, or physical, world, and the world of "the mind", or spirit, or rational thought. Monism is the idea that there is only one nature, and that both matter and the mind derive from the same source. Materialism is the idea that everything is the result of the fundamental laws of physics and the interactions of particles. Materialists deny that "the mind" is a separate thing apart from the firing of neurons in the brain. Materialists are by definition monists, but not all monists are materialists. Mormons are one example of non-materialist monists. Classical Christianity, Islam, and a few other worldviews are fundamentally dualist.

Epistemology deals with how we know, and how we come to know about the world. Perhaps Professor Truman G. Madsen, who spent five decades dealing with philosophical questions, put it best when he said, "There are really only five main modes that have been appealed to in all the traditions, philosophical or religious: an appeal to reason, an appeal to sense experience, to pragmatic trial and error, to authority—the word of the experts—and, finally, to something a bit ambiguous called 'intuition.'"

Science falls squarely under the umbrella of epistemology. If anyone gets into a discussion about what science fundamentally is, it ultimately rests on an endorsement of a particular epistemology, and nothing else. On a fundamental level, science does not have a preferred metaphysics* or ethics.

Logic is a subset of epistemology, and is not synonymous with it.

Ethics deals with what we value. Your ethics determines how you interact with other people and animals, and occasionally things. This area of philosophy is usually the messiest and most contentious.

Ethics is strongly related to Aesthetics, since what we value is generally what we find beautiful, and what we enjoy is what we value.

A huge portion of religion deals with ethical questions.

These three, Metaphysics, Epistemology, and Ethics, are all related to each other, mutually supportive, and occasionally at odds with each other. Our metaphysics determines our epistemology and ethics, our epistemology informs our metaphysics and ethics, and our ethics reveals our metaphysics and epistemology. One cannot have a particular metaphysics without a corresponding epistemology and ethics, because once one is set the others will automatically be defined.

The short descriptions I have given above are by no means exhaustive, nor are the examples I gave all there is. The key is to know that there are these three parts to philosophy, and they are interconnected, related, codependent, reinforcing, and co-determining. They are also by no means static. The particular metaphysics, epistemology, and ethics of someone will definitely change over time.

Also it is possible, and very likely, for someone to have a particular metaphysics, epistemology, or ethics and not be able to explain or articulate it, any more than most people could give a complete breakdown and accounting of their diet, including any and all nutrients. It is also possible for the particular implementation of one of the three to be incompatible with the others (people who smoke may also exercise).

But generally the position of any one of the three will determine the other two. The interrelationships are complex and usually take a great deal of effort to understand.

Most changes in someone's philosophy are subtle and almost imperceptible, but if there is a major shift in one of the three then that will precipitate a reevaluation of the other two.

Doing philosophy correctly can help uncover your own particular metaphysics, epistemology, and ethics. It can show how the particular implementations may be incompatible. For example, if you really believe that everyone is created by God (metaphysics), then that should determine how you treat them (ethics).

We may not realize it but our ethics (and by extension our metaphysics and epistemology) are revealed by our aesthetics. Think about what movies, TV shows, books, stories, blogs, or news articles you like to consume. The kinds of entertainment we like, or the fictional characters we identify with, act as a litmus test for our ethics.

What art is hanging on your wall? Is it realistic, like photographs, or hyper realistic paintings? Or is it abstract? What is the subject matter? All these things can reveal how you fundamentally view the world, and how you think about knowing the universe.

Just as asking about how one views consciousness will reveal their metaphysics, what one surrounds themselves with, or their aesthetics, reveals their ethics, and ethics is codependent on their metaphysics and epistemology.


*I stated that science does not have a preferred metaphysics. That is not entirely true. Because science, as an epistemology, requires a corresponding metaphysics and ethics. It's just that the metaphysical and ethical demands of pure science are minimal. Most pronouncements regarding what we "should do" because of science, actually have nothing to do with science as an epistemology. When people make an appeal to "Science", or Science™, they are always, without realizing it, bringing a particular metaphysics and ethics along with them. Their assertions don't actually have much to do with the epistemological method known as science.

Sunday, January 14, 2018

Extreme Skepticism is Not Scientific

Many years ago I was in a research group meeting where we were discussing some astrophysics related idea. One of the other graduate students, referencing a particular paper under discussion, made the comment that some feature observed by astronomers is "apparently" caused by a certain type of star. My PhD advisor stopped the grad student right there and asked, "Apparently? What else could it be? There is nothing else that it could be."

He then went on to make the point that in science we are taught to doubt established explanations, but only if we have a reason to doubt them and have an alternate explanation. In this case, he explained, expressing skepticism of the commonly accepted explanation was not warranted because we did not have an alternate explanation. The standard explanation did not have any "apparent" problems; it fit with everything else we know about astronomy, stars, and galaxies. So the impulse to maintain a skeptical attitude was not helpful unless we were willing to provide an alternate explanation. Science is about increasing our understanding, and skepticism for skepticism's sake does not do that. He told us that if we are going to doubt the established explanation, even by throwing in a seemingly innocuous "apparently", then we should have a better, alternate explanation.

So how does this fit with the popular conception of science? Typically science is portrayed as constantly asking questions, doubting previous conclusions, and maintaining a skeptical attitude. As one person put it, "science without doubt isn't science at all."

It is easy to find a plethora of quotes about how science doesn't go anywhere without people doubting, asking questions, and throwing out old ideas. Famous science communicators will proudly proclaim that all the old ideas we once thought to be true have now been shown to be false, and we may eventually overturn everything we now think to be true.

In science classes we emphasize the importance of asking questions, being critical, demanding rigor, and not accepting an explanation "just because". But is that how actual scientists do science? We may say that it is, but when it comes down to it scientists never actually "question everything". They question one thing at a time, and even then they don't just throw it out. They look for an explanation within established parameters. Even Thomas Kuhn's paradigm shifters did not "question everything" and throw out all the "false ideas of the past." They worked within a larger epistemological approach that had established norms and rules that they did not try to undermine.

What gets lost when popular science communicators tell the stories of Galileo, Newton, and Einstein is that they weren't right because they questioned fundamental assumptions. They were right because their explanations were better than the alternatives.

Galileo wasn't right because he questioned the established science of the day. He was right because his explanation fit with what others took the time and effort to measure and observe. In some cases Galileo wasn't even "right" until hundreds of years later.

Einstein wasn't right because he "thought outside the box" and questioned the established wisdom. He was right because hundreds of other physicists conducted experiments to check if his theories fit the data better than other possibilities. Some of these tests were at first inconclusive, and had to be redesigned to make the necessary measurements.

When it comes down to it, always questioning things, and never accepting explanations and answers really isn't science. It's just ignorance. Maintaining a constant stream of skepticism is not conducive to science. Offering alternate explanations is. Just doubting is not the stuff of science. You must have a reason to doubt. The received wisdom, or standard explanation must fail in some way. Science happens not when we try to break things, but when we try to fix things that we find to be broken.

Sunday, December 17, 2017

My Favorite Logical Fallacy: The Suppressed Correlative

In a post that I wrote last year I noted how some paradoxes can be resolved if you just consider whether there has been a subtle shift in the definition of a word. In the case of the heap paradox, the word heap is inherently vague and has no exact number. The paradox is created when the definition of heap is given an exact number, that is, when the word is redefined to include additional information that was not present in the original definition. With the heap paradox the definition is narrowed in the course of the discussion, unknown to perhaps everyone involved, thereby creating the paradox. It is precisely a paradox because the key word, term, phrase, or idea is modified without the knowledge of those involved.

Redefining words is not necessarily a problem. It is only a problem if confusion and misunderstanding results from the redefinition, or if by redefining the word something in our understanding is lost. The redefinition of words should only help increase understanding, not destroy it.

So if the heap paradox relies on narrowing the definition of a vague word, what about the opposite, extending or broadening the definition? The opposite falls under a group of fallacies related to what is called the correlative. For every defined word there is a correlative, or everything that is not covered by the definition of that word.

For example, the word cat refers to a type of four-legged, furry animal that eats meat. When I use the word cat what I mean by that is generally understood. This includes house cats, mountain lions, tigers, lions, lynxes, panthers, and all kinds of cats. Despite the word cat being well defined there is inherent fuzziness to what constitutes a cat. For example, should a civet fall under the definition of a cat?
A civet. Image from Wikipedia.
What about a mongoose?
Pictures of mongooses. Image from Wikipedia.
At this point it is stretching the definition of the word cat. There definitely are things that are cats and there are things that are not cats, like dogs, horses, rocks, and rivers. Between the two, cats and not-cats, there is a grey area where it is debatable whether or not the word cat applies. The ambiguity at the edge of the definition is not a problem, that is just the nature of language, but it is very clear that there are cats and not-cats, even if the dividing line is not always clear.

With the word cat there is a specific definition that is understood by everyone. Thus there are things that are cats. The correlative to cat is not-cat, or everything that is not a cat. If we take the definition of the word cat and broaden it so that it includes all four-legged, furry creatures, then it includes things such as civets and mongooses, but also dogs, cows, and horses. By making the definition too broad we include not-cats in the definition of cat. This is referred to as suppressing the correlative.

We can change the definition, or make it so broad that it begins to include things that should be part of the correlative. If taken too far we can shrink, or suppress, the correlative to the point that the original definition becomes meaningless. In other cases we make the definition so broad that it subsumes the definition of another word, such as extending the definition of cat until it is practically synonymous with the word mammal. So by suppressing the correlative we include things in the definition that are not supposed to be there, and in some cases the definition is extended to the point that we already have a word for the broader definition.
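Here is the same idea as a toy sketch in Python (the predicates are mine, purely for illustration):

```python
# Toy illustration of suppressing the correlative: broadening a definition
# until its complement (the correlative) vanishes. The predicates are
# hypothetical, purely for illustration.

animals = ["house cat", "lion", "lynx", "civet", "dog", "cow", "horse"]

def is_cat_narrow(animal):
    """The ordinary definition: members of the cat family."""
    return animal in {"house cat", "lion", "lynx"}

def is_cat_broad(animal):
    """The broadened definition: any four-legged furry creature."""
    return True  # every animal on our list qualifies

for is_cat in (is_cat_narrow, is_cat_broad):
    cats = [a for a in animals if is_cat(a)]
    not_cats = [a for a in animals if not is_cat(a)]  # the correlative
    print(f"{is_cat.__name__}: cats = {cats}, correlative = {not_cats}")

# Under the broad definition the correlative is empty: "cat" no longer
# distinguishes anything, which is exactly the suppressed correlative.
```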

Talking about cats and not-cats may seem a little ridiculous, and just a bit too theoretical. So are there real examples of someone suppressing the correlative? Yes! This is a real thing. People do it all the time. You would be surprised at how often it comes up. When I talk about the definition of the word cat and extending it so that it includes things like dogs and sheep it is tempting to say that no one would do anything so ridiculous. But they do. They do it all the time.

Real example #1 of suppressing the correlative: Bad definitions of socialism.

Recently I was having a discussion with one of my students and he casually stated that when it comes down to it any form of taxation and government spending is really just socialism. This idea has even been summarized in a meme that made the rounds last year.
A good example of suppressing the correlative. Relies on a bad definition of socialism.
According to this meme, anything done by the government is a form of socialism. There are many people who would severely object to suppressing the correlative by calling a dog a cat, but would not realize that the above meme commits the exact same fallacy.

There are governments that are socialist, and there are governments that are not-socialist. What the above meme does, and what my student did, was to take the definition of socialism and stretch it beyond its original meaning until the concept of not-socialism doesn't exist. Socialism, by definition, is when the means of production, distribution, and exchange are controlled by the government in a democratic system. This definition puts boundaries on what socialism is and is not. Under this standard definition of socialism, fire departments, public schools, and highways do not fall under the definition of socialism (social security is in that grey area).

If we extend the definition of socialism to include these things then we erase the distinction between socialist and non-socialist governments. When my student stretched the definition of socialism so far that it covered all taxing and spending, the definition became entirely useless, because we already have a word for that: government. Because socialism is a form of government, there are governments that are not-socialist. If you extend the definition of socialism to essentially mean government, then you have included not-socialism in the definition of socialism, and have suppressed the correlative.

Real example #2: Taxation is theft.

Among strict libertarians there is a saying that "taxation is theft". Without going into too much detail, this sentiment relies on the fallacy of suppressing the correlative. It takes the definition of theft and broadens it in such a way that it is rendered useless. Whatever their objections to taxes and government in general, I would recommend that libertarians stay away from this particular logical fallacy. It never helps your cause to use particularly bad logical fallacies, because the type of people you will recruit will be those who do not mind, or do not know, that they are using logical fallacies. That does not make for a rational movement.

Real example #3: Fake news.

Real fake news (yes, that is a thing) is a real problem. But a certain politician and his copycats have taken to calling everything they don't like fake news. This was easy to do because fake news had a very vague definition to begin with, so unlike the definition of cat, it was very prone to being redefined. Unfortunately they have redefined it in such a way that they include not-fake news under the umbrella of fake news. And based on things I have seen shared on Facebook, people actually believe this fallacy. See my comment about rational movements above.

Real example #4: A random discussion on Facebook about "Science".

Recently I came across a random discussion on Facebook about what constitutes science. One of the people involved (whom I will refer to as Charles) made the error of suppressing the correlative and got called out on it (by someone I will call Daniel). What I find utterly fascinating about this discussion is just how beautifully "Charles" suppresses the correlative and is blissfully unaware of just how far he has gone. There was a lot more going on in the discussion, but I will pick out the important parts below:
Charles: What's the alternative to science in obtaining knowledge? If someone believes there are better (or even alternate) methods for obtaining knowledge, the last thing I would say is "no there aren't". I would simply ask, "What are they?" 
Daniel: Personal acquaintance is a pretty good alternative. So is historical research. Mathematics isn't bad, either. Hearing stories. Listening to music. Reading novels. None of this is science. 
Charles: these to me are all science. 
Daniel: Then all true knowledge really IS science, just as -- if we define all sports as baseball -- all athletic activity is . . . cue drumroll . . . BASEBALL! (QED.) 
Charles: To me science is simply methodological measurement. The lack of methodological control in one's measurement makes something less scientific, but it also makes it less reliable.... "What was the size of the Roman army in 100BC?" is just as much of a scientific (measurable) question as "How are you feeling today?" 
Daniel: To suggest that Roman history is "science" is to broaden the meaning of the term "science" in a very unhelpful way. "Methodological measurement" has little or nothing to do with historiography, art criticism, understanding poetry, human acquaintance, music appreciation, or any other of scores of distinct fields of human knowledge. I'm reminded of Moliere's "Bourgeois Gentilhomme," who was astonished to discover that, all his life, he had been speaking PROSE! Most people, watching a "Jason Bourne" movie or reading a Charles Dickens novel, would be astonished -- and quite properly so -- to be told that, in doing so, they were doing "science." 
Charles: As I already mentioned, to the degree that someone is not being methodological with their categorization of information, they're NOT doing science. But I also fail to see how the lack of methodology in their information categorization can rightly be called knowledge.... Certainly there are other human activities that are not science. Without question.... Science is (and can only be) the only reliable method for obtaining knowledge. When we talk about music appreciation, it's either any random opinion being equal to any other, or it's based on a methodological categorization of information pertaining to music. I'm assuming people who earn degrees in this would argue true musical "knowledge" looks more like the latter. 
Daniel: Just as, in my view, you stretch the meaning of "science" so far as to compromise its usefulness, your overly board definition of "measurement" threatens to make the term useless. Anyway, in the same spirit, I might judge the essence of science to be close and disciplined observation -- rather like art appreciation. Thus, I think I'll argue that all true knowledge is art appreciation.
The essence of the fallacy, as pointed out by "Daniel", is that it takes a definition and stretches it to the point that it is no longer useful. The word was defined originally because there was a need to distinguish between A and not-A. When that necessary distinction gets washed out, and the result is a loss of understanding or confusion, then it is a fallacy.

As I mentioned in a previous post, the purpose of recognizing logical fallacies is not so that you can go out and point them out to those who employ them, but rather so that you do not fall into that trap. Once you learn how to recognize this fault in thinking found in other people's arguments, you may be able to find it in your own thinking. Some other common places this pops up: the definition of faith, the definition of religion (especially those who try to call atheism a religion), and the definition of "theory".

PS: My observation that it is not a good idea to point out logical fallacies directly to other people (i.e. "Charles") is still true. I tested it, again, and yes I got the same result as before.

Sunday, November 12, 2017

Parables and Big Fish: Rereading Jonah

In Church lessons when we talk about the parable of the Good Samaritan the discussion centers on what we learn from it, and how it applies to our lives. Sometimes the discussion centers on why Jesus chose a Samaritan to be the protagonist in his parable, but the question of historical or factual accuracy never comes up. In talking about the parable we do not ask if there really was a historical Samaritan who stopped to help a man who was left for dead on the side of the road. Neither do we argue that a Samaritan would never actually stop to help a Jew, nor do we question Jesus for having the priest and the Levite walk by without stopping. Those questions in the story do not distract us from the point of the parable which is that we must treat everyone, even people we may not like, as our neighbor.

We do not mistake the parable for an actual story that must be analyzed for its historicity or whether or not the characters were based on real people. Even though the story is not historical we do not consider it to be untrue. We recognize the purpose of the story is not to convey history but to teach a moral.

This sets the parable of the Good Samaritan apart from some of the other stories in the New Testament. For example the story of Jesus’ baptism is not presented as a story with a moral, but as a historical event. With this story it is appropriate to discuss where exactly it took place, even to point out that it happened because there was much water there. For the story of Christ’s baptism it is appropriate, and probably necessary, to consider the historical context, while the parable of the Good Samaritan can be told independent of the historical context.

In an interview with LDS Perspectives Podcast Benjamin Spackman talked about the concept of genre in the Bible. He made the point that the Bible is a collection of many different stories, prophecies, teachings, laws, sermons, and histories. In essence it is a mix of many different genres and while it may be easy to separate some of the different genres, sometimes we can mistake the genre of a particular book or passage in the Bible and that can lead us to misunderstand the Bible.

If we were to focus our discussion of the Good Samaritan on whether or not it was historically accurate we would miss the point, that it is a parable, or a morality tale. If we were to talk about the baptism of Jesus as only an inspirational metaphor then we would be missing the obvious indicators of it as a historical event.

While some things in the Bible are clearly labeled as a parable or a prophecy or history, there are some things that are not clearly labeled. It is these things that can sometimes cause confusion. If we treat something as literal history when it is a parable, teaching tool, or social commentary, then we run the risk of looking beyond the mark and losing the intent of what we find in the Bible. If we make this mistake then we will go looking for historical events that never happened. We might get caught up in a pointless debate about whether or not there actually were any Samaritans who traveled on the road from Jerusalem to Jericho, and miss the point entirely.

While it may seem silly to debate the historicity of the Good Samaritan, there are other stories in the Bible that were written to teach a moral and provide social commentary, not to be literal history, but that are unfortunately interpreted as history. One such story is the story of Jonah.

For many members, discussions about Jonah center on analyzing his motivations and actions as a real man, as well as whether or not someone could actually survive for three days in the belly of a whale. That is, the central concern that we have when we discuss Jonah is the historicity of the story. Sometimes we are more concerned with confirming the literal fulfillment of an apparent miracle than we are with learning the central message of the story. While Jonah was a real person, the actual book of Jonah never presents itself as a literal history, and there are some subtle things about it that set it apart from all the other writings of the prophets.

To give Jonah a little perspective we have to realize that Jonah, the historical man, lived less than 50 years before the Northern Kingdom of Israel was destroyed by Assyria, whose capital city was Nineveh. The book of Jonah was not written by Jonah, and was most likely written after Israel was destroyed by armies from Nineveh. So whoever wrote the book of Jonah was making a somewhat ironic point by having Jonah go to Nineveh. In the story everyone, including the pagan sailors and all the illiterate citizens of Nineveh, obeyed God's commands. Everyone, that is, except the Israelite. The one who is supposed to be the most faithful and chosen of God is consistently less faithful than the illiterate (i.e. those who do not read the scriptures) and superstitious sailors and citizens.

These things, and a few others, mark the story of Jonah as a parable or a social commentary. It is not trying to pass itself off as literal history. For some this would seem to undermine the story of Jonah, but recognizing the genre of the Book of Jonah no more undermines it than recognizing the story of the Good Samaritan as a parable destroys its lessons and power to teach. By understanding it for what it is, we can get over the big fish and understand the message of Jonah.

Sunday, November 5, 2017

Sci-Fi Sanity Check

A friend wrote me an email a few days ago asking for a sci-fi sanity check. He had been reading a series of sci-fi books where some interesting physics was used to destroy a hostile alien race. He was wondering if the methods used were credible and could actually be used in a hypothetical space battle. Below are his questions followed by my responses.

Question 1:

"First, they had a fleet of ships fire nuclear weapons while travelling close to the speed of light towards the battle. The idea was that the wavelength of the energy from the blast would experience an intense doppler effect, and hit the enemies at an incredibly high frequency. This gave the weapons far more devastating effects than would have otherwise been possible."

Response 1:

This question is one that I looked at and said, "Oh, there is an easy answer to that." But the more I thought about it the more complex it became. So I went and asked a real nuclear physicist in my department, and we both thought about it for a while and concluded that the issue is irrelevant anyway. There are some interesting physics questions underneath that made us scratch our heads, but none of them would make for a better weapon.

The first problem is a misconception of where most of the energy in a nuclear blast goes. When an atom bomb goes boom it does release a significant amount of gamma radiation. That is just something that happens. When the uranium or plutonium fissions it releases gamma rays, which are very energetic as far as electromagnetic radiation goes, and very dangerous, but the vast majority of the energy is actually carried away by the fission products. That is, the daughter isotopes of the nuclear reaction carry most of the energy in the form of kinetic energy. The gamma radiation will fry you and ionize the atoms in your body, but the thing that actually creates the blast, and will literally rip you to smithereens, is the huge number of particles with huge kinetic energies.

The gamma radiation will only carry away something like 10% of the total energy from a nuclear blast; the rest is in the kinetic energy of the atoms after they split apart.

So if you accelerated it to high speeds the only part of the blast that would be doppler shifted would be the radiation. The particles that make up the most dangerous part of the nuclear weapon would not be doppler shifted. So the radiation (gamma rays) from a nuclear weapon that has been accelerated to near the speed of light would get collimated, doppler shifted, and would be more energetic in the direction of motion, but you would have to be going at like 99.9998% the speed of light before the doppler shift would make the radiation that much more dangerous than it already was. For example, if the bomb was traveling at 90% the speed of light it would only raise the energy of the gamma radiation by a factor of about 4. To make a significant difference you would literally need to be going 99.9998% the speed of light. At that speed the energy of the photons would be shifted by a factor of 1000, but only in an extremely narrow beam directly in front of the blast. A deviation of as little as half a degree would cut the doppler shift by a factor of 20 (an overall increase of only a factor of about 50). So aiming would have to be extremely precise, which means the detonation would have to occur right on target or any doppler advantage would be lost.
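A quick back-of-the-envelope check of those blueshift factors (my own numbers from standard special relativity, nothing from the books themselves):

```python
import math

# Relativistic Doppler factor for light emitted directly toward the observer:
# f_obs / f_src = sqrt((1 + beta) / (1 - beta)), where beta = v / c.

def head_on_doppler(beta):
    return math.sqrt((1.0 + beta) / (1.0 - beta))

for beta in (0.5, 0.9, 0.99, 0.999998):
    print(f"v = {beta:.6f} c -> gamma rays blueshifted by {head_on_doppler(beta):,.0f}x")

# v = 0.9 c gives only ~4x; you need v = 0.999998 c for the factor of 1000,
# and only along a very narrow cone in the direction of motion.
```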

But the main issue with this scenario, and the thing that makes everything I discussed above pointless, is that at relativistic speeds the kinetic energy far exceeds any possible yield from the atom bomb. For every kilogram of plutonium there is a theoretical total yield of about 20 kilotons of TNT, which comes to about 8x10^13 joules of energy. A kilogram of lead moving at 10% the speed of light has kinetic energy of about 5x10^14 joules, or more than five times as much energy as you would get from the atom bomb.

If you take that up to 90% the speed of light, 1 kg of lead would have kinetic energy of about 1x10^17 joules, or about 20 megatons of TNT, which is about the yield of the largest hydrogen bomb the US ever tested. At relativistic speeds the case that holds the bomb would have orders of magnitude more kinetic energy than anything the atom bomb could produce. So accelerating an atom bomb to relativistic speeds in order to take advantage of the doppler effect is kind of like strapping a stick of dynamite to the front of a semi truck traveling at 100 mph. It's not the dynamite that will kill you.

The key is that at relativistic speeds everything has such high kinetic energy that normal stuff like atom bombs are tiny in comparison. Just getting a hunk of metal up to relativistic speeds would make it much more dangerous than any atom bomb.
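Here is the arithmetic behind those comparisons, as a short sketch (standard relativistic kinetic energy, KE = (gamma - 1)mc^2; the ~20 kt/kg plutonium yield is the rough figure used above):

```python
import math

# Relativistic kinetic energy, KE = (gamma - 1) * m * c^2, compared against
# the rough ~20 kiloton theoretical yield per kilogram of plutonium.

C = 2.998e8             # speed of light, m/s
KT_TNT = 4.184e12       # joules per kiloton of TNT
PU_YIELD = 20 * KT_TNT  # ~8e13 J per kg of plutonium

def kinetic_energy(mass_kg, beta):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

for beta in (0.1, 0.9):
    ke = kinetic_energy(1.0, beta)
    print(f"1 kg at {beta}c: {ke:.1e} J, about {ke / PU_YIELD:.0f}x the bomb's yield")
# -> ~4.5e14 J (5x) at 0.1c, and ~1.2e17 J (~1,400x) at 0.9c.
```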

Question 2:

"The second thing they did was accelerate a barren planet to a significant fraction of light speed (I recognize there are issues with that too, but they never tried to give a scientific explanation for doing that) and send it through the star where their adversaries lived. The result of the high speed mass applying high pressure and force as it passed through was to cause an increase of fusion (because of the mass pushing stellar material together really hard) which released a tremendous burst of additional energy, causing it to become a supernova."

Response 2:

...Yes? It is conceivable. The star would have to be pretty big to begin with, but in order to get a planet to do that it would need to be going really, really, really, really fast. Like 99.9998% the speed of light. In order to get the level of pressure needed to make that happen you would either need a really big planet (basically another star) or an Earth-sized planet traveling at that speed.

But then we run into the same problem as before. At that speed the planet would have a HUGE amount of kinetic energy. We are talking about 10^44 joules of kinetic energy. To put that in perspective, that is the same amount of energy as a type Ia supernova releases. So yes, crashing a planet into a star at 99.9998% the speed of light would probably cause the star to undergo a massive amount of fusion, setting off a supernova. But in order to do that the planet would need to have kinetic energy equivalent to a supernova to begin with. It's kind of like dropping an atom bomb on an atom bomb in the hope of getting the second atom bomb to go off. If you got the planet going that fast, hitting a star with it would be pointless, since just about anything you hit with it would release enough energy to create a supernova-sized explosion.
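The same formula makes the point for the planet (my own order-of-magnitude estimate, using an Earth-mass planet):

```python
import math

# Order-of-magnitude check: kinetic energy of an Earth-mass planet at
# 99.9998% of the speed of light versus a typical type Ia supernova (~1e44 J).

C = 2.998e8        # speed of light, m/s
M_EARTH = 5.97e24  # kg
SUPERNOVA = 1e44   # joules, rough energy release of a type Ia supernova

beta = 0.999998
gamma = 1.0 / math.sqrt(1.0 - beta**2)  # ~500
ke = (gamma - 1.0) * M_EARTH * C**2
print(f"gamma = {gamma:.0f}, KE = {ke:.1e} J = {ke / SUPERNOVA:.1f} supernovae")
# -> a few times 1e44 J: the projectile already carries a supernova's energy.
```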

If your goal is to obliterate an enemy planet with a supernova-sized blast, and if you could get an Earth-sized planet up to 99.9998% the speed of light, you wouldn't have to aim it at the star in the hope of setting off a chain reaction that would fuse all the hydrogen in the star. Just have it hit anything, a planet or a star, within a relatively short distance of the target, say 3-4 light years, and that will release enough energy to make a supernova-equivalent explosion and cook the alien planet. If your goal is to kill your enemy with an atom bomb, and you have an atom bomb, then just drop your bomb. Don't go for Pinky and the Brain levels of complexity and drop it on another bomb hoping to set it off.