Exposing PseudoAstronomy

January 30, 2017

Podcast Episode 156: The Scientific Method — How We Get to What We Know


The Scientific
Method: Technique for finding
What’s true, and what’s not.

Another roughly half-hour episode based around the idea of how we know what we know … in other words, the Scientific Method. It’s an episode wrapped up in some underlying subtext — that’s all I’ll say about it. There are no other real segments in this episode.

Sorry Not Sorry Meme


August 1, 2013

Podcast Episode 82: How to Design a Hyperdimensional Physics Experiment


Hyper-D physics
Could be tested with a watch.
So, is Hoagland right?

This is a longer episode, over 40 minutes long. Hopefully I didn’t drone too much. The episode is based on a blog post from last May, going through how one could design an experiment IF you assume EVERY SINGLE BIT of what Richard Hoagland says about hyperdimensional physics is true. It’s meticulous. Which is why it’s long. And I show why, quite literally, Richard’s data as they are currently presented are meaningless.

And now, seriously, the next episode will be about claims made by David Sereda on the structure of … stuff. He wasn’t in this episode because I had about 40 hrs of Coast to Coast audio to listen to, and I have about 16 hrs left. So, yeah, next time.

BTW, link to the new blog is: WND Watch.

April 3, 2013

Is the Scientific Method a Part of Science?


Introduction

You probably all remember it, and I can almost guarantee that you were all taught it if you went through any sort of standard American education system (with full recognition for my non-USAian readers). It’s called the Scientific Method.

That thing where you start with a question, form a hypothesis, do an experiment, see if it supports or refutes your hypothesis, iterate, etc. This thing:

Flow Chart showing the Scientific Method

The question is, does anyone outside of Middle and High School science class actually use it?

A Science Fair Question

I recently judged a middle and high school science fair here in Boulder, CO (USA). The difference in what you see between the two, at least at this science fair, is dramatic: High schoolers are doing undergraduate-level (college) work and often-times novel research while middle schoolers are doing things like, “Does recycled paper hold more weight than non-recycled?” High schoolers are presenting their work on colorful posters with data and graphs and ongoing research questions, while middle schoolers have a board labeled with “Hypothesis,” “Method,” “Data,” and “Conclusions.”

I was asked by a member of the public, after I had finished judging, why that was. He wanted to know why the high school students seemed to have forsaken the entire process and methodology of science, not having those steps clearly laid out.

My answer at the time – very spur-of-the-moment because he was stuttering and I had to catch a bus – was that it IS there in the high school work, but it was more implicit than explicit. That often in research, we have an idea of something and then go about gathering data for it and see what happens: It’s more of an exploration into what the data may show rather than setting out on some narrow path.

That was about a month ago, and I haven’t thought much more about it. But, the Wired article today made me think this would be a good topic for a blog post where I could wax philosophical a bit and see where my own thoughts lie.

Field-Specific?

A disclaimer up-front (in-middle?) is that I’m an astronomer (planetary geophysicist?). This might be field-specific. The Wired article even mentions astronomy in its list of obvious cases where the Scientific Method is usually not used:

Look at just about any astronomy “experiment”. Most of the cool things in astronomy are also discovered and then a model is created. So, the question comes second. How do you do a traditional experiment on star formation? I guess you could start with some hydrogen and let it go – right? Well, that might take a while.

That said, I’m sure that other fields have the same issues, and what I’m going to talk about is really just a big grey area. Some fields may sit closer to one end of the greyscale than the other.

A Recent Paper I Co-Authored

I was recently a co-author on a paper entitled, “Distribution of Early, Middle, and Late Noachian cratered surfaces in the Martian highlands: Implications for resurfacing events and processes.” The paper was probably the only professional paper I have ever been an author on that explicitly laid out Hypotheses, tests for those hypotheses, what the conclusion would be depending on the results, then the Data, then the Conclusions. And it was a really good way to write THAT paper. But not necessarily other papers.

A Recent Paper I Wrote

I had a paper that was recently accepted (too recently to supply a link). The paper was about estimating and modeling the ages of the largest craters on Mars. There was an Introduction, Methods, Data, and Conclusions. There was no Hypothesis. It was effectively a, “Here is something we can explore with this database, let’s do it and put these numbers out there and then OTHER people may be able to do something with those numbers (or we can) in future work.” There really was no hypothesis to investigate. Trying to make one up to suit the Scientific Method would have been contrived.

This is also something the Wired article mentions:

… often the results of a scientific study are often presented in the format of the scientific method (even though it might not have been carried out in that way). This makes it seem like just about all research in science follows the scientific method.

This is especially the case in medical journals, but not necessarily elsewhere.

Change the “Scientific Method”?

The Wired article offers this as the “new” method:

New Scientific Method (via Wired)

Here’s the accompanying justification:

There are a lot of key elements, but I think I could boil it down to this: make models of stuff. Really, that is what we do in science. We try to make equations or conceptual ideas or computer programs that can agree with real life and predict future events in real life. That is science.
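To make that quoted idea a little more concrete, here is a minimal sketch of the fit-a-model-then-predict loop being described (my own toy illustration in Python, not anything from the Wired piece), using the orbital periods of a few planets:

```python
import numpy as np

# Toy illustration of "make models of stuff": take a few observations,
# fit a simple model, then use the model to predict something not yet
# measured. Data: orbital distance and period for five planets (rounded).
a_au   = np.array([0.39, 0.72, 1.00, 1.52, 5.20])   # semi-major axis, AU
period = np.array([0.24, 0.62, 1.00, 1.88, 11.86])  # orbital period, years

# Model: period = a**k for some exponent k (Kepler's third law says k = 1.5).
# Fit k with a straight-line fit in log-log space.
k, _ = np.polyfit(np.log(a_au), np.log(period), 1)
print(f"Fitted exponent k = {k:.2f}")                # comes out near 1.5

# The model earns its keep by predicting something outside the fit:
# Saturn, at about 9.58 AU (actual period is about 29.4 years).
print(f"Predicted period for Saturn: {9.58**k:.1f} years")
```

Observations constrain the model, the model makes a prediction, and the prediction gets checked against new observations; if it fails, the model gets revised. That loop is the whole point.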

I will preface this next part by saying I am NOT up-to-date on the latest pedagogy of teaching and I am NOT trained in teaching methods (other than 50+ hours of Graduate Teacher Program certification during grad school plus teaching several classes, including two as instructor of record).

With that in mind, I think that this is a good idea in later years of grade school education. In the early years, I think that the methodology of the Scientific Method helps get across the basic idea and concepts of how science works, while later on you can get to how it practically works.

Let me explain with an example: In third grade, I was taught about the planets in the solar system plus the sun, plus there are asteroids, plus there are random comets. In eighth grade, I was taught a bit more astronomy and the solar system was a bit messier, but still we had those nine planets (this was pre-2006) and the sun and comets and asteroids plus moons and rings.

Then you get into undergrad and grad school, and you learn about streaming particles coming from the sun, such that we can be thought of as sitting in the sun’s outermost atmosphere. You get taught about magnetic fields and plasmas. Zodiacal light. The Kuiper Belt, Oort Cloud, asteroid resonances, water is everywhere and not just on Earth, and all sorts of other complications that get into how things really work.

That’s how I think the scientific method should be taught. You start with the rigid formality early on, and I think that’s important because at that level you are really duplicating things that are already well known (e.g., Hypothesis: A ping pong ball will fall at the same rate as a bowling ball) and you can follow that straightforward methodology of designing an experiment, collecting data, and confirming or rejecting the hypothesis. Let’s put it bluntly: You don’t do cutting-edge science in middle school.
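To be concrete about what that rigid version boils down to, here is a toy sketch (made-up numbers, my own illustration) of the hypothesis-experiment-conclusion cycle for the falling-balls example:

```python
# A bare-bones version of that rigid cycle: state the hypothesis, "collect"
# the data, and accept or reject. The numbers are made up for illustration.
ping_pong_times = [0.47, 0.45, 0.46, 0.48, 0.46]  # seconds to fall ~1 m
bowling_times   = [0.45, 0.46, 0.45, 0.47, 0.46]  # seconds to fall ~1 m
timing_uncertainty = 0.02                          # assumed stopwatch error, s

mean_pp = sum(ping_pong_times) / len(ping_pong_times)
mean_bw = sum(bowling_times) / len(bowling_times)

# Hypothesis: both balls fall at the same rate, so their mean fall times
# should agree to within the measurement uncertainty.
if abs(mean_pp - mean_bw) <= timing_uncertainty:
    print("Hypothesis supported: the fall times agree within uncertainty.")
else:
    print("Hypothesis rejected: the fall times differ by more than uncertainty.")
```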

In high school — in a high school with good science education — you actually do start to learn more about the details of different ideas and concepts and solid answers are no longer necessarily known. You want to find out, so you might design an experiment after seeing something weird, and then gather data to try to figure out what’s going on.

That’s how science usually works in the real world, and I think it’s a natural progression from the basic process, and I still think that basic process is implicit, if not explicit, in how science is usually done.

I just got back from a major science conference two weeks ago, and I sat through several dozen talks and viewed several hundred poster presentations. I honestly can’t remember a single one that was designed like a middle school science fair with those key steps from the Scientific Method.

Of course, another aspect is that if we get rid of it, we can’t make comics like this that show how it’s “really” done (sorry, I forget where I found this):

How the Scientific Method Really Works

Final Thoughts

That said, this has been a ~1400-word essay on what I think about this subject. I don’t expect much to change in the near future, especially since – as the Wired article points out – this is firmly entrenched in the textbooks and in Middle School Science Fair How-To guides.

But, I’m curious as to what you think. Do you think the Scientific Method is useful, useless, or somewhere in-between? Do you think it should be taught and/or used in schools? Do you think it should be used in science fairs? Do you think professional scientists should use it more explicitly more often?

March 26, 2013

Why I Do What I Do


Introduction

First post back from the Lunar and Planetary Science Conference, probably the second-largest annual gathering of planetary scientists in the world, and largest of those with a non-Earth focus (December AGU being more terrestrial geology). Whilst I was away, The Star Spot podcast posted an interview I did with them a month or so ago. It focuses mainly on different ideas about Planet X, about which I’ve both written and podcasted extensively.

At the end of the interview, I was asked, effectively, why I do what I do. I admit I hadn’t had much sleep before the interview, and I didn’t exactly have my A-game on. And so I may have come off as being somewhat more self-centered than normal. I have been asked this a few times before, like last year when going back-and-forth with Mike Bara about that whole lunar ziggurat thing.

So, here’s the self-reflective but hopefully not as self-centered post. And the announcement of me being interviewed.

Being a Better Scientist

Let’s get this one out of the way because it’s what I mostly answered with when interviewed. One of the things needed to be a good scientist is the ability to ask good questions (let’s not get started on the “There’s no such thing as a bad question!” line, because there really are). You have to be able to ask those questions and then investigate them. You have to have a high threshold for evidence. In my opinion, a good scientist needs to set a high threshold for the acceptance of new conclusions and needs to think about what may be mitigating factors.

What I mean by this is that you have to be skeptical. At least one commenter on my blog likes to claim that being skeptical is the antithesis of being a good scientist. That particular person couldn’t be more wrong. While Mike Bara has definitely flung more mud at me, the harshest substantive critiques of what I’ve written have always come from reviewers of papers I’ve written.

That’s what we do: When we sit down to review a paper that describes someone’s data and conclusions, we question everything. Does their data make sense in light of what’s been done before? Do they reference what’s been done before? Does their data description match their diagrams? Does the way in which the data were gathered make sense? Are their conclusions supported by the data? Are they reaching in their conclusions beyond what they have evidence for?

And those are just the big-picture questions. Most reviewers will also bluntly tell you that your grammar is bad, that the paper is poorly written, the figures are illegible, and so-on. I once had a reviewer say that my use of a three-word term once in a 10,000-word paper made everyone in the field look stupid.

This bit of a digression gets back to my main point: Scientists are skeptical, whether they self-identify with that term or not. If you cannot learn how to support your conclusions, if you can’t think of holes others might poke in your arguments and pre-emptively fill those holes, and if you can’t deal with people picking apart your work, you’re not going to make it in science.

Every little claim that I look into, every argument by a young-Earth creationist or UFOlogist that I pick apart, helps me hone my own skills in sorting through evidence and figuring out how to back up my own claims better.

Public Outreach

Yes, for you, the public, who are not scientists, it is important that we convey good science, and that we not only NOT convey bad science but actively anti-convey it (is that a term?). Not just for the broader utopian goals of a more intellectual society that’s better informed, but let’s face it: It also comes down to money. Pretty much all astronomy-related science is supported by government grants. I should not have to compete with someone like Richard Hoagland for a grant to do research when his stuff is clearly pseudoscience. But, to someone who is uninformed and who doesn’t know the tools and methods and background of how science is done and what he’s claiming, Hoagland’s nonsense may seem just as valid as what I do.

Case in point is that the National Institutes of Health have their “Complementary and Alternative Medicine” division/institute/thing that actually DOES dole out money for studies into things that have been shown by the normal rules of evidence to not help treat or cure anything. Real doctors and medical researchers have to compete against chiropractors and homeopaths for a dwindling pool of federal funds. And that’s sad.

I hope that by doing what I do, I can help people realize what science is, what good science is, and how to tell it from bad science.

Applicability to Every-Day Life: Critical Thinking

What this really teaches is critical thinking. Let’s say that you didn’t believe me when I said that Planet X wasn’t going to cause a pole shift on December 21, 2012. I went through numerous posts on it and I got many people writing in the comments that we were all going to die. It’s late March 2013, so clearly they were wrong.

But, clearly they at least read some of what I wrote. It’s not always the conclusion that matters. But, what always matters is the process. The process that I try to go through in my blog and podcast when dissecting claims really boils down to critical thinking. No, not thinking critically (as in badly) about something, but thinking about it in detail and analyzing it in all ways possible.

That method of going through a claim in agonizing detail, showing what it would have to be in order to be correct, showing what it would mean for completely unrelated fields and applications (like, if magnetic therapy bracelets worked, you would explode when you go into an MRI), is – more than most other things – what I hope people get from the work I do here.

You probably aren’t going to come up against someone who’s going to make you decide between whether Billy Meier’s dinosaur photos are of real dinosaurs or of a children’s book and depending on your answer you stand to lose $1M or something like that. But, let’s say you’re going to invest money in a high-risk venture. You’ll be thrown a bunch of marketing hype. If you have the critical thinking tools and know where to look for the background knowledge, you could save yourself from quite a bit of financial loss. Perpetual motion scam companies do this all the time, trying to bilk rich people who don’t know any better out of their ¢a$h.

Final Thoughts

Skepticism, to me, is a process. It’s not a conclusion; it’s a starting point and a process. I use it in my every-day work, and the more I practice it, the better (hopefully) I get.

I also happen to be in a position where I know more than the average person about a narrow topic range. My hope is that by showing where people go wrong in their thinking, I can help others avoid mistakes. People often learn better by understanding how they got the wrong answer than being told the right answer. That’s the goal here: Understanding the critical thinking process to be better equipped to deal with things that might not be so obvious in the future.

December 9, 2011

Skeptiko Host Alex Tsakiris on Monster Talk / Skepticality, and More on How to Spot Pseudoscience


Introduction

A few weeks ago, I learned that the popular Monster Talk podcast would be interviewing Skeptiko podcast host, Alex Tsakiris. They ended up later posting it instead on their Skepticality podcast feed, and the interview also was episode 153 of Skeptiko; it came out about two weeks ago. The interviewers from Monster Talk are Blake Smith, Ben Radford, and Karen Stollznow (the last of whom I have the pleasure of knowing). Got all that?

If the name Alex Tsakiris sounds familiar but you can’t quite place it and you’re a reader of this blog, you probably recognize it from the two previous posts I’ve written about him on this blog. The first was on the purpose of peer-review in science because Alex (among others) was talking about how peer-review was a flawed process and also that you should release results early without having a study completed.

Fourteen months later, I wrote another post on Alex, this one being rather lengthy: “Skeptiko Host Alex Tsakiris: On the Non-Scientifically Trained Trying to Do/Understand Science.” The post garnered a lot of comments (and I’ll point out that Alex posted in the comments and then never followed up with me when he said he would … something he accuses skeptics of not doing), and I think it’s one of my best posts, or at least in the top 10% of the ~200 I’ve written so far.

This post should be shorter than that 2554-word one*, despite me being already in the fourth paragraph and still in the Introduction. This post comments not on the actual substance of Alex Tsakiris’ claims, but rather on the style and format and what those reveal about fundamental differences between real scientists and pseudoscientists. I’m going to number the sections with the points I want to make. Note that all timestamps below refer to the Skeptiko version.

*After writing it, it’s come out to 3437 words. So much for the idea it’d be shorter.

Point 1: Establishing a Phenomenon Before Studying It

About 8 minutes into the episode, Karen talks with Alex about psychics, and Alex responds, “If you’re just going to go out and say, as a skeptic, ‘I’m just interested in going and debunking a psychic at a skeptic [sic] fair,’ I’m gonna say, ‘Okay, but is that really what you’re all about?’ Don’t you want to know the underlying scientific question?”

Alex raises an interesting point that, at first glance, seems to make perfect sense. Why belittle and debunk the crazies out there when you could spend your valuable time instead investigating the real phenomenon going on?

The problem with this statement – and with psi in general – is that it is not an established phenomenon that actually happens. Psi is still in the phase where it has yet to be conclusively shown to exist under strictly controlled situations, and it has yet to be shown to be reliable in its predictions/tenets. By this, I mean that psi has yet to be shown to be repeatable by many independent labs and shown to be statistically robust in its findings. I would note the obvious that if it had been shown to be any of these, then it would no longer be psi/alternative, it would be mainstream.

Hence, what the vast majority of skeptics are doing is going out and looking at the very basic question of does the phenomenon exist in the first place? If it were shown to exist, then we should spend our time studying it. Until then, no, we should not waste time trying to figure out how it happens. This really applies to pretty much everything, including UFO cases. In that situation, one has to establish the validity by exploring the claims before one looks at the implications, just like with alleged psychics.

A really simple if contrived example is the following: Say I want to study life on Io, a moon of Jupiter. I propose a $750 million mission that will study the life there with cameras, voice recording, chemical sensors, the works. I would propose to hire linguists to try to figure out what the beings on Io are saying to the probe, and I’d propose to hire biologists to study how they could survive on such a volcanic world. NASA rejects my proposal. Why? Because no one’s shown that life actually exists there yet, so why should they spend the time and money to study something they don’t know is actually there? And, not only that, but Io is so close to Jupiter that it’s bathed in a huge amount of radiation, and it is so volcanically active that it completely resurfaces itself every 50 years, making even the likelihood of life existing there very slim.

Point 2: Appeal to Quantum Mechanics

I’ll admit, I have a visceral reaction whenever I hear a lay person bring up quantum mechanics as evidence for any phenomenon not specifically related to very precisely defined physics. At about 12.5 minutes into the episode, Alex states quite adamantly that materialism (the idea that everything can be explained through material things as opposed to an ethereal consciousness being needed) “is undermined by a whole bunch of science starting with quantum mechanics back a hundred years ago … .”

It’s really simply basically practically and all other -ly things untrue. Alex does not understand quantum mechanics. Almost no lay person understands quantum mechanics. The vast majority of scientists don’t understand quantum mechanics. Most physicists don’t understand quantum mechanics, but at least they know to what things quantum mechanics applies. Alex (or anyone) making a broad, sweeping claim such as he did is revealing their ignorance of science more than anything else.

Unless I’m mistaken and he has a degree in physics and would like to show me the math that shows how quantum mechanics proves materialism is wrong. Alex, if you read this, I’d be more than happy to look at your math.

You will need to show where quantum mechanics shows that consciousness – human thoughts – affects matter at the macroscopic level. Or, if you would like to redefine your terms of “consciousness” and “materialism,” then I will reevaluate this statement.

(For more on quantum mechanics and pseudoscience, I recommend reading my post, “Please, Don’t Appeal to Quantum Mechanics to Propagate Your Pseudoscience.”)

Point 3: Appeal to Individual Researchers’ Results Is a Fallacy

A habit of Alex is to relate the results of individual researchers who found the same psi phenomenon many different times in many different locations (as he does just after talking about quantum mechanics, or about 45 minutes into the episode where they all discuss this, or throughout the psychic detective stuff such as at 1:30:30 into the episode). Since I’ve talked about it at length before, I won’t here. Succinctly, this is an argument from authority, plain and simple. What an individual finds is meaningless as far as general scientific acceptance goes. Independent people must be able to replicate the results for it to be established as a phenomenon. The half dozen people that Alex constantly points to do not trump the hundreds of people who have found null results and the vast amount of theory that says it can’t happen (for more on that, see Point 6).

For more on this, I recommend reading my post on “Logical Fallacies: Argument from Authority versus the Scientific Consensus” where I think I talk about this issue quite eloquently.

It’s also relevant here to point out that a researcher may have completely 100% valid and real data, but that two different people could reach very different conclusions. Effectively, the point here, which is quite subtle, is that conclusions are not data. This comes up quite dramatically in this episode about 22.3 minutes in when discussing the “dogs that know” experiment; in fact, my very point is emphasized by Ben Radford at 24 min 05 sec into the episode. For more on this sub-point, I recommend reading Point 1 of my post from last year.

Point 4: Investigations Relying on Specific Eyewitness Memories Decades After the Fact = Bad

The discussion here starts about 36 minutes into the episode, stops, and resumes briefly about 50 minutes in, and then they go fully into it at 1 hour 13 minutes in*. For background, there is a long history of Alex looking into alleged psychic detectives, and at one point he was interviewing Ben Radford and they agreed to jointly investigate Alex’s best case of this kind of work and then to hash out their findings on his show. This goes back to 2008 (episode 50), but it really came to a head with episode 69 in mid-2009 where they discussed their findings.

Probably not surprisingly, Alex and Ben disagreed on the findings and what the implications were for psychic detectives (Nancy Weber in this case). If you are genuinely interested in this material, I recommend listening to the episodes because there is much more detail in there than I care to discuss in this quickly lengthening post. The basic problems, though, were really two-fold — Ben and Alex were relying on police detectives remembering specific phrases used by the alleged psychic from a case almost 30 years old (from 1982), and they disagreed on what level of detail counted as a “hit” or “miss.”

For example, when Ben talked with the detectives, they had said the psychic told them the guy was “Eastern European” whereas they had separately told Alex that she had told them the guy was “Polish.” Alex counted it as a hit, Ben a miss. I count it as a “who knows?” Another specific one they talk about in this interview is “The South” versus “Florida” with the same different conclusions from each.

To these points, I think both scientists and skeptics (and hopefully all scientists are appropriately skeptical, as well) can learn a lot when looking into this type of material.

First, I personally think that this was a foolish endeavor from the get-go to do with an old case. Effectively every disagreement Ben and Alex had was over specific phrasing, and unless every single thing the alleged psychic says is recorded, you are never going to know for sure what she said. Human memory simply is not that reliable. That is a known fact and has been for many years (sources 1 and 2, just to name a couple). Ergo, I think the only proper way to investigate this kind of phenomenon where you have disagreements between skeptics and other people is to wait for a new case and then document every single part of it.

Second, one needs to determine a priori what will count as a hit or miss (“hit” being a correct prediction, “miss” being wrong). In the above example, if they had agreed early on that Nancy Weber only needed to get the region of the planet correct, then it would be a hit. If she needed to get the country (first example) or state (second example) correct, it would be a miss under what the detectives told Ben. This latter point is the one that is more relevant in scientific endeavors, as well. Usually this is accomplished through detailed statistics in objective tests, but in qualitative analyses (more relevant in things like psychiatric studies), you have to decide before you give the test what kinds of answers will be counted as what, and then you have to stick with that.
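As a sketch of what deciding ahead of time can look like in practice, here is a toy scoring scheme; the questions, acceptable answers, and predictions are hypothetical stand-ins of my own, not anything from the actual case:

```python
# Hypothetical pre-registered scoring: the acceptable answers for each
# question are written down BEFORE any predictions are evaluated, and
# anything not on the list is a miss -- no after-the-fact reinterpretation.
preregistered_criteria = {
    "suspect_origin":   {"polish"},    # country required, not just a region
    "suspect_location": {"florida"},   # state required, not just "the South"
}

claimed_predictions = {
    "suspect_origin":   "Eastern European",
    "suspect_location": "The South",
}

def score(criteria, predictions):
    """Return question -> 'hit'/'miss' using only pre-registered answers."""
    return {
        question: ("hit"
                   if predictions.get(question, "").strip().lower() in accepted
                   else "miss")
        for question, accepted in criteria.items()
    }

print(score(preregistered_criteria, claimed_predictions))
# {'suspect_origin': 'miss', 'suspect_location': 'miss'}
```

Under these stricter criteria both answers score as misses; under looser, region-level criteria both would score as hits. The point is that the choice has to be made before anyone knows which way it cuts.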

It should be noted that hits vs. misses was not the actual crux of the disagreement, however. It was the level of specificity the psychic claimed (“Polish”) versus what the detectives told Ben they remembered (“Eastern European”), and then the broader picture of how well that information will help solve a case.

I actually encounter the same thing when grading essays. This is one reason why teachers in science classes like multiple-choice questions more than essays (besides the time it takes to grade): It’s much more quantitative to know the answer is (A) as opposed to parsing through an essay looking for a general understanding of the question being asked.

*I’ll warn you that this goes on for about a half hour and it’s somewhat difficult to listen to with all the shouting going on. If you’re scientifically/skeptically minded, listening to this is going to make you want to smack Alex. If you’re psi/alternative minded, listening to this is going to make you want to smack Ben. This is why I try not to get into the specifics of the exact case but rather point out the process and where the process is going wrong here.

Point 5: Confusing Different Causes for a Single Effect

About 41 minutes into the episode and then for several minutes on, the conversation turned to the idea that psychics help with the grieving process. The reaction from me (and then the hosts) was pretty much, “Duh!” As Blake points out just before the 43 minute mark, “How many times did the [psychic] say, ‘Oh gee! That person’s in Hell!'” Thus, probably, not helping the grieving process.

The conversation steered along the lines of the three hosts of Monster Talk trying to point out that yes, the effect of the alleged psychic talking with the grieving person is that the grieving person felt better. But was the cause (a) because the person was actually psychic, or (b) because the person was telling the grieving people what they wanted to hear that their loved one was happy and still with them and they would join them when they died?

Alex obviously is of the former opinion (after pulling out yet another argument from authority that I talked about in Point 3 above). The others are of the latter. But the point I want to pull from this is something that all scientists must take into account: If they see an effect, there could be causes other than or in addition to their own preferred explanation. That’s really what this case that they talk about boils down to.

For example, we want to know how the moon formed. There are many different hypotheses out there including it formed with Earth, it was flung off Earth, it was captured, it was burped out, or a Mars-sized object crashed into Earth and threw off material that coalesced into the moon. I may “believe” in the first. Another person may in the last. We both see the same effect (the moon exists and has various properties), but how we got that effect probably only had one cause. Which one is more likely is the question.

Point 6: It’s Up to the Claimant to Provide the Evidence

I know I’ve discussed this before, but I can’t seem to find the post. Anyway, this came up just before the 52 minute mark in the episode, that Alex frequently states it’s up to the debunkers to debunk something, not for the claimant to prove it. (To be fair, in this particular interview, Alex kinda says he never said that at first, he only says it when it’s a paradigm shift kinda thing that’s already shifted … which it so has not in this case. But then he does say it …)

Blake: I think most skeptical people believe that whenever you’re making a claim that you have the burden of proof every time. And it never shifts …

Alex:… And they’re wrong because that’s not how science works. Science works by continually asking hard, tough questions and then trying to resolve those the best you can.

I’m really not sure where Alex gets this first sentence (the second sentence is correct, but it and the first are not mutually exclusive). It’s simply wrong. In no field is this a valid approach except possibly psi from Alex’s point of view. If you make a claim, you have to support it with evidence that will convince people. If I say I can fly, it shouldn’t be up to you to prove I can’t, it should be up to me to prove I can. It’s that simple. And Alex gets this wrong time after time.

This is further evidence (see Point 2 above) that Alex has no actual concept of science and how it works. And before you accuse me of ad hominems, I’m stating this in an objective way from the data — his own statements that have not been quote-mined (go listen to the episodes yourself if you don’t believe me).

But it continues:

Ben: So who does have the burden of proof?

Alex: Everybody has the burden of proof and that’s why we have scientific peer-reviewed journals, the hurdles out there that you have to overcome to establish what’cha know and prove it in the best way you can. It gets back to a topic we kinda beat to death on Skeptiko and that’s this idea that also hear from skeptic [sic], ‘Extraordinary claims require extraordinary proof.’ Well of course that’s complete nonsense when you really break it down because scientifically the whole reason we have science is to overcome these biases and prejudices that we know we have. So you can’t start by saying ‘Well, I know what’s extraordinary in terms of a claim, and I know what would be extraordinary in terms of a proof,’ well that’s counter to the idea of science. The idea of science is it’s a level playing field, everybody has to rise above it by doing good work and by publishing good data.

(Ben Radford corrects Alex on this point about 54.7 minutes into the episode; feel free to listen, but also know that the points he makes are not the ones I do below. Well, maybe a bit around 56 minutes.)

I know I’ve talked about this before, but not in these exact terms. What Alex is talking about – and getting wrong – without actually realizing it is how a hypothesis becomes a theory and the lengths one has to go to to overturn a theory. That’s what this nugget boils down to.

If you’re not familiar with the basic terminology of what a scientist means by a fact, hypothesis, theory, and law, I recommend reading one of my most popular posts that goes into this. The issue at hand is that it is effectively established theory that, say, people cannot psychically communicate with each other (yes, I know science can’t prove a negative and there’s no Theory of Anti-Psi, but go with me on this; it’s why I said “effectively”). Even if it’s not an exact theory, there are others that are supported by all the evidence that show this isn’t possible or plausible.

Ergo, to overturn all those theories that together indicate psi can’t happen, you have to have enough convincing and unambiguous data to (a) establish your phenomenon and (b) explain ALL the other data that had backed up the previous theories and been interpreted to show psi is not real.

This is summarized as, “Extraordinary claims require extraordinary evidence.” That’s the phrase, not “proof,” which in itself shows yet again that Alex misses some fundamental tenets of science: You can never prove anything 100% in science, you can only continue to gather evidence to support it. “Proof” does not exist, just like “truth,” as far as science is concerned.
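For what it’s worth, one common way to put “extraordinary claims require extraordinary evidence” on a more quantitative footing (my gloss here, not anything said in the episode) is Bayes’ theorem:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

If the prior probability P(H) of a claim like psi is tiny, because it conflicts with a large body of well-supported theory, then the evidence E has to be far more likely under the claim than under mundane alternatives (chance, bias, fraud) before the posterior P(H|E) climbs to anything worth taking seriously. Ordinary evidence that is nearly as easy to explain without psi barely moves it.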

Final Thoughts

Well, this post ended up longer than I had initially planned, and it took several hours, not least because I listened to the episode twice and it’s almost two hours long. I hope that through this I’ve been able to illustrate several points that everyone needs to watch out for when evaluating claims.

To quickly recap:

  1. You need to establish that a phenomenon exists before studying it.
  2. Don’t appeal to quantum mechanics unless you actually know what quantum mechanics is.
  3. A single or small group of researchers’ results are not convincing, no matter who they are.
  4. If you want to study something that supposedly happens every day, don’t choose an example that’s 30 years old.
  5. A single effect can have multiple or different causes, including one that you don’t like.
  6. The person making the claim has the burden of evidence … always.

In the end, I’ll admit that this was personally hard to listen to in parts. I took issue with Alex constantly refusing to admit certain things, like the detectives saying one thing to him and another to Ben, and then saying Ben was lying about it and that he should say it (what he didn’t say) to the detectives’ faces. That was just hard to listen to. Or Alex’s refusal to directly answer some questions in ways that would have made a politician proud. Another point that was hard to listen to but oh so sweet in the end was Alex claiming that Karen had invited him on, but Karen said that Alex had invited himself on. Alex insisted that wasn’t true and said Karen was wrong and he had the transcript … and then a few seconds later the transcript was read and Alex clearly had invited himself onto their show.

But, those are my personal and more emotional observations after listening to this. Do those change what we can learn about the scientific process and where pseudoscientists go wrong? No. Alex Tsakiris continues to unwittingly provide excellent examples of how not to do science.

November 30, 2011

A Follow-Up on How Science Works versus Creationism


Introduction

This is a short follow-up to my last post, “Mistakes in Science Apparently Means Creationism Is True.” In that fine piece of blogging, I talked some about how science is a process where we continually revise our knowledge based upon new observations and discoveries. Contrasted with creationism.

It was therefore apropos that I ran across this article on Ars Technica, “How a collapsing scientific hypothesis led to a lawsuit and arrest.”

Article

The article in question was written by John Timmer, a faculty member at Cornell Medical College. He got his Ph.D. in Molecular and Cell Biology (like my dad!) from University of California, Berkeley (not like my dad). So I’d say he’s reasonably qualified – while avoiding an argument from authority – to write about this topic.

In his article, Dr. Timmer tells the story of a small group of researchers who thought they found a retrovirus associated with prostate cancer, and they later even linked it to people with chronic fatigue syndrome. I’m about as qualified to talk about medicine as any other lay person (so not much), but I can gather that this would be pretty darn important. A retroviral link means (a) a good test to see in whom this may develop, and (b) a possible cure if we could get rid of that retrovirus. Their work was published in one of the two leading journals in the world, Science.

Then problems developed. I don’t want to take too much away from Dr. Timmer’s article, which I highly recommend reading. But, suffice to say, other people investigated these claims and tried to verify them. Nothing less than the country’s blood supply was actually at stake if their findings bore out. The problem was that no one could replicate them. And the main researcher’s (Judy Mikovits’) co-authors started to walk away. Mikovits didn’t; she ended up being fired for insubordination after refusing to share cell cultures as required, and then was arrested for stealing her lab notebooks and other things.

So, as the title sums up, a collapsing hypothesis led to a lawsuit and arrest, but it was also a good moment to illustrate how science works, especially in contrast with creationism.

How Science Works

Readers of my blog will recognize that I’ve said this before, but it’s important to get across. So I’ll try to shorten it this time. The scientific process requires duplication of findings. It requires testing of claims. It requires questioning and critiquing others’ results. It requires revision.

All of these requirements are how and why the process of science is incremental and self-correcting. Mikovits’ work made it into one of the most prestigious scientific journals in the world. That does not mean everyone believed it, nor that it was “true.” Less than five years later, the paper has been retracted and the researcher has been pretty much disgraced in the scientific community and is facing significant legal issues due to misconduct and theft. The study was shown to be wrong. The scientific process is to thank for showing that.

(And now the obligatory “in contrast, creationism … .”) In contrast, creationism generally requires putting your fingers in your ears and shouting, “La-la-la, I can’t hear you!” when something contradicts their favored position. Or, they will accept the latest study whole-heartedly if it fits their paradigm, but not admit it was wrong if later retracted or shown to be wrong or misunderstood. I pointed to Earth’s magnetic field strength last time, this time I’ll choose comets and simply link to my blog (here, or here), or podcast (here).

Final Thoughts

Perhaps the worst part of the story in question is that a whole new subset of medical pseudoscience has cropped up because of Mikovits’ work. Before she came along, practically no one suffering from chronic fatigue syndrome thought there was a retroviral cause. Now some do, and “alternative” practitioners offer to test them for the non-existent retrovirus or offer antiretroviral agents as “cures.” Even though it’s now come out that the original study was simply wrong. But, unfortunately, that doesn’t change things once the idea is out there.

July 17, 2010

Should the Public Be Able to Choose What Science to Believe?


Introduction

This blog post is about a statement made by Dr. Caroline Crocker on the ID The Future podcast episode from July 12, 2010, entitled, “Setting the Record Straight with Caroline Crocker.”

Got that straight? This is NOT about the Intelligent Design movement, it is NOT about evolution versus creationism versus ID, it is NOT about the movie Expelled, nor is it about Caroline Crocker.

Setting Up the Question

In the podcast episode, Dr. Crocker made an off-hand remark (starting about 7 min 15 sec into the episode):

“I also believe that freedom, which is foundational in our society, requires people to have choices. And if people are not given options – that is they’re not told the whole scientific truth in as much as they can understand it and most people I find can understand if you just explain – then they don’t have any choice! And I think it’s very important that people are given complete explanations, and that’s actually one reason I set up the American Institute for Technology and Science Education, so that people would have an opportunity to hear scientific options and to have a choice.”

That’s a long paragraph, about 30 seconds of speech, but what it really boils down to is this: Dr. Crocker thinks (based upon my understanding of what she stated) that people should be told the entire body of science behind something (i.e., she obviously is talking about evolution, but it would extend to any science). Once they are told this, which she believes they can understand, then they should be allowed to make their own choice about what they want to believe.

Hence the title of this blog post: Should the public be able to choose what science to believe?

An Example

I have perhaps written the title in a confrontational manner, more so than need be. I’m not trying to set up a post where I say that scientists from on high should pass down edicts of what is Truth and those must be followed without question. What I am asking, rather, is whether the lay, non-scientifically trained public are in a position to form an educated opinion on a technical subject after having the basics explained to them for a few minutes.

Let’s have an example, and since this is an astronomy blog, we’ll take an example from astronomy. Let’s take Earth’s moon and how it may have formed.

Decades ago, the original theory (yes, I’m using that word correctly) was that Earth’s moon formed the same way Earth did, in Earth’s orbit, from the solar nebula. But that had problems with it (like it couldn’t explain the composition differences). The second theory was that it got captured, as we think Mars’ moons were captured and many of the giant planets’ moons were captured asteroids. But that has problems because there’s no good way to get rid of the extra velocity. The third one, this time I’d classify as a hypothesis, was the “fission” idea where Earth was spinning really quickly and it basically spun off the moon out of the Pacific Ocean. This, however, required a ridiculously high spin rate and didn’t take into account plate tectonics.

Finally, we now have the fourth theory, which is pretty well established and has been nicknamed “The Big Splash.” This is where a Mars-sized impactor hit Earth early on, nearly destroying Earth, but throwing up a debris cloud that formed the moon in Earth orbit. This explains almost all the characteristics we observe of the moon.

But last year another hypothesis was proposed, one that some people have termed, “The Big Burp” (yeah, astronomers are real creative … everything is the “Big” something). The idea here is that, deep inside Earth’s mantle, a buildup of radioactive material suddenly went critical and there was a spontaneous nuclear reaction, blowing out a chunk of Earth that formed the moon. Kinda similar to the fission idea, but a different mechanism for the moon’s ejection.

As anyone who reads my blog semi-regularly knows, I just finished teaching an introductory astronomy class for non-majors. This was a solar system class, and we discussed the formation of Earth’s moon in about a third of a class period. I briefly went through the historic ideas and the problems with them in order to show why we think the “Big Splash” is the best model. I didn’t go into the “Big Burp” at all because (a) it is a very new proposal, and (b) it was published in a low-review journal after being rejected from mainstream ones.

When discussing all these different formation models, I didn’t go very deep into them. I explained them in about as much detail as I did above, with basically a one-sentence description. Then I went over some of the pros and cons for each. And when we got to the Big Splash, I said that this is the one that happened, this is THE way the moon formed, and they all scribbled it down, stared blankly, were dozing on their desks, or trying to hide that they were txting on their cell phones.

If Dr. Crocker’s position is to be carried over to this example, and I believe wholeheartedly this is what she is arguing, then I did my students a disservice. I should have gone into equal detail for each proposal. I should have explained thoroughly the pros and cons for each. I should definitely have included the Big Burp. And when all was said and done, after spending 45 minutes going through these, I should have said, “Now you have the information, it is up to you to make up your own minds as to what happened and how the moon formed.”

That’s right. Without any of the theoretical backing, without an understanding of three-body dynamical systems (problem with Theory #2), without an understanding of chemistry and mineralogy (problem with #1, #3), without an understanding of basic Newtonian mechanics and material strength (problem with #3), or nuclear forces and the structure of Earth (problem with #5), after explaining to the students the basics of each I am supposed to let them make up their own minds.

My Thoughts

I think if you have much perceptive ability you can tell what I think the answer should be to my rhetorical question based upon my last two paragraphs. Scientists in any given field of study will reach conclusions about their field based upon a thorough understanding of the data, an understanding that pretty much can ONLY come with studying it for years and years. No research field exists in a vacuum (despite what some “amateur scientists” will claim), and you have to have a lot of background information from a broad base before you can actually understand a problem.

As a planetary scientist, I have a broad, 10-year background in physics, geology, and astronomy, and that background allows me to make an informed conclusion about the state of the science and which lunar formation proposal is the most likely to represent what really happened. If it were almost any other field, I wouldn’t even go into the historical ideas, I would just jump in and say, “The ‘Big Splash’ is how the moon formed” and then explain what that means (teaching astronomy is rather unique in the sciences because we do A LOT of history of the field). But, if we were to extend Dr. Crocker’s thoughts to a field other than evolution (which is obviously what she is talking about), then I would be infringing upon my students’ right to make up their own mind without my influencing their decision.

Okay, a Teensy Bit of Ridicule

I was trying to be fairly objective and ignore evolution etc. in this, but I think I really should at least mention the whole larger context for this and the obvious case to which Dr. Crocker wants this to apply. Dr. Crocker appears to be an avid advocate of the whole “Teach the Controversy” approach when it comes to teaching evolution. She thinks that students should be presented with evolutionary theory at the basic level that they already are, but then also taught the problems with it that are normally not talked about until you get to a graduate level of study. The reason for the normal delay in teaching the problems is that they are minor problems in the finer layers of evolutionary theory. For example, we know that the large cake of evolution is perfectly fine and holds its own; the question is whether there are ripples in the icing on top that can’t be smoothed away yet.

Anyway … besides teaching evolution and its problems, the whole other side to “Teach the Controversy” is that there should also be an equivalent amount of time devoted to intelligent design and creationism since they also have something to say about how different species came about. And then the students should be able to decide for themselves what to believe.

It would be the same as with my moon example: I explain each hypothesis and also throw in that on the fourth day God created the moon by magic (Genesis 1:16). And then let them decide, and on the test when I ask them, not count any response wrong.

I gotta say, I think that’s silly. And it’s irresponsible. And it does the students a disservice because it makes them think that all ideas are equal, when in fact they’re not. The reason the majority of scientists who study this think that the moon formed in the “Big Splash” is because it best explains the observational evidence without resorting to something supernatural/alien/whatever.

Final Thoughts

So, does it make sense that the public should have all sides explained to them equally, be assumed to understand them and all the background, and then be allowed to make up their own minds and have it be just as valid a conclusion as anyone else’s? I think when you actually look at the issue in this way, fully exploring the consequences of the proposal, then the answer is reasonably obvious, and it is a resounding, “No.”

But when it’s simply phrased as a “let’s give people options because that’s what a free society does,” it seems so deceptively simple. Until you follow through with what it actually would mean.

I think I’ll close with a statement my former officemate made that I have repeated several times on this blog: Science is not a democracy, it is a meritocracy. Only the best ideas survive: they become the most widely accepted because, through their ability to explain the observational evidence, they convince the people who know how to evaluate them.

May 29, 2010

Skeptiko Host Alex Tsakiris: On the Non-Scientifically Trained Trying to Do/Understand Science


Preamble

First, let me give one announcement for folks who may read this blog regularly (hi Karl!). This may be my last post for about a month or so. As you may remember from my last post, I will be teaching all next month, June 1 through July 2, and the class is every day for 95 minutes. I have no idea how much free time I may have to do a blog post, and I have some other projects I need to finish up before the end of the month (I’m also a photographer and I had a bride finally get back to me about photos she wants finished).

Introduction

I have posted once before about Skeptiko podcast host Alex Tsakiris in my post about The Importance of Peer-Review in Science. The purpose of that post was primarily to show that peer review is an important part of the scientific process, contrary to what the host of said podcast had claimed.

Now for the official disclaimer on this post: I do not know if Alex is a trained scientist. Based on what he has stated on his podcast, my conclusion is that he is not. What I have read of his background (something like “successful software entrepreneur” or along those lines) supports that conclusion. However, I don’t want to be called out for libel just in case and so that is my disclaimer.

Also, I am not using this post to say whether I think near-death experiences are a materialistic phenomenon or point to a mind-brain duality (mind/consciousness can exist separately from brain). That is NOT the point of this post and I am unqualified to speak with any authority on the subject (something I think Alex needs to admit more often).

Anyway, I just completed listening to the rather long Skeptiko episode #105 on near-death experiences with Skeptics’ Guide to the Universe host Dr. Steven Novella (see Points 2 and 3 below for that “Dr.” point). I want to use that episode to make a few points about how science is done that an (apparently) non-scientifically-trained person will miss. This post is not meant to be a dig/diss against so-called “citizen science,” but rather about the pitfalls of which non-scientists should be aware when trying to investigate pretty much ANY kind of science.

Point 1: Conclusions Are Not Data

Many times during the episode’s main interview and after the interview in the “follow-up,” Alex would talk about a paper’s conclusions. “The researchers said …” was a frequent refrain, or “In the paper’s conclusions …” or even “The conclusions in the Abstract …” I may be remembering incorrectly, perhaps someone may point that out, but I do not recall any case where Alex instead stated, “The data in this paper objectively show [this], therefore we can conclude [that].”

This is a subtle difference. Those of you who may not be scientifically trained (or listened to Steve’s interview on the episode) may not notice that there is an important (though subtle) difference there. The difference is that the data are what scientists use to make their conclusion. A conclusion may be wrong. It may be right. It may be partially wrong and partially right (as shown later on with more studies … more data). Hopefully, if there was not academic fraud, intellectual dishonesty, nor faulty workmanship (data gathering methods), the actual data itself will NEVER be wrong, just the conclusions from it. In almost any paper — at least in the fields with which I am familiar — the quick one-line conclusions may be what people take away and remember, but it’s the actual data that will outlive that paper and that other researchers will look at when trying to replicate, use in a graduate classroom, or argue against.

I will provide two examples here, both from my own research. The first is from a paper that I just submitted on using small, 10s to 100s meter-sized craters on Mars to determine the chronology of the last episodes of volcanism on the planet. In doing the work, there were only one or two people who had studied it previously, and so they were obviously talked about in my own paper. Many times I reached the same conclusion as they did in terms of ages of some of the volcanos, but several times I did not. In those cases, I went back to their data to try to figure out where/why we disagreed. It wasn’t enough just to say, “I got an age of x, she got an age of y, we disagree.” I had to look through and figure out why, and whether we had the same data results and if so why our interpretations differed, or if our actual data differed.

The second example, which is a little better than the first, is with a paper I wrote back in 2008 that was finally published in a special edition of the journal Icarus in April 2010 (one of the two main planetary science journals). The paper was on simulations I did of Saturn’s rings in an attempt to determine the minimum mass of the rings (which is not known). My conclusion is that the minimum mass is about 2x the mass inferred from the old Voyager data. That conclusion is what will be used in classrooms, what I have already seen used in other people’s presentations, and what I say at conferences. However, people who do research on the rings have my paper open to the data sections, and I emphasize the “s” because in the paper, the data sections (plural) span about 1/2 the paper, the methods section spans about 1/3, and the conclusions are closer to 1/6. When I was doing the simulations, I worked from the data sections of previous papers. It’s the data that matters when looking at these things, NOT an individual (set of) author(s).

Finally for this point, I will acknowledge that Alex often repeats something along the lines of, “I just want to go where the data takes us.” However, saying that and then relying only on a paper’s conclusions are not mutually compatible, and Steve pointed that out at least twice during the interview. At one point in the middle, he exclaimed (paraphrasing), “Alex, I don’t care what the authors conclude in that study! I’m looking at their data and I don’t think the data supports their conclusions.”

Point 2: Argument from Authority Is Not Scientific Consensus

In my series on logical fallacies, which I got about half-way through at the end of last year, I specifically held off on the Argument from Authority because I wanted to spend more time contrasting it with the scientific consensus. I still intend to write that post, but until then, this is the basic run-down: the Argument from Authority is the logical fallacy whereby someone effectively states, “Dr. [So-and-So], who has a Ph.D. in this and is well-credentialed and knows what they’re doing, says [this], therefore it’s true/real.”

If any of my readers have listened to Skeptiko, you are very likely familiar with this argument … Alex uses it in practically EVERY episode, MULTIPLE times. He will often present someone’s argument as being from a “well-credentialed scientist” or from someone who “knows what they’re doing.” This bugs the — well, this is a PG blog, so I’ll just say it bugs me to no end. JUST BECAUSE SOMEONE HAS A PH.D. DOES NOT MEAN THEY KNOW WHAT THEY’RE DOING. JUST BECAUSE SOMEONE HAS DONE RESEARCH AND/OR PUBLISHED DATA DOES NOT MEAN THEIR CONCLUSIONS ARE CORRECT OR THAT THEY GATHERED THEIR DATA CORRECTLY.

Okay, sorry for going all CAPS on you, but that really cannot be said enough, and Alex simply, plainly, and obviously does not seem to understand it. That is clear if you listen to practically any episode of his podcast, especially any of the “psychic dogs” or “global consciousness” episodes. He used the argument several times in #105, too, including one instance where he explicitly stated that a person was well-credentialed and therefore knows what they’re doing.

Now, very briefly, a single argument from someone does not a scientific consensus make. I think that’s an obvious point, and Steve made it several times during the interview: there is no consensus on this issue, and individual arguments from authority are just that, arguments from authority. You need to look at the data and methods before deciding for yourself whether you objectively agree with the conclusions.

Edited to Add: I have since written a lengthy post on the argument from authority versus scientific consensus that I highly recommend people read.

Point 3: Going to Amazon, Searching for Books, to Find Interview Guests

Okay, I’ll admit this has little to do with the scientific process on its face, but it illustrates two points: first, that Alex doesn’t seem to understand the purpose of the scientific literature, and second, that bypassing that literature and doing science by popular press is one of the worst ways to go about it, and one that strikes many “real” scientists as very disingenuous. I’ll explain …

First, I will again reference my post, “The Importance of Peer-Review in Science.” The title is fairly self-explanatory, and I will now assume that you’re familiar with its arguments. In fact, I just re-read it (and I have since had my own issues fighting with a reviewer on a paper before the journal editor finally said “enough” and took my side).

To set the stage, Alex claims in the episode:

“Again, my methodology, just so you don’t think I’m stacking the deck, is really simple. I just go to Amazon and I search for anesthesia books and I just start emailing folks until one of them responds.”

As I explained there, peer-reviewed papers are picked apart by people who study the same thing you do and are familiar with other work in the area. A book is not. A book is read by the publishing company’s editor(s) – unless it’s self-published, in which case it’s not even read by someone else – and then it’s printed. There is generally zero peer-review for books, so Alex going to Amazon.com to find someone who’s “written” on the subject of near-death experiences will not get an accurate sampling. It will get a sampling of people who believe that near-death experiences show mind-brain duality, because …

Published books on a fringe “science” topic are generally written by people who have been wholeheartedly rejected by the scientific community for their methods, their data-gathering techniques, and/or conclusions that are not supported by their data. But they continue to believe (yes, I use the word “believe” here for a reason) that their interpretations/methods/etc. are correct. So, instead of learning from the peer-review process – tightening their methods, bringing in other results, and looking at their data in light of everything else that’s been done – they publish a book that simply bypasses the last few steps of the scientific process.

Not to bring in politics, but from a strictly objective standpoint, this is what George W. Bush did with the US’s “missile defense” system. Test after test failed and showed it didn’t work. Rather than going back, trying to fix things, and testing again, he just decided to build the thing and stop testing.

Point 4: Confusing a Class of Outcomes with a Single Cause

This was more my interpretation of what Alex did in the interview and what Steve pointed out many times, and it is less generalizable to the scientific process, but it applies nonetheless.

Say, in cooking, you serve up a pizza. The pizza is the “class of experiences” here, analogous to the whole set of things that make up a near-death experience (NDE). The toppings of your pizza are the individual experiences within the NDE. Pizzas will usually have cheese; NDEs will usually have a sense of well-being. Pizzas may more rarely have onions; NDEs may more rarely have a tunnel of white light associated with them. You get the idea.

Now, from the impression I got, Alex seemed to claim throughout the episode that there is only one way to make a pizza: have an NDE. Steve argued that there are many different ways to make a pizza, and that all of those different techniques will in general lead to something that looks like a pizza.

Point 5: Steve’s a Neurologist, Alex Is Not

I need to say before I explain this point that I am NOT trying to say that you need a Ph.D. in the topic to do real science. I do not in ANY WAY mean to imply that science is an elitist thing where only people “in the club” can participate.

That said, I really am amazed by Alex arguing against people who have actually studied the subject for decades. If you are a non-scientist, or even if you are a scientist who has not studied the topic at hand (like, gee, me talking about near-death experiences when I’m an astrophysicist/geophysicist), then you need to make darn sure that you know what the heck you’re talking about. And you need to be humble enough that, when the person who has actually studied this says you’ve made a mistake, you take that very seriously and look again at what you thought was going on. The probability that you, rather than the expert in the field, have made a mistake or misunderstood something is fairly high.

Again, this is not my attempt to backtrack and commit an argument from authority fallacy myself. There is a difference between making an argument from authority fallaciously and listening to what an authority on the subject says, taking it into account, and re-examining your conclusions. It seriously amazes me how much Alex argued against Steve as if Alex were the expert in neurology. It caused him to simply miss many of the points and arguments Steve was making, as evidenced by Steve saying something and then needing to repeat the same argument 20 minutes later because Alex had ignored it, buoyed as he was by his interviews with previous pro-duality guests.

Final Thoughts

As I’ve stated, the purpose of this post is not to discuss whether NDEs show a mind-brain duality or have a purely materialistic explanation. The purpose is to point out that the methods Alex uses are fallacious, and while I know that people have pointed this out to him before, it seems to have made very little impact on the way he argues. I believe this is due in part to confirmation bias: he has definitely made up his mind on whether or not psi-type phenomena exist. But I am also fairly sure it’s because Alex lacks any kind of formal training in science. Because of that, he makes these kinds of mistakes – at least originally – without knowing any better. Now that they have been pointed out to him, I think it’s intellectually dishonest to keep making them, but again, that’s beyond the purpose of this post.

So, to wrap this all up, non-scientists take heed! Avoid making these kinds of mistakes when you try to do or to understand science yourself. Make sure that you look at the data, not just the conclusions from a paper. Don’t make arguments from authority. Remember that popular books are not the same as peer-reviewed literature. And keep in mind there can be (a) multiple explanations and (b) multiple ways to reach an end point.

July 31, 2009

What Is Science, Its Purpose, and Its Method?


Introduction

Following up on my post “Terminology: What Scientists Mean by “Fact,” “Hypothesis,” “Theory,” and “Law”,” as well as a recent planetarium lecture I gave on young-Earth creationism in astronomy, I thought it would be valuable to go over specifically what the purpose of science actually is and how science goes about, well, science.

I need to make three things very clear up-front: First, I am not a philosopher. I have not taken any philosophy classes, nor have I taken a philosophy of science class (though I think I probably should).

Second, even though “science” is an inactive noun – where I use the word “inactive” to mean that it is a process and a mode of thinking – I will be using it throughout this post as an “active” noun, personifying it to actually “do” things. This is how it’s used in popular culture, and I see no real reason to fight the colloquial usage in this post.

Third, this post is going to serve a dual purpose by contrasting the scientific method with the creationist “method” in order to show how science differs in key, important ways.

Dictionary Definitions of Terms

The dictionary that Apple kindly provides on its computers defines “science” as: “The intellectual and practical activity encompassing the systematic study of the structure and behavior of the physical and natural world through observation and experiment.” There are three sub-definitions, but that main one emphasizes that “science” is an activity, a study, and one that looks for natural explanations.

My only qualm with this definition is that I would add not only what science does and how it operates, but also its purpose: “The purpose of science is that once it has provided an explanation for the physical and natural world, it allows one to use that explanation to make predictions.” I know that when I stand on one foot, if I don’t shift my weight over that foot, I will likely fall unless I support myself. That is because I have repeated observations that tell me this. Without that predictive power (that in the future I will fall if I don’t shift my weight), all those previous observations are fairly worthless.

In this section, I also want to define “dogma.” Using the dictionary again: “A principle or set of principles laid down by an authority as incontrovertibly true.”

Now, hopefully I’m stating the obvious, but “dogma” and “science” are not equivalent. In fact, I know that I’m not stating the obvious because there are many, many, many people out there who believe that science simply leads to dogmatic facts/ideas/theories, etc. This is not true. And in the rest of this post I will show you why.

A Look at the Creationist “Science” Method

Before I say anything else, I want to emphasize that this is not a straw man argument, an exaggeration, or anything else that might lead you to think it is not accurate. This section really is how many – if not most or all – biblical literalists view science, and this is how they decide what science to incorporate into their worldview.

Ken Ham, the CEO of “Answers in Genesis” (a young-Earth creationist think-tank in the US, now separate from the Australian group of the same name), has explicitly stated that one must start with the Bible, while others at AiG have stated that even logic and science themselves flow from the Bible, for without it you couldn’t even have the tools that science uses.

Now that that’s out of the way, let’s look at a flow chart:

Flow Chart Showing Faith-Based 'Science'

The above flow chart shows the basic, fundamental process that most biblical literalists use to vet science. They may get an idea, or hear of something. Let’s use a young-Earth creationist mainstay: Earth’s magnetic field. The data show that Earth’s field has gone through reversals in polarity at many points in the past. The data are clearly out there for anyone to examine, and it is unambiguous that crustal rocks record a flip-flopping magnetic field.

Now, does it fit in the Bible? Creationists such as Kent Hovind say that it does not. The result, for him, is that magnetic field reversals are simply not possible. In fact, to quote him: “That’s simply baloney [that there are magnetic reversals in the rocks]. There are no ‘reversed polarity areas’ unless it’s where rocks flipped over when the fountains of the deep broke open. … This is a lie talking about magnetic ‘reversals.'” (Taken from his Creation Science Evangelism series, DVD 6:1.)

Alternatively, Russell Humphreys, of Answers in Genesis, accepts that there have been magnetic reversals, because he is able to fit them into a reading of the Bible: he explains the field reversals as taking place rapidly during the 40 24-hour days of Noah’s Flood. Hence, because he can fit the reversals into the Bible, he accepts them as dogma.

A Look at the Scientific Method

You’ll notice that this flow chart is a tad larger:

Flow Chart Showing the Scientific Method

It starts at the same place, with an idea/observation/etc., which we call a “hypothesis.” Instead of testing this hypothesis against the Bible, though, you test it by performing an experiment. In other words, can your idea accurately predict the outcome of an experiment?

If not, the idea is rejected. If it did accurately predict the outcome of the experiment, then ideally you will do several more experiments and gather other observational evidence, but effectively you have now created a theory. A theory is an idea that every piece of evidence supports and that NO experiment has refuted.

The next step for a theory is to use it to predict a future event. This is where my definition of science differs from the dictionary’s, by adding these predictive properties (the bottom half of the flow chart). If the theory of gravity could not predict the motions of the planets and moons, the behavior of tides, etc., what good would it be other than to sit on paper and look pretty?

So the theory is used to predict a future event. If it predicted it correctly, then you simply rinse and repeat. Much of basic scientific research is really just testing theories. Far from being the “dogma” that many creationists will want you to believe, theories are subjected to tests every day.

In fact, scientists WANT to be the one who does the experiment for which the theory predicted the wrong outcome. That’s where we follow the “NO” arrow on the flow chart. If the theory can be modified to account for the latest evidence, then it is improved, and you go back and continue to test the now-modified theory. An example of this would be the addition of Inflation to the Big Bang model.

However, if the theory cannot be modified to account for the latest evidence, then we have a scientific revolution. People remember your name. You get Nobel Prizes. And money. And women (or men). Anyone over the age of 10 knows Einstein’s name and knows him to be synonymous with “Relativity” and likely even “E = mc².” Advertisers wish they could be that efficient.
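
If it helps to see that loop written out, here is a minimal sketch of the flow chart’s logic in Python. This is purely illustrative: the function names and the toy “dropped objects fall” example are made-up placeholders, not a rigorous model of how research actually proceeds.

```python
# A rough, purely illustrative sketch of the flow chart above.
# The names and the toy example are placeholders, not a real model.

def test_idea(hypothesis, experiments, can_modify):
    """Run a hypothesis through repeated experiments, following the flow chart."""
    supported = False  # becomes True once at least one experiment supports the idea
    for setup, observed_outcome in experiments:
        if hypothesis(setup) == observed_outcome:
            supported = True      # prediction correct: rinse and repeat
            continue
        if not supported:
            return "hypothesis rejected"      # failed before gaining any support
        if can_modify(setup, observed_outcome):
            continue              # theory patched (think: adding Inflation), keep testing
        return "scientific revolution"        # theory cannot be saved
    return "theory (still being tested)"

# Toy usage: a "dropped objects fall" hypothesis tested against two observations.
observations = [("drop a rock", "falls"), ("drop a feather in a vacuum", "falls")]
print(test_idea(lambda setup: "falls", observations, can_modify=lambda s, o: False))
# prints: theory (still being tested)
```

The only point of the sketch is that the “NO” arrow leads either to a modified theory that goes back into the testing loop or to a revolution; real science is, of course, far messier than a fifteen-line loop.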

Final Thoughts – What’s the Point, and Why No Spiritualism/Paranormal Allowed?

The point here is that, well, I’m honestly sick of hearing the anti-“Darwinist” crowd claiming that evolution, the speed of light, the Big Bang, and many other scientific theories are just a “materialistic dogma.” They’re not. Plain and simple. Dogma is when you believe something as FACT that cannot be shown to be false, regardless of any evidence. Theories and the scientific method, by contrast, require supporting evidence and no evidence to the contrary. They require predictive power.

And that is why spiritualism/religion/supernatural/paranormal beliefs are simply not allowed in science. Sorry, they’re not. Why? Because almost by their very definition, they lack any predictive ability. If you can’t use your hypothesis or theory to predict a future event, then it has just been shown not to work. Yes, the Flying Spaghetti Monster may have created us all by touching us with His noodly appendage. That may be a hypothesis. But you simply can’t test it, because He in His Infinite Carbalicious Goodness can just choose not to do it again. Or some vaguely defined “Intelligent Designer” may have caused the bacterial flagellum to exist or have formed the mammalian eye. But that belief does not present any way of being tested, whereas evolutionary theory does (and has shown the precursors to both).

And that’s really the point of science: to use testable ideas to explain where we came from, and then to predict where we’re going.
