Exposing PseudoAstronomy

January 29, 2014

Skeptiko Host Alex Tsakiris Compares Scientists’ Passivity with Wikipedia Editing to Christians’ Views on Abortion Clinic Bombings


Introduction

I’m getting my roughly one post per year about Alex Tsakiris, the host of the podcast Skeptiko, in early. In the past, I’ve written a lot about how Alex makes consistent mistakes about the scientific process and how science works in general. This post is not dedicated to that.

One of Alex’s high horses lately has been censorship (or perceived censorship), especially with respect to Wikipedia. On episode 236, “Rome Viharo, Wikipedia, We Have a Problem,” Alex talked extensively about the issue with respect to one of his heroes, Rupert Sheldrake.

The allegation is that Sheldrake’s Wikipedia page has been targeted by a few skeptics (he claimed by the Guerrilla Skeptic group, which has disavowed the Sheldrake editing). And, those skeptics have been acting in a most unfair way towards Rupert and his supporters. I don’t really want to get much more into this issue because it’s a side-issue for what I want to write about here. If you’re interested, listen to the episode.

What’s important for this blog post is just the basic context in which Alex makes two statements.

First Statement, ~27 min

The first of two statements I want to talk about starts about 27 minutes into the episode. I’m going to quote from Alex’s own transcript, which I haven’t verified, so it’s possible it contains an error or more (emphasis is mine):

I think that’s really the more interesting issue and I think we can sit on the sidelines and go, “Oh my gosh, isn’t this horrible and will things ever get better? These crazy skeptics!” The thing I always point out to people is the dogmatic skeptics, the fundamentalist Atheists, who these people represent, are really the tip of the spear for scientism. We always want to do like you did and say it’s really not a problem with science, is it? It’s a problem with scientific materialism. It’s a philosophical issue. No, forget it. It’s about science.

If we’re going to talk in general terms, science media has been completely co-opted by this point of view. The reason I’d come back and say it’s the tip of the spear is because you don’t see scientists rushing to the aid of Rupert Sheldrake just on principle saying, “Hey, this is a colleague of ours. This guy is clearly a biologist. He’s a Cambridge Fellow. We need to defend this.” No. They sit on their hands and silently cheer. Some of them sit on their hands and hope the arrow doesn’t point to them next.

So it’s really akin to what you were talking about with religious fundamentalism back when they were bombing abortion clinics. Of course there was an outcry of “Stop the violence” from other Christians. But there wasn’t too much of an outcry, right? There’s a lot of sympathy. “Well, we can certainly understand how upset people are by all those babies dying.”

So these frontline soldiers, these tip of the spear of an ideological debate, I think we have to be careful when we separate them and bifurcate and say, “Well, they don’t really represent science.” Yeah, I think they do. They form a pretty good representation of the crazy scientific materialism that really grips science as we know it right now. I don’t see any relief from that.

Wow. Logical fallacy of a false analogy, anyone? Alex is clearly making an analogy, saying that scientists not rushing to support Rupert Sheldrake and his Wikipedia page being edited is equivalent to Christians remaining quiet when abortion clinics are bombed in the name of Christianity. Not only is this a false analogy, it’s a fairly offensive one.

Here’s one way it’s wrong: Scientists, for the most part, have never heard of Rupert Sheldrake. Despite what Alex and Rome argued in the episode, Rupert Sheldrake by most measures would NOT be considered a “practicing” or “active” scientist, or at least not an active biologist. I would guess that less than 1% of active scientists have ever heard of the guy — mostly the people who know of him are people in the paranormal field and skeptics. Of those very, very few scientists who have heard of him, even fewer know what he does. Of those, even fewer actively scour Wikipedia to look up names. Of those, even fewer look at the Talk or History pages to see whether there have been lots of edits and apparent “editing wars” going on.

So, we have a very small fraction of the population who are scientists, multiplied by a small fraction who have heard of Sheldrake, multiplied by a small fraction who know anything about him other than his name, multiplied by a small fraction who look at Wikipedia for him, multiplied by a small fraction who look at the Talk or History pages to investigate.

Compare that with the number of people who have heard of abortion. I knew what abortion was when I was twelve years old because it was a topic we could write on for a persuasive paragraph in English class. I would guess that by their teen years, almost everyone knows about abortion. But, let’s be generous and say that 50% of the global population knows what abortion is. Multiply that by around 2.1 billion Christians in the world. Multiply that by the fraction who read political news.

One of these is a bigger number than the other … my point is that there are an enormous number of Christians “in the know” about abortion clinic violence who can call it out, and there is a vanishingly small number of active scientists who know about Sheldrake, what he does, and what’s on his Wikipedia page, and what’s going on with the editing of it. Ergo, false analogy.
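To make the orders of magnitude concrete, here is that multiplication as a quick back-of-the-envelope calculation. Every fraction below is my own illustrative guess (not a figure from the episode or from any survey); the point survives even if any individual guess is off by a factor of ten.

```python
# Rough, illustrative comparison -- all fractions are guesses, not survey data.

world_population = 7e9   # circa 2014

# Scientists who could plausibly rush to defend Sheldrake's Wikipedia page
scientists          = world_population * 0.001  # guess: ~0.1% are active scientists
heard_of_sheldrake  = scientists * 0.01         # guess: ~1% of those have heard of him
know_his_work       = heard_of_sheldrake * 0.1  # guess: fewer know what he actually does
check_wikipedia     = know_his_work * 0.1       # guess: fewer look up his Wikipedia page
check_talk_history  = check_wikipedia * 0.1     # guess: fewer read the Talk/History pages

# Christians who are "in the know" about abortion clinic violence
christians            = 2.1e9
know_about_abortion   = christians * 0.5          # the generous 50% from above
follow_political_news = know_about_abortion * 0.2 # guess: 1 in 5 follow such news

print(f"Scientists aware of the Sheldrake editing: ~{check_talk_history:,.0f}")
print(f"Christians aware of clinic violence:       ~{follow_political_news:,.0f}")
```

With those guesses, the first number comes out in the tens and the second in the hundreds of millions, which is the whole point.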

It’s a false analogy in another way: on one side he has people speaking out (or not) about religiously motivated violence against abortion clinics, and he’s comparing that with a guy throwing a hissy fit about what people are writing about him on the internet. Sorry Alex, but the bombing of an abortion clinic is a bigger deal to me than Sheldrake being unhappy that his Wikipedia page points out that he says and does a lot of stuff that is not supported by any reputable data.

Second Statement, ~44 min

The second statement I wanted to talk about for this post happens during Alex’s closing monologue, about 44 minutes into the episode.

What does this say about science? And I know I keep saying “science,” and people go, “Well, it’s not really science, science means this or science means that,” but I tend to disagree. I think this situation really speaks to the larger problem with the way science is applied. And I think – as I said in the show – the lack of support for Sheldrake, in a situation where the scientist should obviously be supported by his peers, speaks loudly and clearly that this is a problem with science in general. But maybe you disagree; I’d like to hear your opinion.

Well, I guess I was mistaken when I started this post and said it wasn’t going to deal with Alex’s lack of understanding of how science works. (And, that most scientists, and probably people in general, would not consider Sheldrake’s work in the last ~decade to be “science.” Doing an experiment on whether dogs are telepathic, or writing a book bemoaning what he calls “scientific dogma”, don’t count as “science” as far as most of us are concerned.)

First off, Alex Tsakiris is not a scientist. And, so far as I can tell, he has never taken a philosophy of science class. He is in no position to decide what is or is not science. When actual practicing scientists tell him he is wrong, that something is not science, he can of course disagree, but he will very likely be wrong. Yes, this is a bit of an argument from authority, but beware of the fallacy fallacy here — just because I used an argument from authority does not mean my argument is wrong.

Second, very, very rarely will scientists be drawn into any sort of public debate with respect to an actual scientist (as opposed to what Sheldrake is now) being “dissed” (my word). The most recent example I can think of would be Michael Mann and the huge amount of political pressure he faced in Virginia because of his research on climate change. Even then, I don’t remember many individual scientists coming forward to back him up, though I do seem to recall some professional scientific societies issuing statements about it. And Michael Mann was facing MUCH more pressure than Rupert Sheldrake: Political, social, financial harassment and threats versus a few people editing his Wikipedia page unfavorably.

Final Thoughts

I’m not sure there’s much else to say on this issue at this point. I decided to write this post when I heard Alex compare scientists remaining silent on Sheldrake’s Wikipedia page with Christians remaining silent on abortion clinic bombings. That was just so over-the-top and (I think) offensive that I wanted to put it out there so others knew about it.

The extra bit showing how Alex – yet again – does NOT understand how science works was gratuitous. But, as I said, I seem to consistently write about one post a year on something that strikes me about what Alex says on Skeptiko, so I got this year’s in early.


August 24, 2013

Skeptiko’s Standards of Evidence for Fairies


Introduction

I’ve written very roughly one post every year or so on Skeptiko host Alex Tsakiris and his absolute refusal to understand how science is done and why there is a disconnect between “true believers” and “skeptics.” Here are the four posts I’ve done (well, now five), and I recommend the 2010 and 2011 posts specifically. They’re long, but I think they’re very well written and from time-to-time I even go back and re-read them and just think, “Wow, that was really good!”

Anyway, enough self-praise. It was only in one of those posts, the 2011 one, that I discussed Alex’s derision of Carl Sagan’s famous “Extraordinary claims require extraordinary evidence” phrase. In that post, I discussed it in the context of how a hypothesis is tested and may eventually become a theory.

But, I think it’s worth delving more into this now because in his latest, Episode 219, Alex almost makes this the centerpiece of the conversation between himself and Dr. Stephen Law of the Centre for Inquiry UK.

And despite Dr. Law explaining it, Alex still does not get it.

The discussion of this starts very roughly at the 38-minute mark. Please note that I’m using Alex’s transcript for this, assuming that it’s correct, though that might not be the case.

Extraordinary Claims Is Anti-Science

Alex first broaches this topic by stating the following:

“I see that as just an intellectually feeble kind of pronouncement. Extraordinary claims require extraordinary proof—that is anti-science, isn’t it? … We’ve built this whole institution of science, the whole process of peer-review, the whole process of self-correction around this idea that we will altogether discover what is real, what is not real, what is extraordinary, what is not extraordinary. So then the idea that after the fact, after the results come in, we say, “You know, that’s pretty interesting results but I deem that to be extraordinary; therefore, you need an extra level of proof on that.” I think it’s just silly.”

Dr. Law responded with an example: if he claims that he has a cell phone and a car, no one would think twice about it. But if he says that he has a fairy that he can make dance on the end of his finger, then Alex would doubt that. It’s an extraordinary claim.

I thought that was pretty good. Alex agrees, but then switches it “back to science” (my phrase in quotes). The problem with this dismissal-and-redirect is that it isn’t actually a redirect: the fairy example is already about science. Every claim should be testable – that is science. Stating that you have a fairy dancing on your finger is a claim, and it should be subjected to the same kind of testing that anything else would be. The fact that our daily experience says that fairies don’t exist means that the burden of evidence he needs to provide, in order to counterbalance all the other evidence that they don’t exist, is higher. Ergo, extraordinary claims require extraordinary evidence.

Dr. Law states this later as: “It’s because the prior probability of anything like a fairy exists is very, very low indeed, knowing what we do.”
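One way to make Dr. Law’s “prior probability” point concrete (my framing, not his) is Bayes’ theorem in odds form: the posterior odds of a claim equal the prior odds multiplied by how much more likely the evidence is if the claim is true than if it is false. A minimal sketch with made-up numbers:

```python
# Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio.
# All numbers below are purely illustrative.

def posterior_probability(prior, likelihood_ratio):
    """Update a prior probability given evidence that is `likelihood_ratio`
    times more likely if the claim is true than if it is false."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Mundane claim ("I own a cell phone"): decent prior, modest evidence.
print(posterior_probability(prior=0.9, likelihood_ratio=10))     # ~0.99

# Extraordinary claim ("a fairy dances on my finger"): tiny prior.
# Even evidence 1,000x more likely if true barely moves the needle.
print(posterior_probability(prior=1e-9, likelihood_ratio=1000))  # ~1e-6
```

The same quality of evidence that comfortably settles the cell-phone claim leaves the fairy claim at roughly a one-in-a-million probability, which is the entire content of “extraordinary claims require extraordinary evidence.”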

Alex’s Go-To Richard Wiseman Quote

I like Richard Wiseman. Well, I like a lot of his work — I’ve never met the guy. He is a Ph.D. psychologist. He is not a hard scientist. And like everyone – in spoken word or print – I’m sure he’s said some things that he didn’t quite mean or that he might think are true but no one else does.

I give that preface because this has been Alex’s go-to “stump the skeptics with an argument from authority” thing for years now, and in a test of “who’s bigger,” Alex pulled it out and showed it to Dr. Law:

That’s British psychologist and parapsychology critic, Richard Wiseman, who has investigated probably more of these paranormal parapsychology claims like telepathy than just about anybody else. Here’s his quote: “I agree that by the standards of any other area of science that remote viewing (and he later added in this quote, ESP) is proven. But that be[g]s the question: do we need higher standards of evidence when we study the paranormal?” …

He is talking about creating another level of proof, a completely arbitrary level of proof based on his beliefs of what is extraordinary in terms of a claim and extraordinary in terms of proof. There’s no way to intellectually defend the statement.

The conversation then went in a direction I think it shouldn’t have. I think that Dr. Law should have come back very forcefully against Wiseman’s statement and pointed out (correctly) that Alex is using one psychologist’s opinion as a stand-in for all skeptics and all scientists (and all scientists who are skeptics).

If it wasn’t obvious yet, I disagree with what Wiseman said. Heck, Penn and Teller showed in 20 minutes that remote viewing (in one example) is utter bull.

I think that Dr. Law needed to return to the “definition” of an “extraordinary claim.” After all, my recollection is that Sagan used the phrase as a simple example of how science is actually done and how we should weigh evidence. So I’ll repeat it because Dr. Law did not: An “extraordinary claim” is only extraordinary when there is already a large amount of evidence that the claimed phenomenon does NOT exist. Ergo, you have a much larger burden of evidence: not only to demonstrate that the phenomenon does exist, but to demonstrate why the evidence that it doesn’t exist does not stand up to your new evidence that it does.

And we do do this all the time in science. I think I’ve talked before about the “granola bar” model for Saturn’s rings: after Voyager, we thought that ring particles in density waves could be modeled as granola bars of high-density material with nothing between them. That explained the observations well, and there was no reason to change it. But evidence mounted that could not be explained by the simple granola bar model, and after enough of it did, we arrived at a new paradigm for how the ring particles are distributed, one that can explain both the new evidence AND the old Voyager evidence. That’s what you need here.

Where the Conversation Actually Did Go

Unfortunately, Dr. Law made the mistake of stating, “Maybe your view is that there’s already an awful lot of evidence in for the existence of psychic powers, say.”

So Alex whipped out his Big Gun again and quoted Wiseman. Again, Dr. Law did not take that opportunity to call out the argument from authority but instead said, “I can’t comment on that because I’m not an expert on that area of science. But let’s suppose that that’s true. I guess what Wiseman is saying here—and that might be true for all I know.” He went on to talk about how scientists have been fooled before by tricksters (such as Uri Geller), that scientists are in fact one of the easier groups of people to fool because we have built up over the decades exact methods of observation that we expect should yield objective results, and magicians sneak in around the edges and take advantage of what we expect.

Dr. Law clearly had not argued with Alex before (or at least was not prepared to argue with him) and did not explain his point in a way Alex might grasp, because he then stated, “So you have to be extra, extra specially careful when it comes to investigating those kinds of things. I can’t believe that you would disagree with me about that.”

I just had to shake my head at that. It’s Alex’s entire point: He doesn’t think you should have to take extra measures, hence his lack of comprehension of “extraordinary claims” and “extraordinary evidence.” And of course, Alex took that bait:

“Intellectual black hole alert. Dr. Law, this is exactly what you preach against is that we’re going to layer on top of this without any proof, without any evidence. If that’s your claim, then someone needs to prove that, as they’ve tried to do so many times and as the social sciences…”

When Dr. Law responded that “they have proved it,” Alex retreated. Unfortunately for Dr. Law, Alex is very good at recovering and redirecting where he wants to go. The rest of the “interview” was very much continued redirection that didn’t really accomplish anything, and Alex tried to stop the interview, though Dr. Law insisted on trying to make his point once more, this time with a perpetual motion machine. Alex’s very patronizing response is what I think I’ll finish this post up with:

Well, Stephen, I just beg to differ. I don’t think you’re intimately familiar with the data.

Final Thoughts

Ultimately, it is about the data. It is about the data both for and against a phenomenon, how it all fits together, and how it balances out. An “extraordinary claim” is not something that has a set definition, or something some Elevated Council of Elders gets to decide; it is simply a claim against which there is already a lot of evidence.

The maxim, “Extraordinary claims require extraordinary evidence,” I think, is a good way to concisely describe this simple concept. And it’s one that, after over 200 episodes of Skeptiko, Alex Tsakiris still refuses to understand. And I use those words purposely: This concept has been explained to Alex numerous times, so by this point I can only conclude that it is a willful choice not to understand.

 

P.S. It looks like someone has used a lot of my blog posts on Alex’s RationalWiki page. Would any good wiki editor out there like to fit this post in somehow and send a few more readers my way? 🙂

P.P.S. Alex actually posted in my comments twice in my 2010 post (search for “Comment by Alex”). He said he would be “happy to engage/discuss” yet when I agreed, nothing. I would repeat now, for the record, that I am fully willing to go on Skeptiko and discuss the specific points that I have made in any of my posts. After all, these get to the heart of why there is a disconnect between so-called “skeptics” and “believers,” which is supposedly what Alex went into Skeptiko to try to understand and bridge.

February 17, 2012

Is Skeptiko Host Alex Tsakiris a Willful Deceiver?


Introduction

I like to look at my stats page on WordPress to see what sites link up to me, how people get here. I thoroughly enjoy when RationalWiki links to me, using me as a source, and I recently found out that a new Alex Tsakiris page has an entire section devoted to my analysis of where Alex goes wrong in his attempts to argue scientifically.

I also saw that a blog much, much more popular than mine – one written by Alex’s latest guest, Dr. Jerry Coyne, an “outspoken atheist” according to how Alex bills him – was linking to me. In perusing the comments on that blog (which is where someone had linked to one of my previous posts), I noticed a claim that Alex’s transcripts were deliberately altered to change the guest’s meaning.

Update 2 days later: The errors referred to below have now been corrected.

On Transcripts

Now, a bit of explanation — Alex, on his Skeptiko website, provides transcripts of his interviews for every episode. In fact, I have used them before though I’ll note that in the body of every post where I talk about Alex, I have written my own transcript of the episode.

Several podcasts provide transcripts (my own, Skeptoid, Astronomy Cast, just to name a few), and I think they’re a valuable service. I understand that making transcripts of an interview is somewhat different than making one for a podcast episode that you’ve scripted out (which is why I don’t do transcripts of my own interview episodes). When I do transcripts, I simply copy-and-paste from what I wrote for the episode into the web page.

Evidence of Fraud?

However, it appears as though Alex – or whoever does his transcripts – has willfully and without notice altered his guest’s words.

Here is a screenshot that I took on Friday evening, February 17, 2012, of the episode with Dr. Coyne, #161

[Screenshot: Alex Tsakiris' Transcript for Skeptiko Episode #161]

For word-searchability, here is the quote, copied-and-pasted from Alex’s site:

Alex Tsakiris: I’m just saying if they’re saying at the fundamental level of physics non-local theories are incompatible with what we observe, then I think it calls into question the things that we’re talking about in terms of Materialism, Determinism. Isn’t that the direct implication of what they’re saying?

Dr. Jerry Coyne: No! No, because they’re talking about what happens in a very, very tiny micro level. It does not mean that you can’t predict what happens when billiard balls hit each other on a billiard table for which quantum mechanics is perfectly applicable. It’s as if you’re saying we can’t play billiards and we can’t shoot rockets to the moon because of this stuff that happens on a micro level.

The fact is that assuming that these phenomena apply on most of the levels of reality that we deal with renders everything wrong is simply incorrect. For most micro-phenomenon you’re turning to quantum mechanics. It works fine. And in terms of evolution I don’t see how this quantum mechanics affects evolution at all. I mean, maybe it can affect mutation. You said that these people say that but that turned out to be something you made up. I don’t see how it can and even if it did it would not by any means render mutations non-random in the way that evolution has to mean that they’re random.

Reading that as someone with a physics background, I can say Dr. Coyne is mistaken. Billiard balls hitting each other is obviously a Newtonian issue; if classical mechanics could not explain what goes on in a basic game of pool, then Quantum Mechanics would have been developed centuries before it was.

In the second paragraph, the statement about QM applying to micro-phenomena is mostly correct (I’d argue nano, but whatever), though it doesn’t really lead logically into what he says next about QM not affecting evolution. It’s a totally new idea.
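For a sense of the scales involved (my own back-of-the-envelope numbers, nothing from the episode), compare the de Broglie wavelength of a billiard ball with that of an electron:

```python
# de Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34              # Planck's constant, J*s

# A ~0.17 kg billiard ball rolling at 1 m/s
m_ball, v_ball = 0.17, 1.0
print(h / (m_ball * v_ball))   # ~3.9e-33 m: roughly 30 orders of magnitude
                               # smaller than the ball itself

# An electron (~9.11e-31 kg) moving at 1e6 m/s, for comparison
m_e, v_e = 9.11e-31, 1.0e6
print(h / (m_e * v_e))         # ~7.3e-10 m: about the size of an atom
```

Quantum effects are utterly negligible at the pool table and dominant at the atomic scale.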

Tonight, I listened to the episode. At 12 minutes 58 seconds, I started to record my own transcript of what Dr. Coyne stated:

Dr. Jerry Coyne: No! No, because they’re talking about what happens in a very, very tiny micro level. It does not mean that you can’t predict what happens when billiard balls hit each other on a billiard table for which Newtonian mechanics is perfectly applicable. I mean, it’s as if you’re saying that we can’t play billiards or we can’t shoot rockets to the moon because of this stuff that happens on a micro level.

The fact is that assuming that these phenomena apply on most of the levels of reality that we deal with renders everything wrong is simply incorrect. I mean, for most macro-phenomenon, Newtonian or classical mechanics works fine. Um, and in terms of evolution I don’t see how this quantum mechanics affects evolution at all. I mean, maybe it can affect mutation. You said that these people say that but that turned out to be something you made up. Um, I don’t see how it can and even if it did it would not by any means render mutations non-random in the way that evolution has to mean that they’re random.

Notice a difference? Yeah. Whoever wrote the transcript on Alex’s site changed out “Newtonian mechanics” for “quantum mechanics” when talking about billiards, and changed “macro-phenomenon, Newtonian or classical mechanics works fine” to “micro-phenomenon you’re turning to quantum mechanics. It works fine.”

‘Cause, you know, they sound so similar.

Someone has even pointed this out to Alex in the comments for that episode:

[Screenshot: Correcting Alex's Transcript for Skeptiko Episode #161]

There may be other examples — I chose not to listen to the entire hour-long episode with an eye on Alex’s transcript. Let me know if there are other examples. I really don’t know what to think on this one. I like to give people the benefit of the doubt. I was hoping that, despite my two very extensive blog posts on Alex (here and here), Alex just had his head down a rabbit hole, he’d drunk the Kool-Aid®, he was a true believer who sorta meant well and was blinded by his beliefs, etc. This, however — changing your own guest’s words in a transcript — changes things.

Hounding Alex on the Forum

Alex has a forum thread for this episode. On it, Alex was notified of the problems in the transcript here:

[Screenshot: Where are Transcript Corrections]

Alex acknowledged them about an hour later, and claimed he corrected them here:

[Screenshot: Alex Acknowledging Corrections]

So I went back to the page for the episode, reloaded it, cleared the cache and reloaded it and … the errors are still there. And as of adding this sentence, 19 hours later, the errors are still there. I’m not sure what Alex meant by “all better.”

Final Thoughts – You Make Up Your Own Mind on This One

I’m not saying these are huge issues or evidence of a conspiracy or anything like that. It’s possible that, in an hour-long program and the even longer time spent writing a transcript, errors will happen. I’m somewhat willing to extend the benefit of the doubt on this. But when an error is pointed out and is then said to have been corrected, yet it hasn’t been, that brings it to a new level.

So, what do you think, folks?

And I’ll note that if the transcript ever does get updated (and someone lets me know – I’m only going to check it for another day or so), then I’ll of course update this post to let people know.

December 9, 2011

Skeptiko Host Alex Tsakiris on Monster Talk / Skepticality, and More on How to Spot Pseudoscience


Introduction

A few weeks ago, I learned that the popular Monster Talk podcast would be interviewing Skeptiko podcast host Alex Tsakiris. They ended up posting it on their Skepticality podcast feed instead, and the interview also went out as episode 153 of Skeptiko; it came out about two weeks ago. The interviewers from Monster Talk are Blake Smith, Ben Radford, and Karen Stollznow (the last of whom I have the pleasure of knowing). Got all that?

If the name Alex Tsakiris sounds familiar but you can’t quite place it, and you’re a reader of this blog, you probably recognize it from the two previous posts I’ve written about him here. The first was on the purpose of peer-review in science, because Alex (among others) was arguing that peer-review is a flawed process and that you should release results early, without having completed the study.

Fourteen months later, I wrote another post on Alex, this one rather lengthy: “Skeptiko Host Alex Tsakiris: On the Non-Scientifically Trained Trying to Do/Understand Science.” The post garnered a lot of comments (and I’ll point out that Alex posted in the comments and then never followed up with me when he said he would … something he accuses skeptics of not doing), and I think it’s one of my best posts, or at least in the top 10% of the ~200 I’ve written so far.

This post should be shorter than that 2554-word one*, despite my already being in the fourth paragraph and still in the Introduction. This post comments not on the actual substance of Alex Tsakiris’ claims, but rather on the style and format of his arguments and what those reveal about fundamental differences between real scientists and pseudoscientists. I’m going to number the sections with the points I want to make. Note that all timestamps below refer to the Skeptiko version.

*After writing it, it’s come out to 3437 words. So much for the idea it’d be shorter.

Point 1: Establishing a Phenomenon Before Studying It

About 8 minutes into the episode, Karen talks with Alex about psychics, and Alex responds, “If you’re just going to go out and say, as a skeptic, ‘I’m just interested in going and debunking a psychic at a skeptic [sic] fair,’ I’m gonna say, ‘Okay, but is that really what you’re all about?’ Don’t you want to know the underlying scientific question?”

Alex raises an interesting point that, at first glance, seems to make perfect sense. Why belittle and debunk the crazies out there when you could spend your valuable time instead investigating the real phenomenon going on?

The problem with this statement – and with psi in general – is that psi is not an established phenomenon. It has yet to be conclusively shown to exist under strictly controlled conditions, and it has yet to be shown to be reliable in its predictions/tenets. By this, I mean that psi has yet to be shown to be repeatable by many independent labs and statistically robust in its findings. I would note the obvious: if it had been shown to be any of these, then it would no longer be psi/alternative, it would be mainstream.

Hence, what the vast majority of skeptics are doing is going out and looking at the very basic question of does the phenomenon exist in the first place? If it were shown to exist, then we should spend our time studying it. Until then, no, we should not waste time trying to figure out how it happens. This really applies to pretty much everything, including UFO cases. In that situation, one has to establish the validity by exploring the claims before one looks at the implications, just like with alleged psychics.

A really simple if contrived example is the following: Say I want to study life on Io, a moon of Jupiter. I propose a $750 million mission that will study the life there with cameras, voice recording, chemical sensors, the works. I would propose to hire linguists to try to figure out what the beings on Io are saying to the probe, and I’d propose to hire biologists to study how they could survive on such a volcanic world. NASA rejects my proposal. Why? Because no one’s shown that life actually exists there yet, so why should they spend the time and money to study something they don’t know is actually there? And, not only that, but Io is so close to Jupiter that it’s bathed in a huge amount of radiation, and it is so volcanically active that it completely resurfaces itself every 50 years, making even the likelihood of life existing there very slim.

Point 2: Appeal to Quantum Mechanics

I’ll admit, I have a visceral reaction whenever I hear a lay person bring up quantum mechanics as evidence for any phenomenon not specifically related to very precisely defined physics. At about 12.5 minutes into the episode, Alex states quite adamantly that materialism (the idea that everything can be explained through material things, as opposed to an ethereal consciousness being needed) “is undermined by a whole bunch of science starting with quantum mechanics back a hundred years ago … .”

It’s really simply basically practically and all other -ly things untrue. Alex does not understand quantum mechanics. Almost no lay person understands quantum mechanics. The vast majority of scientists don’t understand quantum mechanics. Most physicists don’t understand quantum mechanics, but at least they know to what things quantum mechanics applies. Alex (or anyone) making a broad, sweeping claim such as he did is revealing more their ignorance of science than anything else.

Unless I’m mistaken and he has a degree in physics and would like to show me the math that shows how quantum mechanics proves materialism is wrong. Alex, if you read this, I’d be more than happy to look at your math.

You will need to show where quantum mechanics shows that consciousness – human thoughts – affects matter at the macroscopic level. Or, if you would like to redefine your terms of “consciousness” and “materialism,” then I will reevaluate this statement.

(For more on quantum mechanics and pseudoscience, I recommend reading my post, “Please, Don’t Appeal to Quantum Mechanics to Propagate Your Pseudoscience.”)

Point 3: Appeal to Individual Researchers’ Results Is a Fallacy

A habit of Alex’s is to cite the results of individual researchers who found the same psi phenomenon many different times in many different locations (as he does just after talking about quantum mechanics, around 45 minutes into the episode where they all discuss this, and throughout the psychic detective stuff, such as at 1:30:30 into the episode). Since I’ve talked about it at length before, I won’t here. Succinctly, this is an argument from authority, plain and simple. What an individual finds is meaningless as far as general scientific acceptance goes. Independent people must be able to replicate the results for a phenomenon to be established. The half dozen people that Alex constantly points to do not trump the hundreds of people who have found null results and the vast amount of theory that says it can’t happen (for more on that, see Point 6).
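To put a rough number on why a handful of positive studies cannot trump a sea of null results, consider false positives alone. The study count below is my own illustrative number, not a tally of the actual psi literature:

```python
from math import comb

# If a phenomenon is NOT real, each study still has ~5% odds of a "significant"
# result at the conventional p < 0.05 threshold. Illustrative numbers only.
n_studies = 200   # hypothetical independent studies of the phenomenon
alpha = 0.05      # conventional significance threshold

print(n_studies * alpha)   # 10 "positive" studies expected by chance alone

# Probability of seeing at least 6 false positives among the 200 studies
p_at_least_6 = 1 - sum(comb(n_studies, k) * alpha**k * (1 - alpha)**(n_studies - k)
                       for k in range(6))
print(round(p_at_least_6, 2))   # ~0.93: a half-dozen "hits" is entirely unremarkable
```

In other words, a small cluster of researchers reporting positive results is exactly what you would expect even if the effect does not exist, which is why independent replication, not a list of names, is what establishes a phenomenon.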

For more on this, I recommend reading my post on “Logical Fallacies: Argument from Authority versus the Scientific Consensus” where I think I talk about this issue quite eloquently.

It’s also relevant here to point out that a researcher may have completely 100% valid and real data, but that two different people could reach very different conclusions from it. Effectively, the point here, which is quite subtle, is that conclusions are not data. This comes up quite dramatically in this episode about 22.3 minutes in when discussing the “dogs that know” experiment; in fact, my very point is emphasized by Ben Radford at 24 min 05 sec into the episode. For more on this sub-point, I recommend reading Point 1 of my post from last year.

Point 4: Investigations Relying on Specific Eyewitness Memories Decades After the Fact = Bad

The discussion here starts about 36 minutes into the episode, stops, and resumes briefly about 50 minutes in, and then they go fully into it at 1 hour 13 minutes in*. For background, there is a long history of Alex looking into alleged psychic detectives, and at one point he was interviewing Ben Radford and they agreed to jointly investigate Alex’s best case of this kind of work and then to hash out their findings on his show. This goes back to 2008 (episode 50), but it really came to a head with episode 69 in mid-2009 where they discussed their findings.

Probably not surprisingly, Alex and Ben disagreed on the findings and what the implications were for psychic detectives (Nancy Weber in this case). If you are genuinely interested in this material, I recommend listening to the episodes because there is much more detail in there than I care to discuss in this quickly lengthening post. The basic problems, though, were really two-fold — Ben and Alex were relying on police detectives remembering specific phrases used by the alleged psychic from a case almost 30 years old (from 1982), and they disagreed on what level of detail counted as a “hit” or “miss.”

For example, when Ben talked with the detectives, they had said the psychic told them the guy was “Eastern European” whereas they had separately told Alex that she had told them the guy was “Polish.” Alex counted it as a hit, Ben a miss. I count it as a “who knows?” Another specific one they talk about in this interview is “The South” versus “Florida” with the same different conclusions from each.

On these points, I think both scientists and skeptics (and hopefully all scientists are appropriately skeptical, as well) can learn a lot from looking into this type of material.

First, I personally think that this was a foolish endeavor from the get-go to do with an old case. Effectively every disagreement Ben and Alex had was over specific phrasing, and unless every single thing the alleged psychic says is recorded, you are never going to know for sure what she said. Human memory simply is not that reliable. That is a known fact and has been for many years (sources 1 and 2, just to name a couple). Ergo, I think the only proper way to investigate this kind of phenomenon, where you have disagreements between skeptics and other people, is to wait for a new case and then document every single part of it.

Second, one needs to determine a priori what will count as a hit or miss (“hit” being a correct prediction, “miss” being wrong). In the above example, if they had agreed early on that Nancy Weber only needed to get the region of the planet correct, then it would be a hit. If she needed to get the country (first example) or state (second example) correct, it would be a miss under what the detectives told Ben. This latter point is the one that is more relevant in scientific endeavors, as well. Usually this is accomplished through detailed statistics in objective tests, but in qualitative analyses (more relevant in things like psychiatric studies), you have to decide before you give the test what kinds of answers will be counted as what, and then you have to stick with that.
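As a sketch of what those “detailed statistics” look like (a hypothetical setup of my own, not anything Ben and Alex actually computed): agree on the hit/miss criteria and the chance rate before the test, then ask how surprising the observed hit count would be if the “psychic” were only guessing.

```python
from math import comb

def p_value_at_least(hits, trials, chance):
    """One-sided binomial test: probability of getting at least `hits` correct
    out of `trials` if each guess succeeds with probability `chance`."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical pre-registered test: 20 statements, each scored hit/miss against
# criteria agreed on in advance, with a 25% chance of a hit per statement by guessing.
print(p_value_at_least(hits=9, trials=20, chance=0.25))   # ~0.04: marginal at best
print(p_value_at_least(hits=14, trials=20, chance=0.25))  # ~3e-5: hard to blame on chance
```

The important part is not the arithmetic; it is that the criteria and the chance rate are fixed before anyone knows the answers, so there is no room to argue afterward about whether “Eastern European” counts as “Polish.”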

It should be noted that hits vs. misses was not the actual crux of the disagreement, however. It was the level of specificity the psychic claimed (“Polish”) versus what the detectives told Ben they remembered (“Eastern European”), and then the broader picture to how well that information will help solve a case.

I actually encounter the same thing when grading essays. This is one reason why teachers in science classes like multiple-choice questions more than essays (besides the time it takes to grade): It’s much more quantitative to know the answer is (A) as opposed to parsing through an essay looking for a general understanding of the question being asked.

*I’ll warn you that this goes on for about a half hour and it’s somewhat difficult to listen to with all the shouting going on. If you’re scientifically/skeptically minded, listening to this is going to make you want to smack Alex. If you’re psi/alternative minded, listening to this is going to make you want to smack Ben. This is why I try not to get into the specifics of the exact case but rather point out the process and where the process is going wrong here.

Point 5: Confusing Different Causes for a Single Effect

About 41 minutes into the episode and then for several minutes on, the conversation turned to the idea that psychics help with the grieving process. The reaction from me (and then the hosts) was pretty much, “Duh!” As Blake points out just before the 43 minute mark, “How many times did the [psychic] say, ‘Oh gee! That person’s in Hell!'” Thus, probably, not helping the grieving process.

The conversation steered along the lines of the three hosts of Monster Talk trying to point out that yes, the effect of the alleged psychic talking with the grieving person is that the grieving person felt better. But was the cause (a) because the person was actually psychic, or (b) because the person was telling the grieving people what they wanted to hear that their loved one was happy and still with them and they would join them when they died?

Alex obviously is of the former opinion (after pulling out yet another argument from authority that I talked about in Point 3 above). The others are of the latter. But the point I want to pull from this is something that all scientists must take into account: If they see an effect, there could be causes other than or in addition to their own preferred explanation. That’s really what this case that they talk about boils down to.

For example, we want to know how the moon formed. There are many different hypotheses out there, including that it formed alongside Earth, that it was flung off Earth, that it was captured, that it was burped out, or that a Mars-sized object crashed into Earth and threw off material that coalesced into the moon. I may “believe” the first. Another person may believe the last. We both see the same effect (the moon exists and has various properties), but that effect probably had only one cause. Which one is more likely is the question.

Point 6: It’s Up to the Claimant to Provide the Evidence

I know I’ve discussed this before, but I can’t seem to find the post. Anyway, this came up just before the 52 minute mark in the episode, that Alex frequently states it’s up to the debunkers to debunk something, not for the claimant to prove it. (To be fair, in this particular interview, Alex kinda says he never said that at first, he only says it when it’s a paradigm shift kinda thing that’s already shifted … which it so has not in this case. But then he does say it …)

Blake: I think most skeptical people believe that whenever you’re making a claim that you have the burden of proof every time. And it never shifts …

Alex:… And they’re wrong because that’s not how science works. Science works by continually asking hard, tough questions and then trying to resolve those the best you can.

I’m really not sure where Alex gets this first sentence (the second sentence is correct, but it and the first are not mutually exclusive). It’s simply wrong. In no field is this a valid approach except possibly psi from Alex’s point of view. If you make a claim, you have to support it with evidence that will convince people. If I say I can fly, it shouldn’t be up to you to prove I can’t, it should be up to me to prove I can. It’s that simple. And Alex gets this wrong time after time.

This is further evidence (see Point 2 above) that Alex has no actual concept of science and how it works. And before you accuse me of ad hominems, I’m stating this in an objective way from the data — his own statements that have not been quote-mined (go listen to the episodes yourself if you don’t believe me).

But it continues:

Ben: So who does have the burden of proof?

Alex: Everybody has the burden of proof and that’s why we have scientific peer-reviewed journals, the hurdles out there that you have to overcome to establish what’cha know and prove it in the best way you can. It gets back to a topic we kinda beat to death on Skeptiko and that’s this idea that also hear from skeptic [sic], ‘Extraordinary claims require extraordinary proof.’ Well of course that’s complete nonsense when you really break it down because scientifically the whole reason we have science is to overcome these biases and prejudices that we know we have. So you can’t start by saying ‘Well, I know what’s extraordinary in terms of a claim, and I know what would be extraordinary in terms of a proof,’ well that’s counter to the idea of science. The idea of science is it’s a level playing field, everybody has to rise above it by doing good work and by publishing good data.

(Ben Radford corrects Alex on this point about 54.7 minutes into the episode; feel free to listen, but also know that the points he makes are not the ones I do below. Well, maybe a bit around 56 minutes.)

I know I’ve talked about this before, but not in these exact terms. What Alex is talking about – and getting wrong – without actually realizing it is how a hypothesis becomes a theory and the lengths one has to go to to overturn a theory. That’s what this nugget boils down to.

If you’re not familiar with the basic terminology of what a scientist means by a fact, hypothesis, theory, and law, I recommend reading one of my most popular posts that goes into this. The issue at hand is that it is effectively established theory that, say, people cannot psychically communicate with each other (yes, I know science can’t prove a negative and there’s no Theory of Anti-Psi, but go with me on this; it’s why I said “effectively”). Even if it’s not an exact theory, there are other theories, supported by all the evidence, that show this isn’t possible or plausible.

Ergo, to overturn all those theories that together indicate psi can’t happen, you have to have enough convincing and unambiguous data to (a) establish your phenomenon and (b) explain ALL the other data that had backed up the previous theories and been interpreted to show psi is not real.

This is summarized as, “Extraordinary claims require extraordinary evidence.” That’s the phrase – not “proof” – which in itself shows yet again that Alex misses some fundamental tenets of science: You can never prove anything 100% in science, you can only continue to gather evidence to support it. “Proof” does not exist, just like “truth,” as far as science is concerned.

Final Thoughts

Well, this post ended up longer than I had initially planned, and it took several hours, not least because I listened to the episode twice and it’s almost two hours long. I hope that through this I’ve been able to illustrate several points that you and everyone else need to watch out for when evaluating claims.

To quickly recap:

  1. You need to establish that a phenomenon exists before studying it.
  2. Don’t appeal to quantum mechanics unless you actually know what quantum mechanics is.
  3. A single or small group of researchers’ results are not convincing, no matter who they are.
  4. If you want to study something that supposedly happens every day, don’t choose an example that’s 30 years old.
  5. A single effect can have multiple or different causes, including one that you don’t like.
  6. The person making the claim has the burden of evidence … always.

In the end, I’ll admit that this was personally hard to listen to in parts. I took issue with Alex constantly refusing to admit certain things – like the detectives saying one thing to him and another to Ben – and instead insisting that Ben was lying and should go tell the detectives to their faces what he claims they didn’t say. That was just hard to listen to. Or Alex’s refusal to directly answer some questions in ways that would have made a politician proud. Another point that was hard to listen to, but oh so sweet in the end, was Alex claiming that Karen had invited him on, while Karen said that Alex had invited himself on. Alex insisted that wasn’t true, said Karen was wrong, and claimed he had the transcript … and then a few seconds later the transcript was read and Alex clearly had invited himself onto their show.

But, those are my personal and more emotional observations after listening to this. Do those change what we can learn about the scientific process and where pseudoscientists go wrong? No. Alex Tsakiris continues to unwittingly provide excellent examples of how not to do science.

May 29, 2010

Skeptiko Host Alex Tsakiris: On the Non-Scientifically Trained Trying to Do/Understand Science


Preamble

First, let me give one announcement for folks who may read this blog regularly (hi Karl!). This may be my last post for about a month or so. As you may remember from my last post, I will be teaching all next month, June 1 through July 2, and the class is every day for 95 minutes. I have no idea how much free time I may have to do a blog post, and I have some other projects I need to finish up before the end of the month (I’m also a photographer and I had a bride finally get back to me about photos she wants finished).

Introduction

I have posted once before about Skeptiko podcast host Alex Tsakiris, in my post about The Importance of Peer-Review in Science. The purpose of that post was primarily to show that peer review is an important part of the scientific process, contrary to what the host of said podcast had claimed.

Now for the official disclaimer on this post: I do not know if Alex is a trained scientist. Based on what he has stated on his podcast, my conclusion is that he is not. What I have read of his background (something like “successful software entrepreneur,” or along those lines) supports that conclusion. However, I don’t want to be called out for libel just in case, and so that is my disclaimer.

Also, I am not using this post to say whether I think near-death experiences are a materialistic phenomenon or point to a mind-brain duality (mind/consciousness can exist separately from brain). That is NOT the point of this post and I am unqualified to speak with any authority on the subject (something I think Alex needs to admit more often).

Anyway, I just completed listening to the rather long Skeptiko episode #105 on near-death experiences with Skeptics’ Guide to the Universe host Dr. Steven Novella (see Points 2 and 3 below for that “Dr.” point). I want to use that episode to make a few points about how science is done that an (apparently) non-scientifically-trained person will miss. This post is not meant as a dig/diss against so-called “citizen science”; rather, it is about the pitfalls of which non-scientists should be aware when trying to investigate pretty much ANY kind of science.

Point 1: Conclusions Are Not Data

Many times during the episode’s main interview, and after the interview in the “follow-up,” Alex would talk about a paper’s conclusions. “The researchers said …” was a frequent refrain, or “In the paper’s conclusions …” or even “The conclusions in the Abstract …” I may be remembering incorrectly (perhaps someone will point that out), but I do not recall any case where Alex instead stated, “The data in this paper objectively show [this], therefore we can conclude [that].”

This is a subtle difference. Those of you who are not scientifically trained (or have not listened to Steve’s interview on the episode) may not notice that there is an important (though subtle) difference there. The difference is that the data are what scientists use to reach their conclusions. A conclusion may be wrong. It may be right. It may be partially wrong and partially right (as shown later on with more studies … more data). Hopefully, if there was no academic fraud, intellectual dishonesty, or faulty workmanship (data-gathering methods), the actual data themselves will NEVER be wrong, just the conclusions drawn from them. In almost any paper — at least in the fields with which I am familiar — the quick one-line conclusions may be what people take away and remember, but it’s the actual data that will outlive that paper and that other researchers will look at when trying to replicate it, use it in a graduate classroom, or argue against it.

I will provide two examples here, both from my own research. The first is from a paper that I just submitted on using small, 10s-to-100s-of-meters-sized craters on Mars to determine the chronology of the last episodes of volcanism on the planet. When I was doing the work, there were only one or two people who had studied this previously, and so they were obviously discussed in my own paper. Many times I reached the same conclusion as they did in terms of the ages of some of the volcanoes, but several times I did not. In those cases, I went back to their data to try to figure out where and why we disagreed. It wasn’t enough just to say, “I got an age of x, she got an age of y, we disagree.” I had to look through and figure out why: whether we had the same data and our interpretations differed, or whether our actual data differed.

The second example, which is a little better than the first, is a paper I wrote back in 2008 that was finally published in a special edition of the journal Icarus in April 2010 (one of the two main planetary science journals). The paper was on simulations I did of Saturn’s rings in an attempt to determine the minimum mass of the rings (which is not known). My conclusion is that the minimum mass is about 2x the mass inferred from the old Voyager data. That conclusion is what will be used in classrooms, what I have already seen used in other people’s presentations, and what I say at conferences. However, people who do research on the rings have my paper open to the data sections, and I emphasize the “s” because the data sections (plural) span about 1/2 the paper, the methods section spans about 1/3, and the conclusions are closer to 1/6. When I was doing the simulations, I worked from the data sections of previous papers. It’s the data that matter when looking at these things, NOT an individual (set of) author(s).

Finally for this point, I will acknowledge that Alex often repeats something along the lines of, “I just want to go where the data takes us.” However, saying that and then only reading a paper’s conclusions are not compatible. Steve pointed that out at least twice during the interview. At one point in the middle, he exclaimed (paraphrasing), “Alex, I don’t care what the authors conclude in that study! I’m looking at their data and I don’t think the data supports their conclusions.”

Point 2: Argument from Authority Is Not Scientific Consensus

In my series on logical fallacies, which I got about half-way through at the end of last year, I specifically avoided doing the Argument from Authority because I needed to spend more time on it versus the Scientific Consensus. I still intend to do a post on that, but until then, this is the basic run-down: the Argument from Authority is the logical fallacy whereby someone effectively states, “Dr. [so-and-so], who has a Ph.D. in this and is well-credentialed and knows what they’re doing, says [this], therefore it’s true/real.”

If any of my readers have listened to Skeptiko, you are very likely familiar with this argument … Alex uses it in practically EVERY episode, MULTIPLE times. He will often present someone’s argument as being from a “well-credentialed scientist” or from someone who “knows what they’re doing.” This bugs the — well, this is a PG blog so I’ll just say it bugs me to no end. JUST BECAUSE SOMEONE HAS A PH.D. DOES NOT MEAN THEY KNOW WHAT THEY’RE DOING. JUST BECAUSE SOMEONE HAS DONE RESEARCH AND/OR PUBLISHED DATA DOES NOT MEAN THEIR CONCLUSIONS ARE CORRECT OR THAT THEY GATHERED THEIR DATA CORRECTLY.

Okay, sorry for going all CAPS on you, but that really cannot be said enough. And Alex seems to simply, plainly, and obviously not understand that. It is clear if you listen to practically any episode of his podcast, especially during any of the “psychic dogs” episodes or “global consciousness” ones. It was also used several times in #105, including one where he explicitly stated that a person was well-credentialed and therefore knows what they’re doing.

Now, very briefly, a single argument from someone does not a scientific consensus make. I think that’s an obvious point, and Steve made it several times during the interview: there is no consensus on this issue, and individual arguments from authority are just that – arguments from authority – and you need to look at the data and methods before deciding for yourself whether you objectively agree with the conclusions.

Edited to Add: I have since written a lengthy post on the argument from authority versus scientific consensus that I highly recommend people read.

Point 3: Going to Amazon, Searching for Books, to Find Interview Guests

Okay, I’ll admit this has little to do with the scientific process on its face, but it illustrates two points. First, Alex doesn’t seem to understand the purpose/point of the scientific literature, and second, bypassing the literature and doing science by popular press is one of the worst ways to do science and one that strikes many “real” scientists as very disingenuous. I’ll explain …

First, I will again reference my post, “The Importance of Peer-Review in Science.” Fairly self-explanatory on the title, and I will now assume that you’re familiar with its arguments. In fact, I just re-read it (and I have since had my own issues fighting with a reviewer on a paper before the journal editor finally just said “enough” and took my side).

To set the stage, Alex claims in the episode:

“Again, my methodology, just so you don’t think I’m stacking the deck, is really simple. I just go to Amazon and I search for anesthesia books and I just start emailing folks until one of them responds.”

As I explained, peer-reviewed papers are picked apart by people who study the same thing as you do and are familiar with other work in the area. A book is not. A book is read by the publishing company’s editor(s) – unless it’s self-published in which case it’s not even read by someone else – and then it’s printed. There is generally absolutely zero peer-review for books, and so Alex going to Amazon.com to find someone who’s “written” on the subject of near-death experiences will not get an accurate sampling. It will get a sampling of people who believe that near-death experiences show mind-brain duality because …

Published books on a fringe “science” topic are generally written by people who have been wholeheartedly rejected by the scientific community – for their methods, their data-gathering techniques, and/or for conclusions that are not supported by their data. But they continue to believe (yes, I use the word “believe” here for a reason) that their interpretations, methods, etc. are correct. So instead of learning from the peer-review process – tightening their methods, bringing in other results, and looking at their data in light of everything else that’s been done – they publish a book that simply bypasses the last few steps of the scientific process.

Not to bring in politics, but from a strictly objective point of view, this is what George W. Bush did with the US’s “missile defense” system. Test after test failed and showed it didn’t work. Rather than going back, trying to fix things, and testing again, he just decided to build the thing and stop testing.

Point 4: Confusing a Class of Outcomes with a Single Cause

This was more my interpretation of what Alex did in the interview and something Steve pointed out at many points; it is less generalizable to the scientific process, but it applies nonetheless.

Say, in cooking, you serve up a pizza. The pizza is the “class of experiences” here, analogous to the collection of things that make up the near-death experience (NDE). The toppings of your pizza are the individual experiences of the NDE. Pizzas will usually have cheese; NDEs will usually have a sense of well-being. Pizzas may more rarely have onions; NDEs may more rarely have a white-light tunnel associated with them. You get the idea.

Now, from the impression I got, Alex seemed to claim throughout the episode that there is only one way to make a pizza, i.e., only one thing that can produce the class of experiences we call an NDE. Steve argued that there are many different ways to make a pizza, and that all of those different techniques will in general lead to something that looks like a pizza.
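To make the analogy a little more concrete, here is a minimal sketch. The cause names and the extra features (“detachment,” “life review”) are purely hypothetical illustrations – they are not data from the episode or from the NDE literature. The only point is that several different “recipes” can each produce most of the same bundle of features:

# Purely illustrative: hypothetical causes and features, not real data.
# Several different "recipes" can yield most of the same class of experience,
# just as different cooking methods all yield something that looks like a pizza.

NDE_CLASS = {"sense of well-being", "white-light tunnel", "detachment", "life review"}

HYPOTHETICAL_CAUSES = {
    "anoxia": {"sense of well-being", "white-light tunnel", "detachment"},
    "anesthesia": {"sense of well-being", "detachment"},
    "temporal-lobe activity": {"sense of well-being", "white-light tunnel", "life review"},
}

for cause, features in HYPOTHETICAL_CAUSES.items():
    overlap = features & NDE_CLASS          # features of the class this cause reproduces
    print(f"{cause}: {len(overlap)}/{len(NDE_CLASS)} of the core features")

Each made-up cause reproduces most of the class, which is all the pizza analogy is meant to capture.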

Point 5: Steve’s a Neurologist, Alex Is Not

I need to say before I explain this point that I am NOT trying to say that you need a Ph.D. in the topic to do real science. I do not in ANY WAY mean to imply that science is an elitist thing where only people “in the club” can participate.

That said, I really am amazed by Alex arguing against people who have actually studied the subject for decades. If you are a non-scientist, or even if you are a scientist but have not studied the topic at hand (like, gee, me talking about near-death experiences while I’m an astrophysicist/geophysicist), then you need to make darn sure that you know what the heck you’re talking about. And you need to be humble enough that, when the person who has actually studied this says you’ve made a mistake, you take that very seriously and look again at what you thought was going on. The probability that you, rather than the expert in the field, have made a mistake or misunderstood something is fairly high.

Again, this is not my attempt to backtrack and commit an argument from authority fallacy myself. However, there is a difference between fallaciously making an argument from authority and listening to what an authority on the subject says, taking it into account, and re-examining your conclusions. It seriously amazes me how much Alex argued against Steve as if Alex were an expert in neurology. It caused him to simply miss many of the points and arguments Steve was making, as evidenced by Steve saying something and then needing to repeat his argument 20 minutes later because Alex had ignored it, buoyed as he was by his interviews with previous pro-duality guests.

Final Thoughts

As I’ve stated, the purpose of this post is not to discuss whether NDEs show a mind-brain duality or have a purely materialistic explanation. The purpose is to point out that the methods Alex uses are fallacious, and while I know that people have pointed this out to him before, it seems to have made very little impact on the way he argues. I believe this is in part due to confirmation bias – he has definitely made up his mind on whether psi-type phenomena exist. But I am also fairly sure that it’s because Alex lacks any kind of formal training in science. Because of that, he makes these kinds of mistakes – at least originally – without knowing any better. Now that it has been pointed out to him, I think it’s intellectually dishonest to keep making them, but again, that’s beyond the purpose of this post.

So, to wrap this all up, non-scientists take heed! Avoid making these kinds of mistakes when you try to do or to understand science yourself. Make sure that you look at the data, not just the conclusions from a paper. Don’t make arguments from authority. Remember that popular books are not the same as peer-reviewed literature. And keep in mind there can be (a) multiple explanations and (b) multiple ways to reach an end point.

January 22, 2009

The Purpose of Peer-Review in Science


Introduction

Many people outside of mainstream science – such as conspiracy theorists, psi researchers, UFOlogists, and others – seem to have a beef with the process of peer-review. And some mainstream scientists do, too. The purpose of this post is to address why we have peer-review, why it’s important, why science really does need it in order to meet its goals, and, to be fair, to address some of its weaknesses.

Why Am I Addressing This

It never really registered with me that fringe researchers would knock the peer-review system. It kinda went in one ear and out the other. Then I was listening to the December 22, 2008, episode of the podcast “Skeptiko” with Alex Tsakiris, in which he spends several minutes complaining about how mainstream scientists “do” science. One of his big complaints, and something that he called “stupid” (that’s a quote), was the embargo on releasing early results. He thinks that results should be released as they come in.

I made the following observations on an online forum:

Alex really seems to have no grasp of how science is actually done. At about 20 minutes into his last podcast, he states, “I want to break the traditional science rule about not talking about results until they’re published because, well, first of all, I think it’s a stupid rule [and he laughs] …” Results usually aren’t announced early for several reasons, not the least of which is that it hasn’t passed any peer review yet.

For example, I could do some ground-breaking research for a year and get this great result and then talk about it, or I could pass it by peers first only to have them discover that I’ve not accounted for some small factor that will dramatically reduce the significance.

Another reason is that preliminary results are just that – preliminary. One of my research projects at the moment is to generate a complete global database of Mars craters to ~1.5 km in diameter. I’ve done that now for about 30% of the planet. I could go ahead and release results and get more papers out of it, or I could wait until the whole thing is finished and I have all the statistics in place to back up my conclusions. This is especially necessary because Mars is not all the same, and craters from different regions have different properties, so me releasing early results that make broad conclusions could easily turn out to be fallacious once the entire project is done.

And as usual, Alex just seems to not get it. His results are going the way he thinks they should, so he’s releasing them early and claiming at least a cautious victory “so far.” This is also partly why I’m not a giant fan of his “open source science” — you really DO need training in science before you can do it “properly” — learning to take into account all these things that you may not otherwise, normally, think of.

That was really about releasing results early, and a little about peer-review.
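The crater example above is worth making concrete. Here is a minimal, purely illustrative sketch – the regions, the numbers, and the statistic are all made up for illustration and are not from the actual survey – of how a value computed from the portion of a heterogeneous planet mapped so far can differ from the eventual global value:

# Hypothetical numbers only: three "regions" with different mean crater properties.
import random

random.seed(0)

regions = {
    "northern lowlands": [random.gauss(0.05, 0.01) for _ in range(1000)],
    "southern highlands": [random.gauss(0.08, 0.01) for _ in range(1000)],
    "tharsis": [random.gauss(0.03, 0.01) for _ in range(1000)],
}

def mean(values):
    return sum(values) / len(values)

# "Early release": only one region (roughly a third of the survey) is finished.
early_result = mean(regions["southern highlands"])

# "Final result": all regions combined.
final_result = mean([v for vals in regions.values() for v in vals])

print(f"early (one region):  {early_result:.3f}")
print(f"final (all regions): {final_result:.3f}")

In this contrived setup the single-region value comes out roughly 50% higher than the eventual global mean, which is exactly the kind of broad early conclusion that “could easily turn out to be fallacious once the entire project is done.”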

Then, I was listening to the January 15, 2009, episode of Coast-to-Coast AM with Richard Hoagland. Among other things, he made the following statement: “You follow your curiosity, which is what science is supposed to be. It’s not supposed to be a club or a union or a pressure group that doesn’t want to get too far out of the box ’cause of what the other guys will think about you. … This concept of ‘peer review’ … is the thing which is killing science.”

It was with that line that I decided I should write this post.

Why We Have Peer-Review

Peer-review is important. The whole point of peer-review is that your findings – your data and conclusions – are subjected to the scrutiny of your peers.

To use a reductio ad absurdum: if we didn’t have this process, then whatever anyone says is basically dogma with no mechanism for rebuttal. For example, if there were no process of peer-review, I could say 2+2=1. You may say 2+2=5. And someone else may say 2+2=4. How would anyone know which is correct? The obvious answer in this contrived example is that everyone knows that 2+2=4. But how? Because you ask someone else, and they tell you? That’s peer-review.

In science, the purpose of peer-review is really just that. Your peers (other people who study what you study) look at your findings and make sure that, in their opinion, you followed the proper data-gathering methods (so you took 2 apples and 2 oranges and laid them down, as opposed to meditating and asking your spirit guide) and that you reached conclusions appropriate for the data you gathered (you then counted all the pieces of fruit and came up with 4, instead of your spirit guide saying that 2+2 is really 7).

The purpose of peer-review is really nothing more than that, and it is nothing less than that.

Why Science Needs Peer-Review

It is often said that science is “self-correcting” over time. What this means is that if science has led to erroneous conclusions that did pass peer-review at the time, the errors will ultimately be worked out, because the observations and analyses are repeated over and over again by others. A good example of this is gravity. Newton developed his Theory of Gravity. It was used for centuries. Repeated experiments showed it to be accurate.

But some of them didn’t. Some showed slight deviations (like Mercury’s orbit). Then another researcher came along (Einstein) and showed that Newton’s theory needed to be modified in order to account for ginormous masses and accelerations. Without the process of people reviewing predictions and measurements relevant to gravity, we would not know that Newton didn’t have the whole picture. And even today, a century later, people are still testing Einstein’s theories, making more and more measurements and subjecting them to the process of peer-review.
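To put a rough number on those “slight deviations”: general relativity predicts an extra advance of Mercury’s perihelion, beyond the Newtonian perturbations from the other planets, of approximately

\Delta\varphi \;\approx\; \frac{6\pi G M_{\odot}}{c^{2}\, a \,(1 - e^{2})} \quad \text{per orbit},

where a and e are the semi-major axis and eccentricity of Mercury’s orbit and M⊙ is the mass of the Sun. For Mercury this works out to about 43 arcseconds per century, which is the long-standing anomaly that Newton’s theory could not account for.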

Hoagland’s Claims

I am not saying that Hoagland’s claims are representative of every fringe researcher’s problem with peer-review, simply that they capture what I have observed to be the general complaint. It’s put fairly succinctly in this quote from Richard Hoagland, continuing from the one I quoted above:

“It’s not the peer review so much as the invisible, anonymous, peer-review. Basically, before a paper can get published, … you know you have to go through so many hurdles, and there’s so many chances for guys who have it ‘in for you,’ who don’t like you, or who don’t like the idea you’re trying to propose in a scientific publication, can basically … stick you in the back … and you never know!

“One of the tenants [sic] of the US Constitution … is that you have the right to confront your accuser. In the peer-review system, which has now been set up for science, … the scientist – which [sic] is basically on trial for an idea – because that’s what it is, by any other name it’s really a trial, is-is attacked by invisible accusers called ‘referees,’ who get a chance to shaft the idea, kill the idea, nix the paper, tell the editor of whatever journal, ‘Oh, this guy’s a total wacko …’ and you never have the opportunity to confront your accuser and demand that he be specific as to what he or she has found wrong with your idea.”

My Response to Hoagland

I don’t know what journals he’s talking about, but for all the ones I know of, his claims are wrong. Just as with the US court system, you have appeals in journals. If the first reviewer does not think your paper should get in, then you can ask the editor to get another opinion. You’re never sunk just because one reviewer doesn’t like you and/or your ideas.

As to the anonymity, while I personally don’t like it, it’s necessary. Without a referee having the ability to remain anonymous, they cannot always offer a candid opinion. They may be afraid of reprisals if they find errors (after all, grants are also awarded by peer-review). They may also not want to hurt someone’s feelings (as teenagers today are finding, it’s much easier to break up via Facebook or a txt message than in person — it’s the same with anonymity in peer-review). They may have their own work on the subject they think you should cite but don’t want to appear narcissistic in recommending it. In short, there are many very good reasons to remain anonymous to the author(s).

However, they are not anonymous to the editor or the editorial staff. If a reviewer consistently shoots down ideas because of a vested interest of their own, the editors will see that and remove the reviewer.

I also want to point out something my officemate is fond of saying: “Science is not a democracy, it’s a meritocracy.” Not every idea deserves equal footing. If I come up with a new idea that explains the universe as being created by a giant potato with its all-seeing eyes (Dinosaurs fans, anyone?), then my new, made-up idea should not deserve equal footing with the ones that are backed up by centuries of separate, independent evidence. The latter have earned their place; the former has not.

That is something that most fringe researchers seem to fail to grasp: until they have indisputable evidence for their own ideas that cannot be otherwise easily explained by the current paradigm, they should not necessarily be granted equal footing. Hoagland’s pareidolia of faces on Mars does not deserve an equal place next to descriptions of the martian atmosphere.

The Cons of Peer-Review

There are bad points to peer-review, though they really only arise when the system is abused. A faculty member at my undergraduate institution likes to tell the story of a young astronomer who submitted a paper about the value of the Hubble Constant (a measure of how rapidly the universe is expanding). The paper was sent to a reviewer who had his own ideas on the subject, and the young astronomer’s were not the same as his. So he sat on the paper. He wrote a rebuttal to it. And he had the rebuttal published before he even got around to reviewing her paper.

That is an abuse of the system. I think every scientist would admit that, and we strive not to be “that person.” After all, “that person” is now fairly well blacklisted from polite astronomy society, and, as I’ve just done, people talk behind his back about how crummy he was.

In the vast sea of peer-review, however, there are just a few drops of “those people.” Most reviewer comments are helpful. Reviewers usually think of things you didn’t, and their comments only serve to make your results stronger.

Final Thoughts

The process of peer-review in science is an old one, and it is important to the essence of what science is and what it is supposed to do. If someone continually complains about it, the first thing you should do is ask yourself what the motivation behind their ideas may be. Is it because they happened to get burned by one reviewer? Or is it perhaps because their ideas really don’t pass scientific muster, don’t fit with every other observation, and require an extraordinary new premise to be true without sufficient evidence to back it up?
