Exposing PseudoAstronomy

March 9, 2018

Even Science Reporters Are Circumventing Scientific Process


I study impact craters (those circle thingies on other planets, moons, asteroids, comets, etc.). A colleague recently pointed out a manuscript to me that demonstrated a new method to do something with craters. (I’m being purposely vague here to protect the situation.) It was an interesting manuscript, but it was submitted to an open archive (arxiv.org) where anyone can submit pretty much anything that seems sciencey. It has not been through the peer-review process.

Peer-review is not perfect. I’ve written about it before on this blog and discussed it on my podcast. But the purpose of peer-review is to weed out stuff that is obviously wrong: things that may seem fine to a general researcher but that clearly have issues to someone who really knows the field. Other purposes of peer-review are to make sure the work is placed in proper context (usually by citing the reviewers’ works, but that’s a separate issue), to make sure that the authors of the manuscript have explained themselves well, that their methods make sense, that they have explored alternative interpretations of their data, etc. In other words, to do science “right.” Where “right” is in quotes because there is no formal set of rules by which one must play, but there are general guidelines and important pillars which people should uphold.

After it passes peer-review – if it passes peer-review – then it may be accepted by a journal and published. Some stuff that gets through peer-review is great. Some stuff is utter crap because the process isn’t perfect and because we don’t know everything, and the prevailing scientific opinion can shift with new information.

That is upended in today’s cut-throat world of journalism and a desire to be the first to publish about something that seems new and interesting.

I was contacted yesterday by a freelance reporter for the publication New Scientist. I’m not going to say the reporter’s name, but I have no qualms stating the publication. The reporter, coincidentally, wanted me to comment on the manuscript that had been submitted to arxiv.org. I refused. Here is what I wrote:

Thank you for writing. I am generally happy to comment about crater papers, and I would be happy to comment on this manuscript should it be accepted by the peer-review process. My concern at the moment is that the manuscript is only on an open server to which anyone can submit and it has not been vetted by researchers in the field beyond the authors themselves. The authors also used [specifics redacted] which have some significant omissions, and how that affects their results needs to be assessed by people who know all the ins and outs of their methods, which is not me.

I strongly recommend that you refrain from publishing about this work until it has made it through the peer-review process. It is easy to get excited about new techniques, but at the moment, it has not been vetted by other experts in the field, such that I think writing about it now is premature.

The reporter responded that I had a valid concern, he appreciated my advice, and he would discuss it with his editor.

Then just a few minutes ago, I heard from another friend in the field that she had been asked to comment for the story. She is taking a similar approach, which I greatly appreciate.

But this identifies, to me, a significant problem that those in both the scientific community and skeptic community have pointed out for years: Journalists don’t seem to care about vetting the science about which they write. Now, this could be an isolated example of an over-zealous reporter given the “OK” by their editor. Except it’s not. Too often we see articles about work just at the very edge of the field that offers great marvels and promises, only to hear nothing more from it because it was all based on extraordinarily preliminary efforts. Craters aren’t going to affect your daily life. But the issue here is a symptom of a greater problem. And I think that only if scientists and the reading public demand that reporters stop doing this will we see any sort of change.


November 21, 2013

Podcast Episode 93: The Importance, Methods, and Faults of Peer Review


How work is reviewed
Within the fields of science …
Versus pseudoscience.

This one’s an unconventional episode where I talk about one of the most basic ideas and processes in science: That of Peer Review.

As I gear up to do an episode every few days in prep for my trip to Australia (Dec. 16 – Jan. 21), more of these different kinds of episodes will be coming up, as will episodes with just the main segment and without the other ones like Q&A and Puzzler. I’m also planning/conducting a lot of interviews to make putting out episodes over those ~5 weeks easier on me. There’s also still the reminder to let me know (if you haven’t yet) if you’re interested in participating in the 100th episode spectacular. To do so, you should have a decent microphone and be able to ad lib and come up with crazy ideas.

February 13, 2013

The Peer-Review of Bigfoot


Introduction

Today, after a very long-awaited process, forensic DNA analyst Melba Ketchum released the results of her work that allegedly prove Bigfoot exists, that it is a species roughly 15,000 years old, and that it resulted from the interbreeding of a human with an unknown primate at that time.

There are numerous people talking about this in the skeptical underworld … I recommend the Doubtful News story, JREF forum thread, and/or MonsterTalk Facebook page.

Clearly from the title of this blog, I am not a biologist, forensic anthropologist, geneticist, nor anything related. And people on those threads I just linked to are covering details of this such that much of anything I say would just be redundant. However, I have talked about the peer-review process on this blog before (mainly here, but also here, here, here, here, and here). And Melba Ketchum’s “publication” of her results is another good example to illustrate the purpose of peer-review and to point out that all because someone publishes something in a “science journal,” it does not mean it’s good science.

Edited to Add (2/14/2013): Some zoologists who have read the paper have chimed in, indicating that this paper is not of good quality nor up to general academic standards.

The Requisite Background

To make a long story short that you can read in much more detail at any of those first three links, Dr. Melba Ketchum received several samples of biological material (hairs mostly, I think), several years ago. After alleged detailed DNA analysis, they proved to themselves it was Bigfoot material. They wrote up a paper for a scientific journal – which is what you’re supposed to do in mainstream science – and submitted it for peer review (the process where people who do similar types of work look over the paper and try to figure out if there are problems with it).

As the story goes in this drastically shortened narrative, this was all under wraps until November of 2012 when it was leaked out by some overseas colleague (I want to say Russian? but I don’t entirely remember). This forced Dr. Ketchum to go somewhat public with it and face intense media scrutiny.

I listened to her for a full Coast to Coast AM show back on December 23, 2012, where she was on the defensive and offensive. In listening to her, I actually felt sorry for her and decided to reserve judgement to see what would happen if her results were actually published.

And that’s what happened today.

Publication

The problem is, it’s not in a typical peer-reviewed journal. It’s not even in a science journal that has any track record. She published in the “DeNovo Scientific Journal.” Sounds okay at first …

… except that the domain was purchased anonymously 9 days ago for a period of one year. And this is the only paper that the journal has put out. And in fact, they admit that when other journals would not publish their results, they went out, bought a journal, renamed it, and published their paper.

That is not peer-review. This is like a case where your spouse won’t do something you want them to do, so you go and build a robot spouse that you program to do that thing.

As I said, I felt sorry for her and I was willing to give Melba the benefit of the doubt. This, however, removes all pretense of an attempt at having people look at this work and judge it objectively and go back and fix mistakes that were pointed out.

She also apparently does not understand the concept of “open access” (meaning free) because it costs $30 to view the paper.

Other Signs

There are many other signs of a lack of any validity here. One is that, earlier today, the journal’s website was using stock photos from websites without any of the required attribution. Those photos are gone now, a few hours later, but other stock photos are still present without attribution (though maybe those were paid for, but most licenses still require posting attribution).

Another is that on the Contact Us page, the name “Robin Haynes” appeared earlier today, but it’s missing now (but visible at the moment in Google’s cached version). There is fairly good evidence that this is the renamed Robin Lynn Pheifer, who has gone by a few different names, and is a woman in Michigan who claimed to have 10 bigfoots on her 10-acre property to whom she would repeatedly feed blueberry bagels.

Another is that people have started to contact the co-authors to see if they actually participated in the paper. Of the two who have responded, one said that he did no analysis nor writing of the paper (though was aware of it), while another hasn’t seen any recent version and could not extract any DNA from the samples he had tested.

Final Thoughts

I’m sure this is going to continue to get very detailed scrutiny over the next several days. The problem is that at this point, almost regardless of what is determined, this move to create one’s own journal and call it peer-reviewed (and scientific — after all, “Scientific” is in the title!) is a gross violation of the terms and process. It’s worse than Answers in Genesis having their own “Creation” journal because at least they are clear about what it really is. And at least they use stock images with proper attribution.

Peer-review is not a perfect process. But it’s the best we have. Invoking the Galileo complex (which she did) and then making your own publication only serves to further polarize people: Detractors will use this as fodder to point out that you’ve got nothin’, and people who already supported you already think there’s a vast conspiracy to keep them down.

September 1, 2011

Logical Fallacies: Argument from Authority versus the Scientific Consensus


Introduction

I haven’t done a post in almost two years to add to my very incomplete series on logical fallacies and fallacious argument techniques. However, due to recent posts – especially in the comments section – on my blog, I thought this would be a good time to re-visit the specific and very common logical fallacy of the “argument from authority,” and I want to then contrast that against the “scientific consensus.” They are not the same thing.

In actuality, I have addressed this difference before, albeit in the very early days of my blog, and I want to pull out more specific examples and be more explicit this time.

The Argument from Authority

The argument from authority is really a very simple logical fallacy to spot: Person A has seeming authority in some subject, therefore Person B needs to believe what they say.

An example from the Apollo Moon Hoax lexicon is that David Groves, Ph.D. (the authority) showed in a study that the radiation experienced by astronauts would have rendered their photographic film damaged beyond repair (exposed) so they could not have possibly taken the pictures that NASA claims. He has a Ph.D., therefore he’s right. Except, not. His study did not use the same camera, film, nor shielding that NASA did. He exposed the film to 1000 times the strength of radiation for 100 times as long (effectively). Not exactly a valid experiment to demonstrate what is claimed.
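To see just how far off that is, here is a rough back-of-the-envelope comparison (my own simplification, assuming the total dose scales linearly with both intensity and exposure time):

$$\frac{D_{\text{test}}}{D_{\text{Apollo}}} \approx \frac{\Phi_{\text{test}}}{\Phi_{\text{Apollo}}} \times \frac{t_{\text{test}}}{t_{\text{Apollo}}} \approx 1000 \times 100 = 10^{5},$$

i.e., roughly a hundred thousand times the dose the actual Apollo film would have received, which is why a fogged test roll tells us nothing about the real missions.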

Another example, courtesy of Answers in Genesis, is that they have a Ph.D. astrophysicist on staff, “Dr. Jason Lisle, Ph.D.” Yes, his Ph.D. is valid, it was in an actual science field, and he graduated a year before I entered grad school from the same department where I got my degree. Does that make his creationist writings any more valid than a guy ranting on the street? No. Does that make his claim that the fact we can argue with logic means the Bible is true (yes, he does claim that) any more valid? No.

Or, to use a contrived example from my first post on this subject, I could make the claim that Dr. Crusher is an expert on human anatomy. The fallacy then goes that if I were to say to my friends that Dr. Crusher says the neck bone is connected to the foot bone, then it must be true because she’s an expert in that field. But, obviously this is not true. In other words, the validity of the claim does not follow from the credibility of the source.

Other Examples of Argument from Authority

Isaac Newton: One of my favorite examples of the argument from authority is that of Isaac Newton. By pretty much any account and all measures, Newton was the founder of modern physics and mathematics. He didn’t just codify calculus, gravity, and motion, but also optics. He truly is one of the most important and most authoritative people in modern science. If anyone is an authority, he is.

But then, Newton was a fervent believer in alchemy. He thought that you could turn ordinary, cheap metals (for example) into more valuable ones like gold if you combined them with the right chemicals. He pursued this as much as he pursued figuring out why we have tides.

If Newton were alive today, I would likely believe anything he said about physics (at least classical physics). But alchemy? No. I’d call him out on that pseudoscience just as much as I call out Terry Nazon on her made-up astrology. It doesn’t matter if he is revered and respected — individual arguments from authority are a logical fallacy for a reason, and citing an individual who claims one thing that does not make sense given what we know about the universe is as bad an argument as “’cause I said so, that’s why!”

Dr. Richard B. Hoover, Ph.D. from NASA: First reported widely on FOX news in early March, 2011, Dr. Richard B. Hoover, “an astrobiologist with NASA’s Marshall Space Flight Center,” announced that he had found evidence of life in a meteorite. He published his findings in the “peer-reviewed” Journal of Cosmology. This was very quickly torn apart by most scientists in the field and in related fields, where we (yes, I participated) pointed out that he was seeing pareidolia shapes in rocks, that his findings were not verified nor replicable by his peers in the field, and that the Journal of Cosmology is one of the crackpot “journals” in astronomy.

JoC is a fringe journal at best. To quote PZ Myers, “it isn’t a real science journal at all, but is the ginned-up website of a small group of crank academics obsessed with the idea of Hoyle and Wickramasinghe that life originated in outer space and simply rained down on Earth.” In response to Hoover’s paper, it contacted the editors of Science and Nature to put together a panel of experts to evaluate the claims. Then it stated, “any refusal to cooperate, no matter what the excuse [will be] vindication for the Journal of Cosmology and the Hoover paper, and an acknowledgment that the editorial policies of the Journal of Cosmology are beyond reproach.” With that, they clearly cross into the tactics used by many pseudoscientists whereby either (a) they wear out the critics to the point the critics just don’t care anymore, or (b) the critics never cared enough in the first place to dignify the original challenge because it was so fringe to begin with.

With that said, the JoC’s editorial board is made up of seven Ph.D.s, one who is the director of the center for astrobiology at Cardiff, one from NASA JPL, one who is the senior research scientist in the science directorate at NASA Langley, and another who is the head of the department of computer science at Oklahoma State University. Seems “highly qualified.” But this is another example of the credentials of the few people who put together a journal being used as an argument from authority. I actually looked up one of the Ph.D.s because he is in my former department here at CU-Boulder. Looking further into him, there’s really nothing to find other than that he’s emeritus faculty — basically retired but still hangs around. His personal website was last updated in 2001.

So we have another case where all because someone is a NASA scientist, all because someone is a department chair, all because someone is a center director, it does not mean that all of their claims can be taken as true.

Similarly, if you can convince a NASA scientist, an imaging professional, someone at the CDC, someone who runs the computers for a major NASA mission, or someone who builds spacecraft that your particular claim is true, that does not mean that everyone else needs to believe it.

My 8th Grade Science Teacher: We started out 8th grade science with going around the room and saying what our parents did for a living. The teacher then told us that he used to work in the local hospital. For some reason, that seemed to convey some authority at the time. In hindsight, I think he was trying to make himself feel good.

That authority quickly vanished during our astronomy unit when he explained to us that the moon was three times farther away from Earth than the sun, a kilometer is longer than a mile, and that to stop a space ship in space you shut off the engines and wait for it to wander near a planet and have the planet’s gravity slow you down. After some checking, his job at the local hospital turned out to be in security. Obviously, this was a case where a stated authority (working at a hospital) and a presumed authority (being the teacher) could not mask gross incompetence.

Scientific Consensus: NOT An Argument from Authority

In contrast, the scientific consensus is not an argument from authority. There are a couple of ways to think about this. The most basic and concise is that the scientific consensus is not based on an individual’s or small group’s credibility.

A more lengthy way to think about this is that the scientific community is convinced by evidence, not by individual charisma nor authority. I’ve said it many, many times before in this blog, and I’ve written at least a whole post on it, that contrary to seemingly popular opinion, scientists want to create new paradigms. They want to be able to convince their colleagues and detractors that they are correct. Upholding the status quo means you are guaranteed to be forgotten. And, the only way you are going to convince everyone that you are correct is to provide them with overwhelmingly convincing evidence and to show that your new model/idea explains all of the evidence that the previous one did at least as well, if not better.

Once this is done, the people who are experts in the field will be convinced. They can then go out and convince others in related fields that this is the actual way things work. Again — it’s not an appeal to authority; they are convincing people with the evidence. This process continues to trickle throughout the scientific community until there is a broad consensus on that issue.

By that point, what is a lay person to do? Should they trust Dr. Linus Pauling, a two-time Nobel laureate who claimed that high doses of Vitamin C basically prevented almost all illnesses and cured many diseases, including cancer? Or should they trust the scientific consensus – a group of tens of thousands of medical professionals who have read and been convinced by the research – that Pauling was deluded?

I’m not saying that you should trust the consensus view blindly. Try to understand it. Understand why the consensus is what it is. What is the evidence that has convinced everyone? At that point, if you still think they may be wrong, then figure out why the consensus view is not convinced by the evidence that convinces you. It is highly likely that you are misunderstanding something, not the thousands of people who have spent their lives studying the issue.

The Scientific Consensus is Not Infallible

That all being said, scientists will usually be the first (as in, not the last) to admit that the consensus is fallible and that their views can be changed by the evidence. That is how new paradigms happen. Plate tectonic theory was laughed at for about two decades before overwhelming evidence for it was presented that changed the entire consensus opinion within just a few years. The same was true with the death of the dinosaurs — there were many different hypotheses out there but when the iridium layer was found at the K/T boundary and the crater was finally discovered off the Yucatan peninsula, the scientific consensus changed very rapidly in light of the evidence.

Certain scientific paradigms/consensuses (according to spell-check, that is the plural of “consensus” even though it sounds wrong, but who am I to argue with spell-check?) that we hold now could very likely change in the future. What is unlikely, though, is for them to change to something for which there is currently very convincing evidence that it is not the case. An example of this would be astrology – there is absolutely no mechanism for it to work, and all statistically robust studies show that it fails to produce results better than chance.
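As an aside, here is a minimal sketch (mine, not drawn from any particular study) of the kind of check those statistically robust studies boil down to: an exact one-sided binomial test of whether a set of predictions beats random guessing. Every number in it is invented purely for illustration.

```python
# Exact one-sided binomial test: could this hit rate plausibly be chance?
# All numbers below are made up for illustration.
from math import comb

def binomial_p_value(n_correct: int, n_trials: int, p_chance: float) -> float:
    """P(X >= n_correct) if each trial independently succeeds with probability p_chance."""
    return sum(
        comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
        for k in range(n_correct, n_trials + 1)
    )

# Hypothetical example: 300 "match the person to their sun sign" trials,
# each with a 1-in-12 chance of succeeding by pure guessing.
n_trials, n_correct, p_chance = 300, 29, 1 / 12
p = binomial_p_value(n_correct, n_trials, p_chance)
print(f"Expected by chance: {n_trials * p_chance:.1f}, observed: {n_correct}, p = {p:.2f}")
# A large p-value means the hit rate is entirely consistent with guessing.
```

A hit rate only counts as better than chance when that p-value stays small across well-controlled trials, and that is the bar astrology keeps failing to clear.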

Final Thoughts

In the end, the argument from authority is quite an easy logical fallacy to spot. Differentiating it from the scientific consensus is not as easy, and understanding the difference between the fallacious argument from authority and the non-fallacious scientific consensus is even harder. Steve Novella has a post on this topic from about a year ago, and I recommend reading it if you’re still a bit confused about the difference.

What should also be re-emphasized is that you should never take anything on blind faith/authority. If you hear an argument from authority, investigate the claim. If you hear a scientific consensus that you disagree with, first understand the evidence that convinced the scientists, and then figure out why you disagree. If you think you have solid evidence to the contrary, it has not been shown to be wrong, and your model can explain all of the data that the currently accepted model does at least as well, then present it and try to convince them. But also be humble enough to realize that when people within that scientific community point out that the evidence that convinces you is wrong, it probably is wrong. At the very least, you should admit that people disagree with you and find faults because of [insert reason].

That’s what scientists do, too.

May 29, 2010

Skeptiko Host Alex Tsakiris: On the Non-Scientifically Trained Trying to Do/Understand Science


Preamble

First, let me give one announcement for folks who may read this blog regularly (hi Karl!). This may be my last post for about a month or so. As you may remember from my last post, I will be teaching all next month, June 1 through July 2, and the class is every day for 95 minutes. I have no idea how much free time I may have to do a blog post, and I have some other projects I need to finish up before the end of the month (I’m also a photographer and I had a bride finally get back to me about photos she wants finished).

Introduction

I have posted once before about Skeptiko podcast host Alex Tsakiris in my post about The Importance of Peer-Review in Science. The purpose of that post was primarily to show that peer review is an important part of the scientific process, contrary to what the host of said podcast had claimed.

Now for the official disclaimer on this post: I do not know if Alex is a trained scientist. Based on what he has stated on his podcast, my conclusion is that he is not. What I have read of his background (something like “successful software entrepreneur” or along those lines) supports that conclusion. However, I don’t want to be called out for libel just in case, and so that is my disclaimer.

Also, I am not using this post to say whether I think near-death experiences are a materialistic phenomenon or point to a mind-brain duality (mind/consciousness can exist separately from brain). That is NOT the point of this post and I am unqualified to speak with any authority on the subject (something I think Alex needs to admit more often).

Anyway, I just completed listening to the rather long Skeptiko episode #105 on near-death experiences with Skeptics’ Guide to the Universe host Dr. Steven Novella (see Points 2 and 3 below for that “Dr.” point). I want to use that episode to make a few points about how science is done that an (apparently) non-scientifically-trained person will miss. This post is not meant to be a dig/diss against so-called “citizen science,” but rather about the pitfalls of which non-scientists should be aware when trying to investigate pretty much ANY kind of science.

Point 1: Conclusions Are Not Data

Many times during the episode’s main interview and after the interview in the “follow-up,” Alex would talk about a paper’s conclusions. “The researchers said …” was a frequent refrain, or “In the paper’s conclusions …” or even “The conclusions in the Abstract …” I may be remembering incorrectly, perhaps someone may point that out, but I do not recall any case where Alex instead stated, “The data in this paper objectively show [this], therefore we can conclude [that].”

This is a subtle difference. Those of you who may not be scientifically trained (or who haven’t listened to Steve’s interview on the episode) may not notice that there is an important (though subtle) difference there. The difference is that the data are what scientists use to make their conclusion. A conclusion may be wrong. It may be right. It may be partially wrong and partially right (as shown later on with more studies … more data). Hopefully, if there was no academic fraud, intellectual dishonesty, or faulty workmanship (data-gathering methods), the actual data will NEVER be wrong, just the conclusions drawn from it. In almost any paper — at least in the fields with which I am familiar — the quick one-line conclusions may be what people take away and remember, but it’s the actual data that will outlive that paper and that other researchers will look at when trying to replicate, use in a graduate classroom, or argue against.

I will provide two examples here, both from my own research. The first is from a paper that I just submitted on using small (tens to hundreds of meters across) craters on Mars to determine the chronology of the last episodes of volcanism on the planet. In doing the work, there were only one or two people who had studied it previously, and so their work was obviously discussed in my own paper. Many times I reached the same conclusion as they did in terms of the ages of some of the volcanoes, but several times I did not. In those cases, I went back to their data to try to figure out where/why we disagreed. It wasn’t enough just to say, “I got an age of x, she got an age of y, we disagree.” I had to look through and figure out why, and whether we had the same data results and if so why our interpretations differed, or if our actual data differed.

The second example, which illustrates this a little better than the first, is a paper I wrote back in 2008 that was finally published in a special edition of the journal Icarus in April 2010 (one of the two main planetary science journals). The paper was on simulations I did of Saturn’s rings in an attempt to determine the minimum mass of the rings (which is not known). My conclusion is that the minimum mass is about 2x the mass inferred from the old Voyager data. That conclusion is what will be used in classrooms, what I have already seen used in other people’s presentations, and what I say at conferences. However, people who do research on the rings have my paper open to the data sections, and I emphasize the “s” because in the paper, the data sections (plural) span about 1/2 the paper, the methods section spans about 1/3, and the conclusions are closer to 1/6. When I was doing the simulations, I worked from the data sections of previous papers. It’s the data that matters when looking at these things, NOT an individual (set of) author(s).

Finally for this point, I will acknowledge that Alex often repeats something along the lines of, “I just want to go where the data takes us.” However, saying that and then reading only a paper’s conclusions are not mutually compatible. Steve pointed that out at least twice during the interview. At one point in the middle, he exclaimed (paraphrasing), “Alex, I don’t care what the authors conclude in that study! I’m looking at their data and I don’t think the data supports their conclusions.”

Point 2: Argument from Authority Is Not Scientific Consensus

In my series that I got about half-way through at the end of last year on logical fallacies, I specifically avoided doing Argument from Authority because I needed to spend more time on it versus the Scientific Consensus. I still intend to do a post on that, but until then, this is the basic run-down: Argument from Authority is the logical fallacy whereby someone effectively states, “Dr. [so-and-so], who has a Ph.D. in this and is well-credentialed and knows what they’re doing, says [this], therefore it’s true/real.”

If any of my readers have listened to Skeptiko, you are very likely familiar with this argument … Alex uses it in practically EVERY episode MULTIPLE times. He will often present someone’s argument as being from a “well-credentialed scientist” or from someone who “knows what they’re doing.” This bugs the — well, this is a PG blog so I’ll just say it bugs me to no end. ALL BECAUSE SOMEONE HAS A PH.D. DOES NOT MEAN THEY KNOW WHAT THEY’RE DOING. ALL BECAUSE SOMEONE HAS DONE RESEARCH AND/OR PUBLISHED DATA DOES NOT MEAN THEIR CONCLUSIONS ARE CORRECT OR THAT THEY GATHERED THEIR DATA CORRECTLY.

Okay, sorry for going all CAPS on you, but that really cannot be said enough. And Alex seems to simply, plainly, and obviously not understand that. It is clear if you listen to practically any episode of his podcast, especially during any of the “psychic dogs” episodes or “global consciousness” ones. It was also used several times in #105, including one where he explicitly stated that a person was well-credentialed and therefore knows what they’re doing.

Now, very briefly, a single argument from someone does not a scientific consensus make. I think that’s an obvious point, and Steve made it several times during the interview: there is no consensus on the issue, and individual arguments from authority are just that — arguments from authority. You need to look at their data and methods before deciding for yourself whether you objectively agree with their conclusions.

Edited to Add: I have since written a lengthy post on the argument from authority versus scientific consensus that I highly recommend people read.

Point 3: Going to Amazon, Searching for Books, to Find Interview Guests

Okay, I’ll admit this has little to do with the scientific process on its face, but it illustrates two points. First, that Alex doesn’t seem to understand the purpose/point of scientific literature, and second, that fast-tracking the literature and doing science by popular press is one of the worst ways to do science, and a way that strikes many “real” scientists as very disingenuous. I’ll explain …

First, I will again reference my post, “The Importance of Peer-Review in Science.” Fairly self-explanatory on the title, and I will now assume that you’re familiar with its arguments. In fact, I just re-read it (and I have since had my own issues fighting with a reviewer on a paper before the journal editor finally just said “enough” and took my side).

To set the stage, Alex claims in the episode:

“Again, my methodology, just so you don’t think I’m stacking the deck, is really simple. I just go to Amazon and I search for anesthesia books and I just start emailing folks until one of them responds.”

As I explained, peer-reviewed papers are picked apart by people who study the same thing as you do and are familiar with other work in the area. A book is not. A book is read by the publishing company’s editor(s) – unless it’s self-published in which case it’s not even read by someone else – and then it’s printed. There is generally absolutely zero peer-review for books, and so Alex going to Amazon.com to find someone who’s “written” on the subject of near-death experiences will not get an accurate sampling. It will get a sampling of people who believe that near-death experiences show mind-brain duality because …

Published books on a fringe “science” topic are generally written by people who have been wholeheartedly rejected by the scientific community for their methods, their data-gathering techniques, and/or their conclusions not being supported by the data. But they continue to believe (yes, I use the word “believe” here for a reason) that their interpretations/methods/etc. are correct, and hence, instead of learning from the peer-review process and tightening their methods, trying to bring in other results, and looking at their data in light of everything else that’s been done, they publish a book that simply bypasses the last few steps of the scientific process.

Not to bring in politics, but from a strictly objective standpoint, this is what George W. Bush did with the US’s “missile defense” system. Test after test failed and showed it didn’t work. Rather than going back and trying to fix things and test again, he just decided to build the thing and stop testing.

Point 4: Confusing a Class of Outcomes with a Single Cause

This was more my interpretation of what Alex did in the interview and what Steve pointed out many times, and it is less generalizable to the scientific process, but it does apply nonetheless.

Say, in cooking, you serve up a pizza. The pizza is the “class of experiences” here, analogous to the class of things that make up the near-death experience (NDE). The toppings of your pizza are the individual experiences of the NDE. Pizzas will usually have cheese; NDEs will usually have a sense of well-being. Pizzas may more rarely have onions; NDEs may more rarely have a white-light tunnel associated with them. You get the idea.

Now, from the impression I got, Alex seemed to claim throughout the episode that there was only one way to make a pizza — have an NDE. Steve argued that there were many different ways to make a pizza, and that all those different techniques will in general lead to something that looks like a pizza.

Point 5: Steve’s a Neurologist, Alex Is Not

I need to say before I explain this point that I am NOT trying to say that you need a Ph.D. in the topic to do real science. I do not in ANY WAY mean to imply that science is an elitist thing where only people “in the club” can participate.

That said, I really am amazed by Alex arguing against people who actually have studied the subject for decades. If you are a non-scientist, or even if you are a scientist but have not studied the topic at-hand (like, gee, me talking about near-death experiences while I’m an astrophysicist/geophysicist), then you need to make darn sure that you know what the heck you’re talking about. And you need to be humble enough to, when the actual person who’s studied this says you’ve made a mistake, take that very seriously and look again at what you thought was going on. The probability that you have made a mistake or misunderstood something as opposed to the expert in the field is fairly high.

Again, this is not my attempt to backtrack and myself commit an argument from authority fallacy. However, there is a difference between fallaciously making an argument from authority and listening to what an authority on the subject says, taking it into account, and re-examining your conclusions. It seriously amazes me how much Alex argued against Steve as if Alex were an expert in neurology. It caused him to simply miss many of the points and arguments Steve was making, as evidenced by Steve saying something and then needing to repeat his argument 20 minutes later because Alex had ignored it, buoyed as he was by his interviews with previous pro-duality guests.

Final Thoughts

As I’ve stated, the purpose of this post is not to discuss whether NDEs show a mind-brain duality or whether they have a purely materialistic explanation. The purpose is to point out that the methods Alex uses are fallacious, and while I know that people have pointed it out to him before, it seems that it has made very little impact upon the way he argues. I believe this is in part due to his need for confirmation bias – he definitely has made up his mind on whether or not psi-type phenomena exist. But I also am fairly sure that it’s because Alex lacks any kind of formal training in science. Because of that, he makes these kinds of mistakes – at least originally – without knowing any better. Now, since it’s been pointed out to him, I think it’s intellectually dishonest to keep making them, but again that’s beyond the purpose of this post.

So, to wrap this all up, non-scientists take heed! Avoid making these kinds of mistakes when you try to do or to understand science yourself. Make sure that you look at the data, not just the conclusions from a paper. Don’t make arguments from authority. Remember that popular books are not the same as peer-reviewed literature. And keep in mind there can be (a) multiple explanations and (b) multiple ways to reach an end point.

January 22, 2009

The Purpose of Peer-Review in Science


Introduction

Many people outside of mainstream science – such as conspiracy theorists, psi researchers, UFOlogists, and others – seem to have a beef with the process of peer-review. And, some mainstream scientists do, too. The purpose of this post is really to address why we have peer-review, why it’s important, why science really does need it in order to meet its goals, and, to be fair, to address some of its weaknesses.

Why Am I Addressing This

It never really registered with me that fringe researchers would knock the peer-review system. It kinda went in one ear and out the other. Then, I was listening to the December 22, 2008, episode of the podcast “Skeptiko” with Alex Tsakiris, where he spends several minutes complaining about how mainstream scientists “do” science. One of his big complaints and something that he called “stupid” (that’s a quote) was the embargo on releasing early results. He thinks that results should be released as they come in.

I made the following observations on an online forum:

Alex really seems to have no grasp of how science is actually done. At about 20 minutes into his last podcast, he states, “I want to break the traditional science rule about not talking about results until they’re published because, well, first of all, I think it’s a stupid rule [and he laughs] …” Results usually aren’t announced early for several reasons, not the least of which is that it hasn’t passed any peer review yet.

For example, I could do some ground-breaking research for a year and get this great result and then talk about it, or I could pass it by peers first only to have them discover that I’ve not accounted for some small factor that will dramatically reduce the significance.

Another reason is that preliminary results are just that – preliminary. One of my research projects at the moment is to generate a complete global database of Mars craters to ~1.5 km in diameter. I’ve done that now for about 30% of the planet. I could go ahead and release results and get more papers out of it, or I could wait until the whole thing is finished and I have all the statistics in place to back up my conclusions. This is especially necessary because Mars is not all the same, and craters from different regions have different properties, so me releasing early results that make broad conclusions could easily turn out to be fallacious once the entire project is done.

And as usual, Alex just seems to not get it. His results are going the way he thinks they should, so he’s releasing them early and claiming at least a cautious victory “so far.” This is also partly why I’m not a giant fan of his “open source science” — you really DO need training in science before you can do it “properly” — learning to take into account all these things that you may not otherwise, normally, think of.

That was really about releasing results early, and a little about peer-review.
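To make the Mars-crater point in that quote a bit more concrete, here is a toy sketch (mine, with entirely made-up numbers) of how a “global” statistic computed from only the regions mapped so far can differ badly from the true global value once every terrain type is included:

```python
# Toy illustration: partial survey coverage can skew a "global" crater density.
# Region names, area fractions, and densities are all invented for this sketch.
regions = [
    # (region, fraction of planet's surface area, craters >= 1.5 km per 10^6 km^2)
    ("old highlands",   0.50, 400.0),
    ("young volcanics", 0.30,  60.0),
    ("polar terrains",  0.20, 150.0),
]

def mean_density(rows):
    """Area-weighted crater density over whatever regions have been surveyed."""
    surveyed_area = sum(frac for _, frac, _ in rows)
    return sum(frac * dens for _, frac, dens in rows) / surveyed_area

partial = regions[:1]  # suppose only the old highlands have been mapped so far
print(f"Preliminary 'global' estimate: {mean_density(partial):.0f} craters / 10^6 km^2")
print(f"Actual global average:         {mean_density(regions):.0f} craters / 10^6 km^2")
```

The preliminary number is far off simply because the terrain surveyed so far is not representative of the planet as a whole, which is the point about drawing broad conclusions from 30% of the planet.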

Then, I was listening to the January 15, 2009, episode of Coast-to-Coast AM with Richard Hoagland. Among other things, he made the following statement: “You follow your curiosity, which is what science is supposed to be. It’s not supposed to be a club or a union or a pressure group that doesn’t want to get too far out of the box ’cause of what the other guys will think about you. … This concept of ‘peer review’ … is the thing which is killing science.”

It was with that line that I decided I should write this post.

Why We Have Peer-Review

Peer-review is important. The whole point of peer-review is so that your findings – your data and conclusions – are subjected to the review of your peers.

To use a reductio ad absurdum logic, if we didn’t have this process, then what anyone says is basically dogma with no chance of rebuttal. For example, if there weren’t a process of peer-review, I can say 2+2=1. You may say 2+2=5. And someone else may say 2+2=4. How would anyone know which is correct? The obvious answer in this contrived example is that everyone knows that 2+2=4. But how? Because you ask someone else, and they tell you? That’s peer-review.

In science, the purpose of peer-review is really just that. Your peers (other people who study what you study) look at your findings and make sure that in their opinion, you have followed the proper data gathering methods (so you took 2 apples and 2 oranges and laid them down as opposed to meditated and asked your spirit guide) and you reached the conclusions that are appropriate for the data you gathered (you then count all the pieces of fruit and come up with 4 instead of your spirit guide saying that 2+2 is really 7).

The purpose of peer-review is really nothing more than that, and it is nothing less than that.

Why Science Needs Peer-Review

It is often said that science is “self-correcting” over time. What this means is that if science has led to erroneous conclusions that did pass through the peers at the time, that ultimately the errors will be worked out because the process and data-collecting are repeated over and over again by others. A good example of this is gravity. Newton developed his Theory of Gravity. It was used for centuries. Repeated experiments showed it to be accurate.

But, some of them didn’t. Some showed slight deviations (like Mercury’s orbit). Then, another researcher came along (Einstein) and showed that Newton’s theory needed to be modified in order to account for ginormous masses and accelerations. Without the process of people reviewing predictions and measurements relevant to gravity, we would not know that Newton didn’t have the whole picture. And even today, a century later, people are still testing Einstein’s theories, making more and more measurements to test them, subjecting them to the process of peer-review.
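For the curious, the specific deviation in Mercury’s case was the slow extra precession of its perihelion. The standard general-relativistic correction (a textbook result, quoted here only as an example of a prediction that keeps getting re-tested) is, per orbit,

$$\Delta\phi = \frac{6\pi G M_{\odot}}{c^{2}\, a\, (1 - e^{2})},$$

with $a$ and $e$ the orbit’s semi-major axis and eccentricity. For Mercury this works out to roughly 43 arcseconds per century once summed over its ~415 orbits in that time, which is just the leftover amount that Newtonian perturbation calculations could not account for.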

Hoagland’s Claims

I am not saying that these are representative of the general fringe community’s problems with peer-review, simply that they are what I have observed to be the general complaint. It’s fairly well summed up by Richard Hoagland in this quote, which continues from the one I ended with above:

“It’s not the peer review so much as the invisible, anonymous, peer-review. Basically, before a paper can get published, … you know you have to go through so many hurdles, and there’s so many chances for guys who have it ‘in for you,’ who don’t like you, or who don’t like the idea you’re trying to propose in a scientific publication, can basically … stick you in the back … and you never know!

“One of the tenants of the US Constitution … is that you have the right to confront your accuser. In the peer-review system, which has now been set up for science, … the scientist – which [sic] is basically on trial for an idea – because that’s what it is, by any other name it’s really a trial, is-is attacked by invisible accusers called ‘referees,’ who get a chance to shaft the idea, kill the idea, nix the paper, tell the editor of whatever journal, ‘Oh, this guy’s a total wacko …’ and you never have the opportunity to confront your accuser and demand that he be specific as to what he or she has found wrong with your idea.”

My Response to Hoagland

I don’t know what journals he’s talking about, but for all the ones I know of, his claims are wrong. Just as with the US court system, you have appeals in journals. If the first reviewer does not think your paper should get in, then you can ask the editor to get another opinion. You’re never sunk just because one reviewer doesn’t like you and/or your ideas.

As to the anonymity, while I personally don’t like it, it’s necessary. Without a referee having the ability to remain anonymous, they cannot always offer a candid opinion. They may be afraid of reprisals if they find errors (after all, grants are also awarded by peer-review). They may also not want to hurt someone’s feelings (as teenagers today are finding, it’s much easier to break up via Facebook or a txt message than in person — it’s the same with anonymity in peer-review). They may have their own work on the subject they think you should cite but don’t want to appear narcissistic in recommending it. In short, there are many very good reasons to remain anonymous to the author(s).

However, they are not anonymous to the editor or the editorial staff. If there are problems with a reviewer consistently shooting down ideas that they have an otherwise vested interest in, then the editors will see that and they will remove the reviewer.

I also want to point out something my officemate is fond of saying: “Science is not a democracy, it’s a meritocracy.” Not every idea deserves equal footing. If I come up with a new idea that explains the universe as being created by a giant potato with its all-seeing eyes (Dinosaurs fans, anyone?) then my new idea that I just made up should not deserve equal footing with the ones that are backed up by centuries of separate, independent evidence. The latter has earned its place, the former has not.

That is something that most fringe researchers seem to fail to grasp: Until they have indisputable evidence for their own ideas that cannot be otherwise easily explained by the current paradigm, they should not necessarily be granted equal footing. Hoagland’s pareidolia of faces on Mars does not deserve an equal place next to descriptions of the martian atmosphere.

The Cons of Peer-Review

There are bad points to peer-review, though they really are only when there is an abuse of it. There is a faculty member back in my undergraduate institution who likes to tell the story of a young astronomer who submitted a paper about the value of the Hubble Constant (a measure of how rapidly the universe is expanding). The paper was sent to a reviewer who had his own ideas, and the young astronomer’s were not the same as his. So, he sat on the paper. He wrote a rebuttal to it. And he had the rebuttal published before he got to her paper.

That is an abuse of the system. I think that every scientist would admit that, and we strive to not be “that person.” After all, “that person” is now fairly blacklisted from polite astronomy society, and, as I’ve just done, people talk behind his back about him and how crummy he was.

In the vast sea of peer-review, however, there are just a few drops of “those people.” Most reviewer comments are helpful. They usually think of things you didn’t, and they only serve to make your results stronger.

Final Thoughts

The process of peer-review in science is an old one and one that is important to the essence of what science is and what it is supposed to do. If someone continuously complains about it, then the first thing you should do is to ask yourself what the motivation may be behind their ideas. Is it because they happened to get burned by one reviewer? Or is it perhaps because their ideas really don’t pass any scientific muster, they don’t fit with every other observation, and they require an extraordinary new premise to be true without sufficient evidence to back it up?
