Exposing PseudoAstronomy

December 29, 2012

2012 Psychic Predictions Roundup: Laypeople and Professionals Both Continue to Fail


Download the Predictions Roundup Document (PDF)

Introduction

Continuing a tradition that I started in 2010 and continued in 2011, I am posting a “psychic roundup” to celebrate the end of one Julian calendar year and bring in the next. In previous years, I have focused on Coast to Coast AM audience and professional predictions, and my conclusion has been, in one word: bad. They average around 6% correct.

This year, I have branched out to other sources for three primary reasons. First, Coast has changed their format such that the audience predictions are more annoying and outlandish and it’s no longer one per person. Second, Coast is no longer doing a night or two of professional predictions where they bring in several guests per night to discuss the year ahead. It’s just a few people scattered over January. Third, last year, I was criticized for relying on Coast with people on some forums complaining that it wasn’t a good sample because no “reputable” person would go on the show anymore. I was also criticized for lumping different “kinds” of methods together, like astrologers with mediums.

So, I sniffed out seventeen other people, not on Coast, who claim to make foresight-ful predictions. I recorded their predictions and scored them; in total, I scored 549 predictions made by various people this year. If you want to just get right to ‘em, see the link above or below. If you want more of a summary and a “how,” keep reading.

Download the Predictions Roundup Document (PDF)

People

Beyond the laypeople in the Coast audience, this year, the pros featured: Joseph Jacobs, Glynis McCants, Mark Lerner, Maureen Hancock, Paul Gercio, and John Hogue. The other 17 pros I looked at were: Concetta Bertoldi, Da Juana Byrd, Linda & Terri Jamison, Joseph Tittel, LaMont Hamilton, Carmen Harra, Judy Hevenly, Roxanne Hulderman, Blair Robertson, Pattie Canova, Cal Orey, Sasha Graham, Elaine Clayton, Denise Guzzardo, and Terry Nazon.

Many of these people are highly respected in their fields and charge a lot of money for readings (if they do readings). Let’s see how they did …

Scoring

I continued my tradition from last year of being somewhat strict in either calling something a miss or saying it was too vague, too obvious, or not a prediction. In one case, I had to call the “psychic” ignorant based on my reading of their prediction (that Antarctica would be found to have land under it, something we have already known for about a century).

With that in mind, I was also what some may consider generous, giving credit for some high-probability hits (like predicting Newt Gingrich would win the South Carolina primary).

All numerical scores are the number of hits divided by the number of hits plus the number of misses. That means that predictions that were too vague/etc. were NOT counted against them, nor for them. The uncertainty is the square root of the number of hits, divided by the number of hits plus misses.
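As a quick sketch of the arithmetic, here is that scoring scheme in a few lines of Python. The counts below are made up for illustration only; they are not anyone’s actual tallies:

```python
import math

def score(hits, misses):
    """Return (rate, uncertainty) for a set of scorable predictions.

    Too-vague predictions are excluded entirely, so they appear in
    neither the numerator nor the denominator.
    """
    total = hits + misses
    rate = hits / total
    # Poisson-style uncertainty: sqrt(hits) / (hits + misses)
    uncertainty = math.sqrt(hits) / total
    return rate, uncertainty

# Hypothetical example: 10 hits out of 150 scorable predictions
rate, unc = score(10, 140)
print(f"{rate:.1%} \u00b1 {unc:.1%}")  # prints "6.7% ± 2.1%"
```

Note that a prognosticator with only a handful of scorable predictions gets a correspondingly large uncertainty, which is why I caution below about believing small numbers.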

How They Did

I separated the folks into three groups: Coast audience, Coast professionals, and other professionals. Here’s how they did:

  • C2C Audience: 6.6±2.1%
  • C2C Pros: 15.6±7.0%
  • Other Pros: 7.5±1.7%

How They Did, Removing U.S. Presidential Election Stuff

The USA had a presidential election this year. About 3.3% of the predictions had specifically to do with who would run and be elected. These were pretty high-probability predictions, because the actual results followed what analysts had been predicting months in advance.

So, to try to un-bias the predictions relative to previous years, I removed ALL predictions having to do with either who would be the nominee on the Republican side or who would win the presidency. The results, compared with previous years, are:

  • C2C Audience
    • 2012: 6.7±2.2% (4.7% too vague to score)
    • 2011: 5.8±2.3% (8.8% too vague to score)
    • 2010: 5.7±2.3%
  • C2C Pros
    • 2012: 13.8±6.9% (17.1% too vague to score)
    • 2011: 2.6±2.6% (39.0% too vague to score)
    • 2010: 11.5±4.3%
  • Other Professionals
    • 2012: 5.5±1.5% (27.1% too vague to score)

Several Conclusions from the Data

Note that these are discussed in more detail in the massive PDF file that lists all the predictions. For the shorter version …

First, I repeat this every year – and I predict that I’ll repeat it, in effect, next year – these “professionals” are NOT capable of telling the future any better than you or I, and some of them are in fact far worse.

Second, another thing I repeat every year, and that has held true this year, is that the pros are much vaguer than laypeople. On average, they’re a factor of around 3-5x vaguer, in the sense that, percentage-wise, 3-5x more of their predictions are too vague to actually score. This makes it very easy for them to retrodict, after an event occurs, and claim accuracy. But that “accuracy” is useless, because the “prediction” was so vague that it was not actionable when it was made.

Third, if the small numbers can be believed, the pros are better at setting aside their personal political aspirations: of the 12 predictions dropped because they were about the presidency, 1 hit and 2 misses came from the laypeople, while 7 hits and 3 misses came from pros. So the pros got more right than the laypeople. Someone could point to that and say it proves they’re more psychic/intuitive/whatever; an objective person would point out that they were simply more likely to state what the polls and analysts were already saying at the time.

Fourth, again if small numbers can be believed, when separating the pros into psychic-mediums, psychics, intuitives, and astrologers, the prediction rates were identical — except for the astrologers, who got 0. The only difference was that the psychics were much less vague, averaging around 19% unscorable versus about 35% unscorable for the others. I’ll have to watch that and see if it pans out in future years.

Scoring, Revisited

Before I wrap this up, I want to revisit the scoring and point out a major difference between the prognosticator and what I would consider an objective person looking to see if a “psychic” prediction is accurate or if it’s so vague that it can be retrodicted after the event to claim accuracy.

My example is Linda and Terri Jamison, the “Psychic Twins” who claim to be “psychic mediums.” They stated they see “one or two major schools being victimized by a young terrorist in the U.S.”

I consider that a miss. A terrorist is someone who commits violence to create fear and panic, usually in pursuit of political aims. By all accounts — except for the very conspiracy-minded, who unfortunately have been on C2C talking about this — Adam Lanza, the Sandy Hook Elementary School shooter, was anti-social and disturbed: NOT a terrorist, not acting for political gain, with no cause in mind and no demands on behalf of a group. To me, this is NOT a correct prediction by the twins. Sandy Hook Elementary is – no offense – also not exactly what I would consider a “major school” (someone from Connecticut please correct me if I’m wrong).

However, I fully expect the twins to go out and claim that they predicted the Sandy Hook shooting based on their above statement, just as they have claimed for over a decade that they predicted the Sept. 11, 2001, terrorist attacks via the following exchange:

– Twin A: “We’re seeing a lot of natural disasters in terms of earthquakes and hurricanes, uh, blizzards and earthquakes coming up, especially in the next 10-12 years. A lot of activity like that because of global warming. We are seeing, uh, various terrorist attacks on Federal government, uh, excuse me — Federal buildings, um –”
– Twin 1: “– yeah, particularly, uh, South Carolina or Georgia.”
– Art Bell: “Really.”
– Twin 1: “Uh, by July 2002, and also uh, the New York Trade Center, the World Trade Center in 2002.”
– Art Bell: “Really.”
– Twin 1: “Uh, with something with a terrorist attack and, um, yeah, so that’s pretty much it.”

That is their claim for predicting the Sept. 11, 2001, terrorist attacks. I consider it a miss. But that’s a future blog post.

Final Thoughts

That about wraps it up for this year. I’m not going to repeat my small tirade from last year against the amount of money people waste on these professionals. I’ll just ask that you look at the data: They don’t do any better than you.

I’ll also ask that if you found this at all useful or interesting, please help spread the word through Twitter, Facebook, e-mail, message boards, your favorite podcast (unless it’s mine, in which case I already know), etc. A lot of work went into it, and as far as I know, this is one of the most comprehensive looks at predictions for 2012 (and thanks again to Matt T. for help on scoring several items).

Also, if I got anything wrong, please let me know by posting in the comments or sending me an e-mail.
