Exposing PseudoAstronomy

May 26, 2013

Properly Designing an Experiment to Measure Richard Hoagland’s Torsion Field, If It Were Real


Introduction

Warning: This is a long post, and it’s a rough draft for a future podcast episode. But it’s something I’ve wanted to write about for a long time.

Richard C. Hoagland has claimed now for at least a decade that there exists a “hyperdimensional torsion physics” which is based partly on spinning stuff. In his mind, the greater black governmental forces know about this and use it and keep it secret from us. It’s the key to “free energy” and anti-gravity and many other things.

Some of his strongest evidence is based on the frequency of a tuning fork inside a 40+ year-old watch. The purpose of this post is to assume Richard is correct, examine how an experiment using such a watch would need to be designed to provide evidence for his claim, and then to examine the evidence from it that Richard has provided.

Predictions

Richard has often stated, “Science is nothing if not predictions.” He’s also stated, “Science is nothing if not numbers” or sometimes “… data.” He is fairly correct in this statement, or at least the first and the last: For any hypothesis to be useful, it must be testable. It must make a prediction and that prediction must be tested.

Over the years, he has made innumerable claims about what his hyperdimensional or torsion physics “does” and predicts, though most of his predictions have come after the observation, which invalidates them as predictions, or at least renders them useless.

In particular, for this experiment we’re going to design, Hoagland has claimed that when a mass (such as a ball or planet) spins, it creates a “torsion field” that changes the inertia of other objects; he generally equates inertia with mass. Inertia isn’t actually mass; it’s the resistance of any object to a change in its motion. For our purposes here, we’ll even give him the benefit of the doubt, as either one is hypothetically testable with his tuning fork–based watch.

So, his specific claim, as I have seen it, is that the mass of an object will change based on its orientation relative to a massive spinning object. In other words, if you are oriented along the axis of spin of, say, Earth, then your mass will change one way (increase or decrease), and if you are oriented perpendicular to that axis of spin, your mass will change the other way.

Let’s simplify things even further from this more specific claim that complicates things: An object will change its mass in some direction in some orientation relative to a spinning object. This is part of the prediction we need to test.

According to Richard, the other part of this prediction is that to actually see this change, big spinning objects have to align in order to increase or decrease the mass from what we normally see. So, for example, if your baseball is on Earth, it has its mass based on it being on Earth as Earth is spinning the way it does. But, if, say, Venus aligns with the sun and transits (as it did back in June 2012), then the mass will change from what it normally is. Or, similarly, during a solar eclipse. This is the other part of the prediction we need to test.

Hoagland also has other claims, like you have to be at sacred or “high energy” sites or somewhere “near” ±N·19.5° on Earth (where N is an integer, and “near” means you can be ±8° or so from that multiple … so much for a specific prediction). For example, this apparently justifies his begging for people to pay for him and his significant other to go to Egypt last year during that Venus transit. Or taking his equipment on December 21, 2012 (when there wasn’t anything special alignment-wise…) to Chichen Itza, or going at some random time to Stonehenge. Yes, this is beginning to sound even more like magic, but for the purposes of our experimental design, let’s leave this part alone, at least for now.

Designing an Experiment: Equipment

“Expat” goes into much more detail on the specifics of Hoagland’s equipment, here.

To put it briefly, Richard uses a >40-year-old Accutron watch which has a small tuning fork in it that provides the basic unit of time for the watch. A tuning fork’s vibration rate (the frequency) depends on several things, including the length of the prongs, the material used, and its moment of inertia. So, if its mass or its moment of inertia changes, then the tuning fork will change frequency, meaning the watch will run either fast or slow.
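As a rough sketch of why this matters: for an idealized oscillator, frequency scales as the inverse square root of the effective mass, so a small fractional mass change produces a small, calculable frequency shift. A minimal illustration (the numbers are made up; a real Accutron fork is more complicated than a point-mass oscillator):

```python
import math

def shifted_frequency(f0_hz, dm_over_m):
    """For an idealized oscillator with f = (1/2pi)*sqrt(k/m), scaling the
    effective mass by (1 + dm/m) rescales the frequency by 1/sqrt(1 + dm/m)."""
    return f0_hz / math.sqrt(1.0 + dm_over_m)

# If a 360 Hz fork's effective mass somehow increased by 0.1%:
print(round(shifted_frequency(360.0, 0.001) - 360.0, 3))  # -0.18 (Hz): runs slow
```

So even under Hoagland’s own premise, the effect to hunt for is a fractional frequency shift of roughly half the fractional mass change.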

The second piece of equipment is a laptop computer, with diagnostic software that can read the frequency of the watch, and a connection to the watch.

So, we have the basic setup with a basic premise: During an astronomical alignment event, Hoagland’s Accutron watch should deviate from its expected frequency.

Designing an Experiment: Baseline

After we have designed an experiment and obtained equipment, usually the bulk of time is spent testing and calibrating that equipment. That’s what would need to be done in our hypothetical experiment here.

What this means is that we need to look up when there are no alignments that should affect our results, and then hook the watch up to the computer and measure the frequency. For a long time. Much longer than you expect to use the watch during the actual experiment.

You need to do this to understand how the equipment acts under normal circumstances. Without that, you can’t know if it acts differently – which is what your prediction is – during your time when you think it should. For example, let’s say that I only turn on a special fancy light over my special table when I have important people over for dinner. I notice that it flickers every time. I conclude that the light only flickers when there are important people there. Unfortunately, without the baseline measurement (turning on the light when there AREN’T important people there and seeing if it flickers), my conclusion is invalid.

So, in our hypothetical experiment, we test the watch. If it deviates at all from the manufacturer’s specifications during our baseline measurements (say, a 24-hour test), then we need to get a new one. Or we need to, say, make sure that the cables connecting the watch to the computer are connected properly and aren’t prone to surges or something else that could throw off the measurement. Make sure the software is working properly. Maybe try using a different computer.

In other words, we need to make sure that all of our equipment behaves as expected during our baseline measurements when nothing that our hypothesis predicts should affect it is going on.

Lots of statistical analyses would then be run to characterize the baseline behavior to compare with the later experiment and determine if it is statistically different.
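A sketch of the most basic form of that baseline characterization, using only Python’s standard library (the readings here are toy numbers, not real data):

```python
import statistics

def characterize_baseline(samples_hz):
    """Return (mean, standard deviation) of baseline frequency readings.
    The mean +/- 1 sigma range should contain roughly 68% of the readings."""
    return statistics.fmean(samples_hz), statistics.stdev(samples_hz)

# Toy baseline: readings hovering around 360 Hz
baseline = [359.8, 360.1, 360.0, 359.9, 360.2, 360.0, 359.95, 360.05]
mean, sigma = characterize_baseline(baseline)
print(f"baseline: {mean:.2f} ± {sigma:.2f} Hz")  # baseline: 360.00 ± 0.12 Hz
```

The point is simply that “normal” has to be quantified as a number with an uncertainty before any “anomaly” can mean anything.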

Designing an Experiment: Running It

After we have working equipment, verified equipment, and a well documented and analyzed baseline, we then perform our actual measurements. Say, turn on our experiment during a solar eclipse. Or, if you want to follow the claim that we need to do this at some “high energy site,” then you’d need to take your equipment there and also get a baseline just to make sure that you haven’t broken your equipment in transit or messed up the setup.

Then, you gather your data. You run the experiment in the exact same way as you ran it before when doing your baseline.

Data Analysis

In our basic experiment, with our basic premise, the data analysis should be fairly easy.

Remember that the prediction is that, during the alignment event, the inertia of the tuning fork changes. Maybe it’s just me, but based on this premise, here’s what I would expect to see during the transit of Venus across the sun (if the hypothesis were true): The computer would record data identical to the baseline while Venus is away from the sun. When Venus makes contact with the sun’s disk, you would start to see a deviation that would increase until Venus’ disk is fully within the sun’s. Then, it would hold at a steady value, different from the baseline, for the duration of the transit. Or perhaps it would increase slowly until Venus is deepest within the sun’s disk, then decrease slightly until Venus’ limb makes contact with the sun’s. Then you’d get a rapid return to baseline as Venus’ disk exits the sun’s, and you’d have a steady baseline thereafter.

If the change is very slight, this is where the statistics come in: You need to determine whether the variation you see is different enough from baseline to be considered a real effect. Let’s say, for example, during baseline measurements the average frequency is 360 Hz but that it deviates between 357 and 363 fairly often. So your range is 360±3 Hz (we’re simplifying things here). You do this for a very long time, getting, say, 24 hrs of data and you take a reading every 0.1 seconds, so you have 864,000 data points — a fairly large number from which to get a robust statistical average.
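Sanity-checking that sample count:

```python
# 24 hours of readings at one reading every 0.1 s (10 readings per second)
hours = 24
readings_per_second = 10
n_samples = hours * 3600 * readings_per_second
print(n_samples)  # 864000
```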

Now let’s say that from your location, the Venus transit lasted only 1 minute (they last many hours, but I’m using this as an example; bear with me). You have 600 data points. You get results that vary around 360 Hz, but it may trend to 365, or have a spike down to 300, and then flatten around 358. Do you have enough data points (only 600) to get a meaningful average? To get a meaningful average that you can say is statistically different enough from 360±3 Hz that this is a meaningful result?

In physics, we usually use a 5-sigma significance, meaning that, if 360±3 Hz represents our average ± 1 standard deviation (1 standard deviation means that about 68% of the data points will be in that range), then 5-sigma is 360±15 Hz. 5-sigma means that about 99.99994% of the data will be in that range. This means that, to be a significant difference, we would have to measure an average during the Venus transit of, say, 400±10 Hz (where 1-sigma = 2 Hz here, so 5-sigma = 10 Hz).

Instead, in the scenario I described two paragraphs ago, you’d probably get an average around 362 with a 5-sigma of ±50 Hz. This is NOT statistically significant. That means the null hypothesis – that there is no hyperdimensional-physics-driven torsion field – cannot be rejected.
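A minimal sketch of that comparison. This follows the simplified framing above, where “sigma” is taken as the standard error of the event-window mean (scatter divided by √N); a real analysis would also propagate the baseline’s own uncertainty:

```python
import statistics

def deviates_5_sigma(event_samples_hz, baseline_mean_hz, n_sigma=5.0):
    """True if the event-window mean differs from the baseline mean by more
    than n_sigma standard errors of the event mean (stdev / sqrt(N)).
    Simplified: ignores the uncertainty on the baseline mean itself."""
    mean = statistics.fmean(event_samples_hz)
    sem = statistics.stdev(event_samples_hz) / len(event_samples_hz) ** 0.5
    return abs(mean - baseline_mean_hz) > n_sigma * sem

# Noisy readings centered right on the 360 Hz baseline: no detection
print(deviates_5_sigma([360.0, 362.0, 358.0, 361.0, 359.0], 360.0))  # False
```

Note how N enters: with only a minute’s worth of data, the standard error is large, which is exactly why a short event window makes a marginal wiggle unconvincing.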

How could you get better statistics? You’d need different equipment. A tuning fork that is more consistently 360 Hz (so better manufacturing = more expensive). A longer event. Maybe a faster reader so instead of reading the tuning fork’s frequency every 0.1 seconds, you can read it every 0.01 seconds. Those are the only ways I can think of.

Repeat!

Despite what one may think or want, regardless of how extraordinary one’s results are, you have to repeat them. Over and over again. Preferably, other independent groups with independent equipment do the repetition. One experiment by one person does not a radical change in physics make.

What Does Richard Hoagland’s Data Look Like?

I’ve spent an excruciating >1700 words above explaining how you’d need to design and conduct an experiment with Richard’s apparatus and the basic form of his hypothesis. And why you have to do some of those more boring steps (like baseline measurements and statistical analysis).

To date, Richard claims to have conducted about ten trials. One was at Coral Castle in Florida, I think during the 2004 Venus transit; another was outside Albuquerque, New Mexico, during the 2012 Venus transit. Another in Hawai’i during a solar eclipse, another at Stonehenge during something, another in Mexico on December 21, 2012, etc., etc.

For all of these, he has neither stated that he has performed baseline measurements, nor has he presented any such baseline data. So, right off the bat, his results – whatever they are – are meaningless because we don’t know how his equipment behaves under normal circumstances … I don’t know if the light above my special table flickers at all times or just when those important people are over.

He also has not shown all his data, despite promises to do so.

Here’s one plot that he says was taken at Coral Castle during the Venus transit back in 2004, and it’s typical of the kinds of graphs he shows, though this one has a bit more wiggling going on:

My reading of this figure is that his watch appears to have a baseline frequency of around 360 Hz, as it should. The average, however, is stated to be 361.611 Hz, though we don’t know over what interval that average was taken. The instability is 12.3 minutes per day, meaning it’s not a great watch.

On the actual graph, we see an apparent steady rate at around that 360 Hz, but we see spikes in the left half that deviate up to around ±0.3 Hz, and then we see a series of deviations during the time Venus is leaving the disk of the sun. But we see that the effect continues AFTER Venus is no longer in front of the sun. We see that it continues even more so than during the time Venus’ disk was leaving the sun’s, and more than when Venus was in front of the sun. We also see that the rough steady rate when Venus is in front of the sun is at the same frequency as the apparent steady rate when Venus is off the sun’s disk.

From the scroll bar at the bottom, we can also see he’s not showing us all the data he collected, that he DID run it after Venus exited the sun’s disk, but we’re only seeing a 1.4-hr window.

Interestingly, we also have this:

Same location, same Accutron, some of the same time, same number of samples, same average rate, same last reading.

But DIFFERENT traces that are supposed to be happening at the same time! Maybe he mislabeled something. I’d prefer not to say that he faked his data. At the very least, this calls into question A LOT of his work in this.

What Conclusions Can Be Drawn from Richard’s Public Data?

None.

As I stated above, the lack of any baseline measurements automatically means his data are useless because we don’t know how the watch acts under “normal” circumstances.

That aside, looking at his data that he has released in picture form (as in, we don’t have something like a time-series text file we can graph and run statistics on), it does not behave as one would predict from Richard’s hypothesis.

Other plots he presents from other events show even more steady state readings and then spikes up to 465 Hz at random times during or near when his special times are supposed to be. None of those are what one would predict from his hypothesis.

What Conclusions does Richard Draw from His Data?

“stunning ‘physics anomalies'”

“staggering technological implications of these simple torsion measurements — for REAL ‘free energy’ … for REAL ‘anti-gravity’ … for REAL ‘civilian inheritance of the riches of an entire solar system …'”

“These Enterprise Accutron results, painstakingly recorded in 2004, now overwhelmingly confirm– We DO live in a Hyperdimensional Solar System … with ALL those attendant implications.”

Et cetera.

Final Thoughts

First, as with all scientific endeavors, please let me know if I’ve left anything out or if I’ve made a mistake.

With that said, I’ll repeat that this is something I’ve been wanting to write about for a long time, and I finally had the three hours to do it (with some breaks). The craziness of claiming significant results from what – by all honest appearances – looks like a broken watch is the height of gall, ignorance, or some other words that I won’t say.

With Richard, I know he knows better, because it has been pointed out to him many times what he needs to do to make his experiment valid.

But this also gets to a broader issue of a so-called “amateur scientist” who may wish to conduct an experiment to try to “prove” their non-mainstream idea: They have to do this extra stuff. Doing your experiment and getting weird results does not prove anything. This is also why doing science is hard and why maybe <5% of it is the glamorous press release and cool results. So much of it is testing, data gathering, and data reduction and then repeating over and over again.

Richard (and others) seem to think they can do a quick experiment and then that magically overturns centuries of "established" science. It doesn't.


December 29, 2012

2012 Psychic Predictions Roundup: Laypeople and Professionals Both Continue to Fail


Download the Predictions Roundup Document (PDF)

Introduction

Continuing a tradition that I started in 2010 and continued in 2011, I am posting a “psychic roundup” to celebrate the end of one Julian calendar year and bring in the next. In previous years, I have focused on Coast to Coast AM audience and professional predictions, and my conclusion has been, in one word: Bad. Average around 6% correct.

This year, I have branched out to other sources for three primary reasons. First, Coast has changed their format such that the audience predictions are more annoying and outlandish and it’s no longer one per person. Second, Coast is no longer doing a night or two of professional predictions where they bring in several guests per night to discuss the year ahead. It’s just a few people scattered over January. Third, last year, I was criticized for relying on Coast with people on some forums complaining that it wasn’t a good sample because no “reputable” person would go on the show anymore. I was also criticized for lumping different “kinds” of methods together, like astrologers with mediums.

So, I sniffed out seventeen other people who claim to make foresight-ful predictions who were not on Coast. I recorded their predictions, and I’ve scored them. I scored 549 predictions made by various people this year. If you want to just get right to ’em, then see the link above or below. If you want more of a summary and a “how,” keep reading.

Download the Predictions Roundup Document (PDF)

People

Beyond the laypeople in the Coast audience, this year, the pros featured: Joseph Jacobs, Glynis McCants, Mark Lerner, Maureen Hancock, Paul Gercio, and John Hogue. The other 17 pros I looked at were: Concetta Bertoldi, Da Juana Byrd, Linda & Terri Jamison, Joseph Tittel, LaMont Hamilton, Carmen Harra, Judy Hevenly, Roxanne Hulderman, Blair Robertson, Pattie Canova, Cal Orey, Sasha Graham, Elaine Clayton, Denise Guzzardo, and Terry Nazon.

Many of these people are highly respected in their fields and charge a lot of money for readings (if they do readings). Let’s see how they did …

Scoring

I continued my tradition from last year with being somewhat strict in either calling something a miss or saying it was too vague or obvious or not a prediction. In one case, I had to call the “psychic” ignorant based on my reading of their prediction (that Antarctica would be found to have land under it?).

With that in mind, I was also what some may consider generous, granting hits on some high-probability predictions (like that Newt Gingrich would win the South Carolina primary).

All numerical scores are the number of hits divided by the number of hits plus the number of misses. That means predictions that were too vague/etc. were NOT counted against them, nor for them. The uncertainty is the square root of the number of hits, divided by the sum of the number of hits plus misses.
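That scoring rule, as a sketch, checked against the 2011 Coast audience tally reported elsewhere on this blog (6 hits, 99 misses):

```python
def score(hits, misses):
    """Hit rate = hits / (hits + misses);
    uncertainty = sqrt(hits) / (hits + misses).
    Predictions too vague or obvious to score are excluded from both counts."""
    scorable = hits + misses
    return hits / scorable, hits ** 0.5 / scorable

rate, err = score(6, 99)  # 2011 C2C audience tally
print(f"{100 * rate:.1f} ± {100 * err:.1f}%")  # 5.7 ± 2.3%
```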

How They Did

I separated the folks into three groups: Coast audience, Coast professionals, and other professionals. Here’s how they did:

  • C2C Audience: 6.6±2.1%
  • C2C Pros: 15.6±7.0%
  • Other Pros: 7.5±1.7%

How They Did, Removing U.S. Presidential Election Stuff

The USA had a presidential election this year. About 3.3% of the predictions specifically had to do with who would run and be elected. These were pretty high-probability predictions, as the actual results followed what analysts were predicting months in advance.

So, to try to un-bias the predictions relative to previous years, I removed ALL predictions having to do with either who would be the nominee on the Republican side or who would win the presidency. The results, compared with previous years, are:

  • C2C Audience
    • 2012: 6.7±2.2% (4.7% too vague to score)
    • 2011: 5.8±2.3% (8.8% too vague to score)
    • 2010: 5.7±2.3%
  • C2C Pros
    • 2012: 13.8±6.9% (17.1% too vague to score)
    • 2011: 2.6±2.6% (39.0% too vague to score)
    • 2010: 11.5±4.3%
  • Other Professionals
    • 2012: 5.5±1.5% (27.1% too vague to score)

Several Conclusions from the Data

Note that these are discussed in more detail in the massive PDF file that lists all the predictions. For the shorter version …

First, I repeat this every year – and I predict that I’ll repeat it, in effect, next year – these “professionals” are NOT capable of telling the future any better than you or I, and some of them are in fact far worse.

Second, another thing I repeat every year, and that has held true this year, is that the pros are much vaguer than laypeople. On average, they’re a factor of around 3-5x vaguer in the sense that, percentage-wise, 3-5x more of their predictions are too vague to actually score. This means that their predictions are very easy to retrodict, after the event occurs, to claim accuracy. But that “accuracy” is useless because the “prediction” was not actionable when it was made – it was too vague.

Third, if the small numbers can be believed, the pros are better at setting aside their personal aspirations for politics — of the 12 predictions dropped because they were about the presidency, 1 hit and 2 misses were from the laypeople, while 7 hits and 3 misses were from pros. That means they got more right than the laypeople. While someone could point to that and say it proves they’re more psychic/intuitive/whatever, an objective person would point out that they were simply more likely to state what the polls and analysts were saying at the time.

Fourth, again if small numbers can be believed, when separating the pros into psychic-mediums, psychics, intuitives, and astrologers, the prediction rates were identical — except for the astrologers, who got 0. The only difference was that the psychics were much less vague, averaging around 19% unscorable versus about 35% unscorable for the others. I’ll have to watch that and see if it pans out in future years.

Scoring, Revisited

Before I wrap this up, I want to revisit the scoring and point out a major difference between the prognosticator and what I would consider an objective person looking to see if a “psychic” prediction is accurate or if it’s so vague that it can be retrodicted after the event to claim accuracy.

My example is Linda and Terri Jamison, the “Psychic Twins” who claim to be “psychic mediums.” They stated they see “one or two major schools being victimized by a young terrorist in the U.S.”

I consider that a miss. A terrorist is someone who commits terrorism to create fear and panic, usually in pursuit of political aims. By all accounts — except for the very conspiracy-minded, who unfortunately have been on C2C talking about this — Adam Lanza, the Sandy Hook Elementary School shooter, was anti-social and disturbed. NOT a terrorist: not doing this for political gain, with no cause in mind and no greater demands for a group. To me, this is NOT a correct prediction for the twins. Sandy Hook Elementary is – no offense – also not exactly what I would consider a “major school” (someone from Connecticut please correct me if I’m wrong).

However, I fully expect the twins to go out and claim that they predicted the Sandy Hook shooting based on their above statement, just as they’ve been saying for over a decade that they predicted the Sept. 11, 2001 terrorist attacks via the following exchange:

– Twin A: “We’re seeing a lot of natural disasters in terms of earthquakes and hurricanes, uh, blizzards and earthquakes coming up, especially in the next 10-12 years. A lot of activity like that because of global warming. We are seeing, uh, various terrorist attacks on Federal government, uh, excuse me — Federal buildings, um –”
– Twin 1: “– yeah, particularly, uh, South Carolina or Georgia.”
– Art Bell: “Really.”
– Twin 1: “Uh, by July 2002, and also uh, the New York Trade Center, the World Trade Center in 2002.”
– Art Bell: “Really.”
– Twin 1: “Uh, with something with a terrorist attack and, um, yeah, so that’s pretty much it.”

That is their claim for predicting the Sept. 11, 2001, terrorist attacks. I consider it a miss. But that’s a future blog post.

Final Thoughts

That about wraps it up for this year. I’m not going to repeat my small tirade from last year against the amount of money people waste on these professionals. I’ll just ask that you look at the data: They don’t do any better than you.

I’ll also ask that if you found this at all useful or interesting, please help spread the word through Twitter, Facebook, e-mail, message boards, your favorite podcast (unless it’s mine, in which case I already know), etc. A lot of work went into it, and as far as I know, this is one of the most comprehensive looks at predictions for 2012 (and thanks again to Matt T. for help on scoring several items).

Also, if I got anything wrong, please let me know by posting in the comments or sending me an e-mail.

January 5, 2012

2011 Psychic Predictions Roundup: Audience and Professionals on Coast to Coast AM Majorly Fail … Again


Introduction

Last year, in what rapidly became a very well-read post, I wrote about the “psychic” predictions for 2010 by the audience and pros from the Coast to Coast AM late-night radio program. After reviewing nearly 200 predictions, my conclusion was that the audience did no better than the pros, and that both did miserably.

With a record number of Tweets and Facebook postings, how could I not do another analysis for 2011?

I’m a bit behind, but I’ve finally compiled the audience and professional predictions for 2011 that were made on C2C and I have scored them, as well.

So without further delay: The Predictions (PDF)! Please let me know if you find any mistakes in scoring, and I will correct them. If you enjoy this, please be sure to rate it (those stars at the top), leave feedback, and/or link to it from your portal, forum, social media, and/or wikis of choice! It’s the only way I know that it’s worth going through the many days’ of work to compile these.

Before We Get to Details … Scoring

I was a bit stricter this year in terms of what I counted as a “hit.” For example, Major Ed Dames stated, “Buy gold and silver if you can … because those commodities will be worth something.” I counted that as a miss as opposed to too vague. True, gold closed roughly $150 higher at the end of 2011 than it opened. If he had simply said “Gold will be up by the end of the year,” I would have given him a hit (if an obvious one). But he said both gold and silver, and silver went down by $2.50 over 2011. On the other hand, he simply said they “will be worth something.” I interpreted that to mean they will go up. Otherwise, taken at strict face value, this is like saying “Bread is something you can eat.” It’s just a statement of fact.

As with last year, I wrote down what predictions I could pull out of the professionals (more on that later). Many of them, however, were too vague or obvious – I considered – to be scorable. For example, Linda Schurman stated, “People are going to come out of their collective coma” because of the transit of Uranus in Pisces. I considered that too vague to be a hit or a miss. Similarly, Joseph Jacobs stated there would be rough times in Somalia. It does not take a claimed psychic to say there will be rough times in Somalia, so I did not score that.

Coast to Coast AM Audience

Every year, Art Bell would do the predictions show on December 30 and 31 for a “full” eight hours of predictions from the audience. He would have strict rules – one prediction per call, one call per year, nothing political rant-like, no soliciting, and Art numbered them. With Art having unofficially/officially retired (again) after the “Ghost to Ghost” 2010 show, Ian Punnett took over and, well, he wasn’t Art. He didn’t follow any of Art’s rules. This made the predictions a bit more annoying to figure out and write down, but I tried. Sometimes there were two per caller.

In the end, I counted 114 distinct predictions. 6 of them were hits, 99 misses, and 9 were non-scorable as too vague, obvious, or not for 2011. That’s a hit rate of 5.7% (6/(114-9)≈0.057). Very impressively, that’s the same rate as I gave the audience in 2010, so, huzzah for consistency!

Here are some of my favorites:

11. Subterranean tunnels will be found, huge caverns, a “huge city-like thing,” under America or the Russia-Asia continent. “This could lead to the big foot theories being solved.”

23. Within the Bilderburger / Illuminati, there will “be a wild sex slavery factory where blond-haired teenage girls are enslaved to make Illuminati babies they’re trying to create the perfect race. There will be sex slavery.” This will be revealed this year when someone is “caught red-handed with these girls.”

27. Synchronized walking will become very popular, such as in malls, with people walking in formation.

73. There will be a Christian worldwide movement that starts in the US around the time of the Super Bowl. They will force ABC/NBC/CBS/FOX to show Biblical stories.

Coast to Coast AM Professionals

Yes, as a skeptic we always say “alleged” psychic or whatever. I’ve done that enough in the intro and we’ll just go with their titles. Pages 14-25 of the predictions document list the different people that C2C had on for 2011 predictions.

I’ll state that, like the audience ones, these predictions were not as easy to record this year as they were for 2010. Instead of having the first few days of 2011 be devoted to several of these people, George had them scattered throughout the month of January and then did another set in July with three people. So, I recorded what I could.

The people involved were:

  • Jerome Corsi (Claim: General Conspiracist)
  • Joseph Jacobs (Claim: Psychic)
  • Major Ed Dames (Claim: Remote Viewer)
  • Linda Schurman (Claim: Astrology)
  • Starfire Tor (Claim: Psychic -> “Psi Data Downloads”)
  • Glynis McCants (Claim: Numerology)
  • John Hogue (Claim: Nostradamus Interpreter, Psychic)
  • Maureen Hancock (Claim: Psychic and Medium)
  • Angela Moore (Claim: Psychic)

All in all, they made a total of 64 predictions. I counted one hit, 38 misses, and fully 25 that were too vague or obvious to grant a hit or miss to. That’s a hit rate of 2.6% (Joseph Jacobs got the one hit by saying perhaps the obvious “I see maybe a temporary measure as far as lifting the debt ceiling”). That’s somewhat worse than 2010, when I gave them a combined (if generous) hit rate of 11.5%, for getting 6 correct out of 53.

Here are some of my favorites (there were many more from Starfire Tor, but you’ll have to read the document for more):

Joseph Jacobs: We’ll be “getting closer and closer to [UFO] disclosure.”

Major Ed Dames: We’re right at the cusp of a global flu pandemic that WILL happen in 2011.

Starfire Tor: Earthquakes continuing to accelerate due to the time shifts and time wars.

Starfire Tor: “You are going to see an advancement of the whale and people project … . It’s gonna be an agreeable movement around the world where cetaceans – whales and dolphins – who are self-aware are actually non-human people. So the status of them is going to change from ‘animal’ to ‘person,’ therefore people are going to have to stop killing them, and this is going to – every country every people in the world are going to have the opportunity to understand that there is more to intelligent life on the planet than humans.”

Maureen Hancock: “Decent relief” from high gas prices. “I see it coming down to at least a buck a gallon by November” in New England.

Differences Between Lay People and Pros

I brought this up last year, but it definitely bears repeating this year. The audience made 114 predictions and 9 (8%) of them were too vague or obvious to score. The pros made 64, and 25 (39%) of them were too vague or obvious to score.
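Those percentages, and the roughly 5x gap between them, fall straight out of the tallies above:

```python
def pct_vague(n_vague, n_total):
    """Percentage of predictions too vague or obvious to score."""
    return 100.0 * n_vague / n_total

audience = pct_vague(9, 114)   # C2C audience, 2011
pros = pct_vague(25, 64)       # C2C professionals, 2011
print(f"audience: {audience:.0f}%, pros: {pros:.0f}%, "
      f"ratio: {pros / audience:.1f}x")  # audience: 8%, pros: 39%, ratio: 4.9x
```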

That is a classic difference between a lay person and a “pro” in the business of telling people what they think the future will bring. Normal people will generally give you unqualified – if seemingly outlandish – statements. Such as, “The Saints will win the Super Bowl.” The pros will give you qualified vagaries, such as, “If the Saints do well and live up to their potential, I see them as possible winners of the Super Bowl since Mars in Virgo is favorable to them.” Okay, that might be a slight exaggeration, but let’s look back on some real examples:

Audience: We’ll see “a Clinton” for VP this year.

Professional: There will be new manufacturing ideas here in the US, opening doors for the unemployed.

Audience: A private research company without federal funding will start to clone people for organ harvesting.

Professional: In response to a question about the Carolinas being hit by a hurricane in the fall: “That is a possibility.”

See? This is also why they can stay in business. I’m fairly strict in my scoring, but someone who paid an alleged psychic $25 for a reading, and is remembering what the psychic said two weeks later, will very likely retrodict what was said into a “hit” rather than a miss.

Take John Hogue’s, “Get ready for mother nature to be on the warpath.” I said that’s too vague to score. Let’s say he said that a month before Hurricane Irene hit New York in 2011. Most would count that as a “hit,” and they would not put it in context of Irene being only a Category 3, only doing $10 billion in damage, and Hogue not having said it the year of Hurricane Katrina, when it would have been much more apt.

No, this is not a rant, and I apologize if it comes off as one. I’m trying to point out why these people are still in business when they are no better than, sometimes worse than, and frequently more vague than the average person making a prediction. And with that in mind, let’s see … Joseph Jacobs charges $90 for 30 minutes, $150 per hour for readings. Maureen Hancock has her own TV show. Ed Dames sells kits on remote viewing, and most of these people sell books and other things. Maybe I should start selling my scoring of their predictions.

Final Thoughts

To continue from the above before transitioning back to the “fun,” yes, there is a substantial “where’s the harm” issue whenever we give these alleged soothsayers the power to make decisions for us based on vague statements. I point that out because it’s important.

But I also want to get back to this because I think they’re funny. I posted on Facebook a few nights ago, “Is it wrong for me to take distinct delight when alleged ‘psychics’ who are well known get things incredibly wrong?” I enjoy shaking my head at all these people being shown to be the shams they are.

And I enjoy the, well, I’ll just say “out there” predictions that make it through. Obama being a reptilian? Whales and dolphins being considered “people”? (Don’t get me wrong, I don’t like whaling and dolphining, etc., but let’s not go crazy.) When you hear some of these, you just have to roll your eyes.

And hopefully when you hear some of these that don’t sound quite as crazy, you’ll pay attention to and notice some of the tricks of the trade, and not spend your hard-earned money on something you could come up with on your own.

 

P.S. I realize that WordPress has a habit of adding Google Ads to posts for those who are not ’pressers and, due to the content of this post, most of the ads are probably for psychic or astrologic readings. I’m looking into the possibility of migrating my blog to my own server so I won’t have to deal with all of that, but I’m afraid of losing Google rankings and all the linkbacks that I’ve established over the past ~3.5 years. If someone is knowledgeable in how to preserve all those with redirects, etc., please get in contact with me.

P.P.S. Looking forward to 2012, if anyone has found a psychic/numerologist/astrologer/medium/whatever who has put out specific predictions, I’d like to extend beyond C2C for my tallies. Let me know in the comments or by e-mail of these and I’ll look into them.

December 28, 2010

2010 Psychic Predictions Roundup: Audience and Professionals on Coast to Coast AM Majorly Fail


Introduction

Every year, the late-night number-one-rated four-hour radio show Coast to Coast AM spends December 30 and 31 taking “psychic predictions” from the audience, and January 1 with invited “psychics” for predictions for 2010. I had a lot of free time while taking pictures at the telescopes in early January so I listened diligently to all 12 hours and recorded every prediction.

Let’s see how they did, shall we?

Edited to Add: It’s come to my attention (Oct. 2011) that Cal Orey (see the “Professionals” section below) has this post listed on her homepage as me indicating that she was the highest hit-rate “psychic” on Coast to Coast for 2010 predictions. I’ll repeat here what I do below: She was highest because she got 1 right out of 3 that I considered specific enough to actually judge; the other 6 were too vague or obvious to confirm or refute. One correct prediction about an earthquake in California is not something that I, personally, would be bragging about. But I’m happy to have her link to my blog.

Audience

Art Bell ran the audience nights and he was very specific: One prediction per customer per year, and no predictions about assassinations, politically motivated events, or abstract religious ideas would be taken. This year, there were a total of 110 predictions that were recorded. I actually recorded all the ones that made it to air, so in the document I link to below, you will see some items crossed out. Those are ones that Art did not record. My own comments are included in [square brackets] and are things that were not said on the show.

Click here for the PDF with all the audience predictions.

I have now gone through and – with a little help on some items I didn’t know about – scored them. First off, there were 5 predictions that I considered too vague or not actually for 2010, so that gets us down to 105 predictions. Based on my information, 6 came true. That’s a hit rate of 5.7±2.3%. (Uncertainty is calculated by taking the square root of the number of hits and dividing by the total scorable predictions — this is standard Poisson statistics.)
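The hit-rate arithmetic above can be sketched in a few lines of Python. This is just an illustration of the counting-statistics rule described in the text (square root of the number of hits over the number of scorable predictions); the function name is mine, not the author’s:

```python
import math

def hit_rate(hits, total, unscorable=0):
    """Hit rate and its Poisson (sqrt-of-counts) uncertainty, as percentages."""
    scorable = total - unscorable            # predictions specific enough to judge
    rate = 100.0 * hits / scorable
    sigma = 100.0 * math.sqrt(hits) / scorable
    return rate, sigma

# 2010 audience: 110 predictions recorded, 5 too vague, 6 hits
rate, sigma = hit_rate(6, 110, 5)
print(f"{rate:.1f} ± {sigma:.1f}%")          # 5.7 ± 2.3%
```

Running this reproduces the 5.7±2.3% quoted above.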

Here are some of my favorites:

14. Obama goes live on NBC saying that aliens do exist and there will be an alien with him who speaks to the whole world.

16. A lot of people who are handicapped will get out of their wheelchairs and will walk again. (Qualifier: “If they truly believe.”)

26. Re-discovery, by September, of the entrance to the hollow Earth at the North Pole.

52. God is actually a being of light and he is moving back towards us at the speed of light. The result is that he’ll send a laser pulse in that direction and tell us what a bad job everyone’s doing.

81. A celebrity will be exposed as a cannibal.

And my all-time favorite … one of the only hits: 102. There will be no really big changes, it’ll be “pretty much the same-old-same-old.” There’ll be some crises, medical advances, etc., but that’s what happens every year.

Professionals

As a skeptic, I will admit that I derive great joy in seeing professional purveyors of woo resoundingly fail. And the “professionals” that C2C invited on did just that, none with a hit rate above 33% – and even that high score came only because just three of that person’s predictions were specific enough to judge.

Click here for the PDF with all the “professional” predictions.

In scoring these, I think I was fairly generous, as you may note if you look at the document linked above.

Edited to Add: The percentage correct that I list below are based on (# correct) / ((# predictions) – (# too vague)). I add this because I noticed some confusion on how I gave Orey 33% instead of 11% (1/(9-6) vs. 1/9).

To summarize, here are the scores for each person:

  • Christian von Lahr: 3 out of 15 with 1 too vague for 21%.
  • Paul Guercio: 0 out of 6 with 2 too vague for 0%.
  • Glynis McCants: 0 out of 9 with 8 too vague for 0%.
  • Tana Hoy: 1 out of 16 with 5 too vague for 9%.
  • Cal Orey: 1 out of 9 with 6 too vague for 33%.
  • Terry and Linda Jamison: 1 out of 17 with 5 too vague for 8%.
  • Mark Lerner: 0 out of 5 with 4 too vague for 0%.
  • Jeffrey Wands: 1 out of 16 with 1 too vague for 7%.

The combined generous hit rate was 11.5±4.3%. This is statistically identical to the audience’s hit rate. The one who got the most right was Christian von Lahr with 3, though due to small numbers because of incredible vagueness or obviousness, Cal Orey came out on top percentage-wise.
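Applying the (# correct) / ((# predictions) − (# too vague)) rule from the note above to the per-person scores reproduces both the individual percentages and the combined figure. A quick verification sketch (the data tuples are just the numbers from the list above):

```python
import math

# (name, correct, total predictions, too vague), from the scores listed above
scores = [
    ("Christian von Lahr",    3, 15, 1),
    ("Paul Guercio",          0,  6, 2),
    ("Glynis McCants",        0,  9, 8),
    ("Tana Hoy",              1, 16, 5),
    ("Cal Orey",              1,  9, 6),
    ("Terry & Linda Jamison", 1, 17, 5),
    ("Mark Lerner",           0,  5, 4),
    ("Jeffrey Wands",         1, 16, 1),
]

# Individual hit rates: correct / (total - too vague)
for name, correct, total, vague in scores:
    print(f"{name}: {100 * correct / (total - vague):.0f}%")

# Combined rate with Poisson uncertainty
hits = sum(s[1] for s in scores)              # 7 correct
scorable = sum(s[2] - s[3] for s in scores)   # 61 scorable
combined = 100 * hits / scorable
sigma = 100 * math.sqrt(hits) / scorable
print(f"Combined: {combined:.1f} ± {sigma:.1f}%")   # Combined: 11.5 ± 4.3%
```

The combined total works out to 7 hits from 61 scorable predictions, matching the 11.5±4.3% quoted above.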

A trend you will note if you look at the document linked above is that the pros were all, in general, fairly vague in their predictions (fully 1/3 of them were unusable). Or, they were incredibly obvious to the point that they couldn’t be used to score any “psychic-ness.” For example, Cal Orey “predicted” that Italy will have “another quake.” Well, considering that there are tens of thousands of earthquakes of magnitudes >4.0 every year across the planet, this is like saying, “During 2010, the sun will appear in the sky,” or “a politician will tell a lie or half-truth.” Duh.

Some of my favorites were:

von Lahr: Something really big with one of Obama’s daughters involving the letters “P,” “I,” “N,” and “K.” Note that the letters may have spiritual meaning instead or be turned, like the “P” into a “b,” “d,” “6,” or “9.” It could also look like a bed or a wheelbarrow [so, basically you can retrodict anything to this]. The letters are also in the word, “kidnap.”

Orey: If San Francisco gets another quake in 2010, Arnold won’t be very happy.

Lerner: There won’t be a catastrophe.

The one that ticked me off the most was, by far, Tana Hoy, who, if you were/are able to listen, almost seemed scared that we all knew he was just making things up. He started off the interview by calling the host, Ian Punnett, “Ryan,” and then stated obvious things that had already been announced.

The pair that I thought were most full of themselves were the “psychic twins,” Terry and Linda Jamison. They started the interview by claiming that everything they predicted for 2009 had come true, and when they were on later in 2010, they claimed that everything they had predicted in January would still come true. I couldn’t find a C2C interview they did for 2009, but I found one for 2000.

On November 2, 1999, they claimed AIDS would be cured by 2002, “breast cancer drug break-through by 2003,” “a cancer cure, especially for breast cancer by 2007,” 60% of cancer cured by 2008, a cloning of body parts “in the not too distant future … in diagnostic chambers,” and people with cerebral palsy, muscular dystrophy, MS, and spinal cord injuries walking “within the decade.” Yeah … didn’t quite happen. And by my tally, they only had one hit for 2010, and it was incredibly vague but I gave it to them. They had some monstrous fails, such as shiitake mushrooms as a prevention for breast cancer and hurricanes devastating Florida. They even failed on some actual statistically likely hits, like a major storm hitting the gulf.

Final Thoughts

As we go into 2011, many, many people will look to alleged psychics, astrologers, mediums, etc. for forecasts about the year ahead. When I first started my blog in late 2008, I averaged about 10-25 hits/day. Then I did a parody of my own psychic and astrologic predictions for 2009, and my hit rate spiked by a factor of 5.

And yet, when we actually write down what these people say and we look at the misses along with the hits, we find that these people are basically full of you-know-what. They aren’t any more “psychic” than the average person making wishful forecasts.

The main difference between these professionals and the lay person is their vagueness. The C2C audience members were willing to make generally very specific predictions such as “Lake Tahoe is actually a volcano,” versus the professionals who know that being specific is to their detriment, so they will usually try to be more vague, such as “no major tsunami for quite a while.”

Please let me know if you enjoyed this post – either by commenting and/or taking a moment to rank it with a star count just under the tags for the post. It took a lot of time to write these down and score them and I want to know if it’s worth doing for 2011.

Also, if I have made any mistakes in my scoring, please let me know and I will correct them ASAP.