Exposing PseudoAstronomy

October 29, 2014

The Deathbed Confession Phenomenon, and I’m Blogging at JREF’s Swift


As I continue to emerge from the seclusion of writing five grant proposals, a new development is that I am now included in the roster of bloggers on the James Randi Educational Foundation (JREF) Swift blog. I’m not entirely sure how often I’ll be able to do it, but with 400-1000-word posts and the weekly three-hour ATS radio program I already do on Saturday nights, hopefully some overlap can be arranged.

That said, my first post is about deathbed confessions, and why I find them unconvincing in terms of revealing anything outside the mainstream. I’m going to include the posts here in part because the Swift blog comments are closed. The posts here will be unedited from what I send Sharon Hill (who does the actual posting) and won’t have the images, so go there for the pretty pics.

Since this is my first Swift post, I wanted to give a brief introduction. I’m a self-termed “astro/geophysicist” with a Ph.D. in geophysics but a background more in astrophysics. Given that background, I tend to focus on pseudoscience and skepticism as applied to astronomy, geology, and physics. One regular activity of mine is that I’m a member of the studio panel of “ATS Live,” the premier three-hour live weekly show of the Above Top Secret website (one of the most popular conspiracy websites in the world); I’m the token skeptic.

On last weekend’s show (October 25, 2014), one of the topics we discussed was the deathbed confession of “Area 51 scientist” Boyd Bushman. Within a few weeks of his death this past August, Mr. Bushman was recorded in numerous clips making various claims about how he worked on antigravity, UFOs, and other classic pseudoscience topics related to what could be loosely termed “new physics.”

I think this is an excellent example of why I find the “deathbed confession” phenomenon completely unconvincing, especially as related to paranormal-type claims.

People who want to believe tend to cite two reasons that deathbed confessions should be considered good evidence for their claims. First is the classic argument from authority, especially in the case of Boyd Bushman, who is reasonably well known in the UFO community and “was a retired Senior Scientist for Lockheed Martin.” He was awarded patents and defense contracts. Sounds impressive.

To be brief, the argument from authority is meaningless in terms of the veracity of the actual information; claims and information need to stand on their own and be verified regardless of who is making them. My favorite example is that Isaac Newton, who (by most metrics) founded modern physics, believed in alchemy.

With this in mind, I don’t even need to start on the path of investigating Mr. Bushman’s claims of employment and background, which many people have called into question.

The second reason people tend to believe deathbed confessions is, “they have nothing to lose!” After all, the person making the deathbed confession is – barring something miraculous – dying. Being killed by the Men in Black at that point is no longer a threat because they’re about to die anyway.

While this certainly makes sense, there are plenty of other reasons why a deathbed confession would actually not be reliable. For one, at least for those who are older and close to death, senility can play a role. It is a normal part of aging, and for the record, Mr. Bushman was 78 when he died. I’m not claiming that senility played a role in this case, I’m merely raising it as a complicating factor of an older person’s testimony.

That aside, a deathbed confession can be a good time to solidify one’s reputation: use the phenomenon, and people’s belief in its veracity, to double down on the claim and increase belief in it.

The thinking could easily be, “People really believe that people are 100% honest on their deathbed, so I’m going to make sure I go out with a ‘bang’ and make my claims yet again. People who didn’t believe me before might this time because they’ll think I’m telling the truth ’cause I’m about to die.”

However, in addition to explaining why the common reasons to believe deathbed confession testimony are unconvincing, there’s a better reason why the testimony is not useful: They’re doing it wrong.

Let’s say I had a bunch of secrets of exotic physics and decided to do a deathbed confession. Here’s what I would say: “I’ve been working on antigravity and warp field physics for the last 50 years, in secret, with the US government.” Then, instead of showing photos of a spaceship or a blurry alien – if even that as opposed to just speaking to the camera – I would add: “And, here are the equations. Here is a diagram for how you build a device. Here is a working model. Here is exactly how you put everything together.”

In other words, it shouldn’t matter who I am, what my experience is, or what pretty (or ugly) picture I show. What I need to show is HOW to do it. Saying something doesn’t make it so. I need to give enough information for someone else to verify it and duplicate it. Otherwise, what’s the point? To show I’m smarter than everyone else and I’m just letting you know that before I die?

That’s why I find this whole deathbed confession thing unconvincing and, perhaps more importantly, unuseful: We have no more information than we had before. We have no way to verify any of the information claimed. No way to test or duplicate it. At *best*, we have another person claiming this stuff is real, and while they may be proven right with the passage of time, their “confession” contributed absolutely nothing to that advancement.

Until then, it’s no better than any other pseudoscientific claim.

July 21, 2014

Podcast Episode 116: The Electric Universe, Part 2, with Dr. Tom Bridgman


Sun models from the
Electric Universe. Do
The predictions work?

Practically on time comes part 2 of the two-part overview of the Electric Universe. This one is also a bit heavy with the math, so I recommend heading over to Tom’s site for more information and many, many more details.

So, um, with the deadline for a major grant program coming up in a few days, that’s it folks!

July 11, 2014

Podcast Episode 115: The Electric Universe, Part 1, with Dr. Tom Bridgman


Overview of the
Electric Universe! Been
A long time coming.

Happy TAM to all those who are here in Vegas attending TAM. As my own kick-off, since (for those who don’t know) today’s the first official day of stuff, we have Episode 115 of the podcast, the Electric Universe, Part 1. Part 2 will be out later this month, where we’ll get more into the electric sun ideas and why they fail. In other words, while this episode is an overview of the concept and a lot of the history, the next episode is going to get into more specific examples of predictions and how the data fail to support them.

And, that’s about it. I’m writing this a day ahead of time, sitting in the Las Vegas airport for an hour and a half waiting for the airport shuttle so I don’t have to pay for a taxi. And I got 3 hours of sleep last night. So …

Oh, and the interview, for those who don’t read the title of the blog post, is with Dr. Tom Bridgman of the “Dealing with Creationism in Astronomy” blog.

June 2, 2014

Request for Questions: Electric Universe


In what promises to be as epic as – or even more so than – when the Flintstones met the Jetsons, or if the Love Boat ever went to Fantasy Island, Exposing PseudoAstronomy will be meeting up with Crank Astronomy / Dealing with Creationism in Astronomy for a future episode, possibly two, or possibly more down the road.

I will be interviewing Tom Bridgman in a few weeks for at least one episode to be released in July. His area of expertise is the bane of my existence – electricity and magnetism – and he has talked a lot about the electric universe (or “EU”) idea on his blog before. I’ve gotten a lot of requests from listeners and readers to talk about this, but there’s no way I can do it justice.

I think Tom can.

We’re going to talk briefly about the history of EU and then probably about the “electric sun” idea, but he and I want to open this up to any questions that you, the readers/listeners, may have for me to ask or topics for him to talk about. If there are a lot, perhaps we’ll go longer and split into multiple episodes.

Please use the Comments here to put down topics/questions for discussion.

April 9, 2014

The Pseudoscience of Whipping Cream


On Saturday, I baked a honey ginger sponge cake. As a recipe with no added fat (the only non-negligible fat coming from the egg yolks), the cake is somewhat dry, so I usually serve it with whipped cream and berries. Quite good when it turns out right (about 10-20% of the time it somehow separates and a weird, rubbery, eggy layer forms on the bottom).

So, I went to the ‘fridge and pulled out a half pint (why don’t they just call it a cup?) of cream and put it in the mixer – as I’ve done many times – with some powdered sugar and vanilla extract. Five minutes later, I had aerated cream. Ten minutes later, I had aerated cream. Why wouldn’t my cream whip!?

I scoured the internet. Yes, I had bought cream, not whole milk. Yes, I had chilled the bowl. Yes, I even tried sprinkling gelatin and making sure the mixer was oriented along a magnetic field line but not anti-parallel to the nearest ley line. Okay, not that last bit.

This is where I encountered some of the oddest pseudoscience I’ve seen in a while: people trying to give advice or reasons why cream wouldn’t whip up. Among them were:

  • I should add a touch of lemon extract (an acid) to help break down bonds.
  • Add cream of tartar.
  • Add gelatin.
  • Use a copper bowl instead of stainless steel.
  • You can’t use pasteurized – and especially not ultra-pasteurized – cream.
  • Could be that the cows had an off-week.
  • You have to start the mixer at a low speed.
  • You can’t add the sugar at the beginning, you have to add it after it’s started to get fluffy.
  • You have to add sugar at the beginning (“it won’t whip on its own”).
  • The bowl wasn’t cold enough (“They must be super cold.”).
  • You have to use the whisk attachment, not the egg beaters attachment.
  • Cream that had been frozen won’t whip.

I can, with practically 100% certainty, say that all of those are bulls–t. The real reason why my cream wouldn’t whip? I accidentally bought Table Cream instead of Whipping or Heavy Cream, meaning the butterfat content was 18% instead of 30-38%.

There is some very basic physics to this: Whipped cream is a fat-stabilized foam, meaning that you have to beat tiny air bubbles into the fat matrix of the cream. If there is not enough fat, it cannot support the air, and the air will simply diffuse out. If there is enough fat, it can trap the air. The one bit in the above list that was sort of correct is that colder fat will hold the air a little better (it’s somewhat stickier), so it does help a bit if your cream is just above freezing and you use a cold bowl to keep it colder longer. However, that is far from necessary. And cream of tartar is a general stabilizer in whipped things, so that could help, but it is not a fat substitute — you still need to be able to whip the cream in the first place; the cream of tartar will just help keep it whipped once you’re done.

Everything else in that list is wrong. Acids can denature fats, so they can actually PREVENT the cream from whipping. Gelatin could work if you do it right — it has to be dissolved first in the liquid; if you add it directly to the whipping cream, you’ll just get granules of gelatin. Pasteurization has nothing to do with changing the structure of the fat, and the same goes for the cows having an off-week or the cream having been frozen at some point in the past. Starting the mixer at low speed just means it won’t splash as much and will take longer. The two sugar claims are contradictory, and it doesn’t matter how you get the air in so long as it’s tiny air bubbles.

I guess I shouldn’t be surprised that pseudoscience crops up everywhere. But, I was amazed at the amount of it for something so simple as whipping cream, and that only on one site out of the dozen that I looked at did someone suggest ensuring that the cream was 30-38% butterfat, and that even if it says “cream” on the container, it can be below that and won’t whip.

December 1, 2013

Podcast Episode 94: Error and Uncertainty in Science


Terminology
Episodes. Hopefully not
A boring topic?

Another unconventional episode, this one focuses on terminology and what is meant by “accuracy,” “precision,” “error,” and “uncertainty” in science. And, especially, different sources and types of error.

The episode also – surprisingly given my time constraints right now – has all of the other usual segments: Q&A (about asteroid Apophis), Feedback about the Data Quality Act, and even a Puzzler! (Thanks to Leonard for sending in the puzzler for this episode.) And the obligatory Coast to Coast AM clip.

I also talk a bit about meetup plans in Australia, especially the Launceston Skeptics in the Pub on January 2, 2014, where I’ll be talking about the Lunar Ziggurat saga, not only from a skeptical point of view, but from an astronomical one as well as from a more social-science point of view — dealing with “the crazies.” I have not yet started to write the presentation, but I personally think it’s fascinating how it’s playing out in my head.

August 22, 2013

Podcast Episode 84: David Sereda’s Claims Clip Show, Part 2


David Sereda:
UFOs, quantum, new-age …
Let’s see what’s out there.

Whew. This one took a long time to put together and get through. Eleven clips from Coast to Coast with David Sereda making various claims, and me explaining what parts of the physics and astronomy are incorrect.

The purpose of this episode is to move on from the background I gave in Part 1 to a very clip-y show with lots of different claims to explore. It’s an interesting episode, I think. Not only for style, but for content. Let me know what you think.

August 11, 2013

Podcast Episode 83: David Sereda’s Claims Clip Show, Part 1


David Sereda:
UFOs, quantum, new-age …
Let’s see what’s out there.

After realizing I had around 10 minutes of clips, lots already written, and more I wanted to write, this is Part 1 of a two-part mini-series on the claims of David Sereda.

The purpose of this episode is to provide a background into how Sereda went from a UFOlogist to a more generic new-ager with a few specific claims of his own. I then go into two of his main claims (of MANY that I’ll go more into next time) and wrap up with when giving your professional background becomes an argument from authority logical fallacy. Actually, almost everything that Sereda says is a Name that Logical Fallacy exercise.

This episode “required” me to listen to approximately 40 hours of Coast to Coast AM. I took nearly 10,000 words of notes. I think I may take up drinking …

Again, the new blog is WND Watch.

August 1, 2013

Podcast Episode 82: How to Design a Hyperdimensional Physics Experiment


Hyper-D physics
Could be tested with a watch.
So, is Hoagland right?

This is a longer episode, over 40 minutes long. Hopefully I didn’t drone too much. The episode is based on a blog post from last May, going through how one could design an experiment IF you assume that EVERY SINGLE BIT of what Richard Hoagland says about hyperdimensional physics is true. It’s meticulous. Which is why it’s long. And I show why, quite literally, Richard’s data as they are currently presented are meaningless.

And now, seriously, the next episode will be about claims made by David Sereda on the structure of … stuff. He isn’t in this episode because I had about 40 hrs of Coast to Coast audio to listen to, and I have about 16 hrs left. So, yeah, next time.

BTW, link to the new blog is: WND Watch.

May 26, 2013

Properly Designing an Experiment to Measure Richard Hoagland’s Torsion Field, If It Were Real


Introduction

Warning: This is a long post, and it’s a rough draft for a future podcast episode. But it’s something I’ve wanted to write about for a long time.

Richard C. Hoagland has claimed now for at least a decade that there exists a “hyperdimensional torsion physics” which is based partly on spinning stuff. In his mind, the greater black governmental forces know about this and use it and keep it secret from us. It’s the key to “free energy” and anti-gravity and many other things.

Some of his strongest evidence is based on the frequency of a tuning fork inside a 40+ year-old watch. The purpose of this post is to assume Richard is correct, examine how an experiment using such a watch would need to be designed to provide evidence for his claim, and then to examine the evidence from it that Richard has provided.

Predictions

Richard has often stated, “Science is nothing if not predictions.” He’s also stated, “Science is nothing if not numbers,” or sometimes “… data.” He is fairly correct, at least in the first and the last of those statements: For any hypothesis to be useful, it must be testable. It must make a prediction, and that prediction must be tested.

Over the years, he has made innumerable claims about what his hyperdimensional or torsion physics “does” and predicts, though most of his predictions have come after the observation, which invalidates them as predictions, or at least renders them useless.

In particular, for this experiment we’re going to design, Hoagland has claimed that when a mass (such as a ball or planet) spins, it creates a “torsion field” that changes the inertia of other objects; he generally equates inertia with mass. Inertia isn’t actually mass, it’s the resistance of any object to a change in its motion. For our purposes here, we’ll even give him the benefit of the doubt, as either one is hypothetically testable with his tuning-fork-based watch.

So, his specific claim, as I have seen it, is that the mass of an object will change based on its orientation relative to a massive spinning object. In other words, if you are oriented along the axis of spin of, say, Earth, then your mass will change one way (increase or decrease), and if you are oriented perpendicular to that axis of spin, your mass will change the other way.

Let’s simplify things even further from this more specific claim: An object will change its mass in some direction in some orientation relative to a spinning object. This is part of the prediction we need to test.

According to Richard, the other part of this prediction is that to actually see this change, big spinning objects have to align in order to increase or decrease the mass from what we normally see. So, for example, if your baseball is on Earth, it has its mass based on it being on Earth as Earth is spinning the way it does. But, if, say, Venus aligns with the sun and transits (as it did back in June 2012), then the mass will change from what it normally is. Or, like during a solar eclipse. This is the other part of the prediction we need to test.

Hoagland also has other claims, like you have to be at sacred or “high energy” sites or somewhere “near” ±N·19.5° on Earth (where N is an integer multiple, and “near” means you can be ±8° or so from that multiple … so much for a specific prediction). For example, this apparently justifies his begging for people to pay for him and his significant other to go to Egypt last year during that Venus transit. Or taking his equipment on December 21, 2012 (when there wasn’t anything special alignment-wise…) to Chichen Itza, or going at some random time to Stonehenge. Yes, this is beginning to sound even more like magic, but for the purposes of our experimental design, let’s leave this part alone, at least for now.

Designing an Experiment: Equipment

“Expat” goes into much more detail on the specifics of Hoagland’s equipment, here.

To put it briefly, Richard uses a >40-year-old Accutron watch that has a small tuning fork in it which provides the basic unit of time for the watch. A tuning fork’s vibration rate (its frequency) depends on several things, including the length of the prongs, the material used, and its moment of inertia. So, if the mass changes, or the moment of inertia changes, then the tuning fork will change frequency, meaning the watch will run either fast or slow.

The second piece of equipment is a laptop computer, with diagnostic software that can read the frequency of the watch, and a connection to the watch.

So, we have the basic setup with a basic premise: During an astronomical alignment event, Hoagland’s Accutron watch should deviate from its expected frequency.
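To make that premise a bit more concrete, here is a minimal sketch (in Python) of how a fork’s frequency depends on its prongs’ stiffness and inertia, treating one prong as a simple cantilever beam. The dimensions and material values are purely illustrative stand-ins of my own, not real Accutron specifications.

```python
# A minimal sketch, treating one prong of the fork as a rectangular cantilever
# beam. All dimensions/material values below are illustrative stand-ins, NOT
# real Accutron specifications.
import math

def prong_frequency(E, rho, L, t, b):
    """First bending-mode frequency (Hz) of a rectangular cantilever prong.
    E: Young's modulus (Pa), rho: density (kg/m^3), L: prong length (m),
    t: thickness in the bending direction (m), b: width (m)."""
    I = b * t**3 / 12.0   # second moment of area of the cross-section
    A = b * t             # cross-sectional area
    return (1.875**2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A)) / L**2

# A hypothetical steel-like prong that lands near the Accutron's 360 Hz:
f0 = prong_frequency(E=2.0e11, rho=7.8e3, L=2.54e-2, t=2.8e-4, b=1.0e-3)

# If a "torsion field" somehow increased the prong's effective inertia by 1%
# with stiffness unchanged, frequency scales as 1/sqrt(inertia):
f1 = prong_frequency(E=2.0e11, rho=7.8e3 * 1.01, L=2.54e-2, t=2.8e-4, b=1.0e-3)

print(f"baseline ~{f0:.0f} Hz; +1% inertia -> ~{f1:.0f} Hz "
      f"({100.0 * (f1 / f0 - 1.0):+.2f}%, i.e. about -0.5x the inertia change)")
```

The only point here is the scaling: a small fractional change in the fork’s effective inertia should show up as roughly half that fractional change (with the opposite sign) in the frequency the software logs.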

Designing an Experiment: Baseline

After we have designed an experiment and obtained equipment, usually the bulk of time is spent testing and calibrating that equipment. That’s what would need to be done in our hypothetical experiment here.

What this means is that we need to look up when there are no alignments that should affect our results, and then hook the watch up to the computer and measure the frequency. For a long time. Much longer than you expect to use the watch during the actual experiment.

You need to do this to understand how the equipment acts under normal circumstances. Without that, you can’t know if it acts differently – which is what your prediction is – during the time when you think it should. For example, let’s say that I only turn on a special fancy light over my special table when I have important people over for dinner. I notice that it flickers every time. I conclude that the light only flickers when there are important people there. Unfortunately, without the baseline measurement (turning on the light when there AREN’T important people there and seeing if it flickers), my conclusion is invalidated.

So, in our hypothetical experiment, we test the watch. If it deviates at all from the manufacturer’s specifications during our baseline measurements (say, a 24-hour test), then we need to get a new one. Or we need to, say, make sure that the cables connecting the watch to the computer are connected properly and aren’t prone to surges or something else that could throw off the measurement. Make sure the software is working properly. Maybe try using a different computer.

In other words, we need to make sure that all of our equipment behaves as expected during our baseline measurements when nothing that our hypothesis predicts should affect it is going on.

Lots of statistical analyses would then be run to characterize the baseline behavior to compare with the later experiment and determine if it is statistically different.
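As a concrete illustration, here is a minimal sketch of that kind of baseline characterization, with simulated readings standing in for the real log; the nominal 360 Hz frequency, ~3 Hz scatter, and 0.1-second sampling over 24 hours are just the illustrative numbers I use later in this post.

```python
# Minimal sketch of a baseline characterization: summarize 24 hours of
# readings (0.1 s cadence = 864,000 points) as a mean, a scatter, and a drift.
# The simulated array below stands in for the real logged frequencies.
import numpy as np

rng = np.random.default_rng(0)
baseline = 360.0 + 3.0 * rng.standard_normal(864_000)  # Hz, simulated readings
t = np.arange(baseline.size) * 0.1                     # seconds since start

mean = baseline.mean()
sigma = baseline.std(ddof=1)
drift_per_day = np.polyfit(t, baseline, 1)[0] * 86_400  # Hz/day, linear trend

print(f"baseline: {mean:.2f} +/- {sigma:.2f} Hz; drift {drift_per_day:+.3f} Hz/day")
# Anything the watch/software does outside this characterized behavior during
# quiet hours is an equipment problem to fix BEFORE running the real experiment.
```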

Designing an Experiment: Running It

After we have working equipment, verified equipment, and a well documented and analyzed baseline, we then perform our actual measurements. Say, turn on our experiment during a solar eclipse. Or, if you want to follow the claim that we need to do this at some “high energy site,” then you’d need to take your equipment there and also get a baseline just to make sure that you haven’t broken your equipment in transit or messed up the setup.

Then, you gather your data. You run the experiment in the exact same way as you ran it before when doing your baseline.

Data Analysis

In our basic experiment, with our basic premise, the data analysis should be fairly easy.

Remember that the prediction is that, during the alignment event, the inertia of the tuning fork changes. Maybe it’s just me, but based on this premise, here’s what I would expect to see during the transit of Venus across the sun (if the hypothesis were true): The computer would record data identical to the baseline while Venus is away from the sun. When Venus makes contact with the sun’s disk, you would start to see a deviation that increases until Venus’ disk is fully within the sun’s. Then, it would sit at a steady value, different from the baseline, for the duration of the transit. Or perhaps it would increase slowly until Venus is deepest within the sun’s disk, then decrease slightly until Venus’ limb again makes contact with the sun’s. Then you’d get a rapid return to baseline as Venus’ disk exits the sun’s, and you’d have a steady baseline thereafter.
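If it helps to picture that expected signal, here is a minimal sketch of the predicted profile as a piecewise function of time; the contact times and the size of the offset are arbitrary illustrative values of mine, since the hypothesis doesn’t specify them.

```python
# Minimal sketch of the frequency-vs-time profile one would predict IF the
# hypothesis were true: baseline, ramp during ingress, steady offset during
# the transit, ramp back during egress, baseline again. Illustrative numbers.
import numpy as np

def predicted_frequency(t, t1, t2, t3, t4, f0=360.0, df=2.0):
    """Piecewise-linear expectation; t1..t4 are the four contact times (hours)."""
    f = np.full_like(t, f0, dtype=float)
    ingress = (t >= t1) & (t < t2)
    full    = (t >= t2) & (t < t3)
    egress  = (t >= t3) & (t < t4)
    f[ingress] = f0 + df * (t[ingress] - t1) / (t2 - t1)  # ramp away from baseline
    f[full]    = f0 + df                                  # steady, offset value
    f[egress]  = f0 + df * (t4 - t[egress]) / (t4 - t3)   # ramp back to baseline
    return f

hours = np.linspace(0.0, 10.0, 1001)
profile = predicted_frequency(hours, t1=2.0, t2=2.5, t3=8.5, t4=9.0)
print(profile[::100])  # coarse look at the expected shape
```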

If the change is very slight, this is where the statistics come in: You need to determine whether the variation you see is different enough from baseline to be considered a real effect. Let’s say, for example, during baseline measurements the average frequency is 360 Hz but that it deviates between 357 and 363 fairly often. So your range is 360±3 Hz (we’re simplifying things here). You do this for a very long time, getting, say, 24 hrs of data and you take a reading every 0.1 seconds, so you have 864,000 data points — a fairly large number from which to get a robust statistical average.

Now let’s say that from your location, the Venus transit lasted only 1 minute (they last many hours, but I’m using this as an example; bear with me). You have 600 data points. You get results that vary around 360 Hz, but it may trend to 365, or have a spike down to 300, and then flatten around 358. Do you have enough data points (only 600) to get a meaningful average? To get a meaningful average that you can say is statistically different enough from 360±3 Hz that this is a meaningful result?

In physics, we usually use a 5-sigma significance, meaning that, if 360±3 Hz represents our average ± 1 standard deviation (1 standard deviation means that about 68% of the data points will be in that range), then 5-sigma is 360±15 Hz. 5-sigma means that about 99.99994% of the data will be in that range. This means that, to be a significant difference, we have to have an average during the Venus transit of, say, 400±10 Hz (where 1-sigma = 2 here, so 5-sigma = 10 Hz).

Instead, in the scenario I described two paragraphs ago, you’d probably get an average around 362 with a 5-sigma of ±50 Hz. This is NOT statistically significant. That means the null hypothesis – that there is no hyperdimensional-physics-driven torsion field – cannot be rejected.
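Here is a minimal sketch of that simplified comparison: build each window’s mean ± 5-sigma interval from its own readings and call the shift significant only if the two intervals don’t overlap. This is the rough criterion described above, not a full hypothesis test, and all the numbers are the illustrative ones from the text.

```python
# Minimal sketch of the simplified "do the 5-sigma intervals overlap?" check
# described above. Simulated readings stand in for the real data.
import numpy as np

def five_sigma_interval(x, n_sigma=5.0):
    """Return (low, high) = mean +/- n_sigma * sample standard deviation."""
    m, s = x.mean(), x.std(ddof=1)
    return m - n_sigma * s, m + n_sigma * s

def intervals_overlap(a, b):
    """True if intervals a and b overlap (i.e., NOT a significant shift)."""
    return a[0] <= b[1] and b[0] <= a[1]

rng = np.random.default_rng(1)
baseline = 360.0 + 3.0 * rng.standard_normal(864_000)  # 24 h of 0.1 s readings
transit  = 360.0 + 3.0 * rng.standard_normal(600)      # 1-minute event window

print("no real effect -> intervals overlap:",          # True: not significant
      intervals_overlap(five_sigma_interval(baseline), five_sigma_interval(transit)))
print("+40 Hz shift   -> intervals overlap:",          # False: significant shift
      intervals_overlap(five_sigma_interval(baseline), five_sigma_interval(transit + 40.0)))
```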

How could you get better statistics? You’d need different equipment. A tuning fork that is more consistently 360 Hz (so better manufacturing = more expensive). A longer event. Maybe a faster reader, so instead of reading the tuning fork’s frequency every 0.1 seconds, you can read it every 0.01 seconds. Those are the only ways I can think of.

Repeat!

Despite what one may think or want, regardless of how extraordinary one’s results are, you have to repeat them. Over and over again. Preferably, other, independent groups with independent equipment do the repetition. One experiment by one person does not a radical change in physics make.

What Does Richard Hoagland’s Data Look Like?

I’ve spent an excruciating >1700 words above explaining how you’d need to design and conduct an experiment with Richard’s apparatus and the basic form of his hypothesis. And why you have to do some of those more boring steps (like baseline measurements and statistical analysis).

To date, Richard claims to have conducted about ten trials. One was at Coral Castle in Florida, back, I think, during the 2004 Venus transit; another was outside Albuquerque, New Mexico, during the 2012 Venus transit. Another in Hawai’i during a solar eclipse, another at Stonehenge during something, another in Mexico on December 21, 2012, etc., etc.

For all of these, he has neither stated that he has performed baseline measurements, nor has he presented any such baseline data. So, right off the bat, his results – whatever they are – are meaningless because we don’t know how his equipment behaves under normal circumstances … I don’t know if the light above my special table flickers at all times or just when those important people are over.

He also has not shown all his data, despite promises to do so.

Here’s one plot that he says was taken at Coral Castle during the Venus transit back in 2004, and it’s typical of the kinds of graphs he shows, though this one has a bit more wiggling going on:

My reading of this figure shows that his watch appears to have a baseline frequency of around 360 Hz, as it should. The average, however, is stated to be 361.611 Hz, though we don’t know over what period that average was taken. The stated instability is 12.3 minutes per day, meaning it’s not a great watch.

On the actual graph, we see an apparent steady rate at around that 360 Hz, but we see spikes in the left half that deviate up to around ±0.3 Hz, and then we see a series of deviations during the time Venus is leaving the disk of the sun. But we see that the effect continues AFTER Venus is no longer in front of the sun. We see that it continues even more so than during the period when Venus’ disk was leaving the sun’s, and more than when Venus was in front of the sun. We also see that the rough steady rate when Venus is in front of the sun is at the same frequency as the apparent steady rate when Venus is off the sun’s disk.

From the scroll bar at the bottom, we can also see he’s not showing us all the data he collected, that he DID run it after Venus exited the sun’s disk, but we’re only seeing a 1.4-hr window.

Interestingly, we also have this:

Same location, same Accutron, some of the same time, same number of samples, same average rate, same last reading.

But DIFFERENT traces that are supposed to be happening at the same time! Maybe he mislabeled something. I’d prefer not to say that he faked his data. At the very least, this calls into question A LOT of his work in this.

What Conclusions Can Be Drawn from Richard’s Public Data?

None.

As I stated above, the lack of any baseline measurements automatically means his data are useless because we don’t know how the watch acts under “normal” circumstances.

That aside, looking at his data that he has released in picture form (as in, we don’t have something like a time-series text file we can graph and run statistics on), it does not behave as one would predict from Richard’s hypothesis.

Other plots he presents from other events show even more steady state readings and then spikes up to 465 Hz at random times during or near when his special times are supposed to be. None of those are what one would predict from his hypothesis.

What Conclusions does Richard Draw from His Data?

“stunning ‘physics anomalies'”

“staggering technological implications of these simple torsion measurements — for REAL ‘free energy’ … for REAL ‘anti-gravity’ … for REAL ‘civilian inheritance of the riches of an entire solar system …'”

“These Enterprise Accutron results, painstakingly recorded in 2004, now overwhelmingly confirm– We DO live in a Hyperdimensional Solar System … with ALL those attendant implications.”

Et cetera.

Final Thoughts

First, as with all scientific endeavors, please let me know if I’ve left anything out or if I’ve made a mistake.

With that said, I’ll repeat that this is something I’ve been wanting to write about for a long time, and I finally had the three hours to do it (with some breaks). The craziness of claiming significant results from what – by all honest appearances – looks like a broken watch is the height of gall, ignorance, or some other words that I won’t say.

With Richard, I know he knows better, because it’s been pointed out many times what he needs to do to make his experiment valid.

But this also gets to a broader issue of a so-called “amateur scientist” who may wish to conduct an experiment to try to “prove” their non-mainstream idea: They have to do this extra stuff. Doing your experiment and getting weird results does not prove anything. This is also why doing science is hard and why maybe <5% of it is the glamorous press release and cool results. So much of it is testing, data gathering, and data reduction and then repeating over and over again.

Richard (and others) seem to think they can do a quick experiment and then that magically overturns centuries of "established" science. It doesn't.
