Exposing PseudoAstronomy

April 9, 2014

The Pseudoscience of Whipping Cream


On Saturday, I baked a honey-ginger sponge cake. As a recipe with no added fat (the only non-negligible fat being from the egg yolks), the cake is somewhat dry, so I usually serve it with whipped cream and berries. Quite good when it turns out right (about 10-20% of the time it somehow separates and a weird, rubbery, eggy layer forms on the bottom).

So, I went to the ‘fridge and pulled out a half pint (why don’t they just call it a cup?) of cream and put it in the mixer – as I’ve done many times – with some powdered sugar and vanilla extract. Five minutes later, I had aerated cream. Ten minutes later, I had aerated cream. Why wouldn’t my cream whip!?

I scoured the internet. Yes, I had bought cream, not whole milk. Yes, I had chilled the bowl. Yes, I even tried sprinkling in gelatin and making sure the mixer was oriented along a magnetic field line but not anti-parallel to the nearest ley line. Okay, not that last bit.

This is where I encountered some of the oddest pseudoscience I've seen in a while: people trying to give advice or reasons why cream wouldn't whip up. Among them were:

  • I should add a touch of lemon extract (an acid) to help break down bonds.
  • Add cream of tartar.
  • Add gelatin.
  • Use a copper bowl instead of stainless steel.
  • You can’t use Pasteurized – and especially not ultra pasteurized – cream.
  • Could be that the cows had an off-week.
  • You have to start the mixer at a low speed.
  • You can’t add the sugar at the beginning, you have to add it after it’s started to get fluffy.
  • You have to add sugar at the beginning (“it wont whip on it’s own”).
  • The bowl wasn’t cold enough (“They must be super cold.”).
  • You have to use the whisk attachment, not the egg beaters attachment.
  • Cream that had been frozen won’t whip.

I can, with practically 100% certainty, say that all of those are bulls–t. The real reason why my cream wouldn’t whip? I accidentally bought Table Cream instead of Whipping or Heavy Cream, meaning the butterfat content was 18% instead of 30-38%.

There is some very basic physics to this: Whipped cream is a fat-stabilized foam, meaning you have to beat tiny air bubbles into the fat matrix of the cream. If there is not enough fat, the fat globules cannot support the air, and the air will simply diffuse back out. If there is enough fat, they can trap the air. The one bit in the above list that was sort of correct is that colder fat will hold the air a little better (it's somewhat stickier), so it does help a bit if your cream is just above freezing and you use a cold bowl to keep it colder longer. However, that is far from necessary. And cream of tartar is a general stabilizer in whipped things, so that could help, but it is not a fat substitute: you still need to be able to whip the cream to begin with; the cream of tartar will just help keep it whipped once you're done.

Everything else in that list is wrong. Acids denature proteins, not fats, and adding one can actually PREVENT the cream from whipping. Gelatin could work if you do it right — it has to be dissolved first in the liquid; if you add it directly to the whipping cream, you'll just get granules of gelatin. Pasteurization has nothing to do with changing the structure of the fat, same with the cows having an off-week or the cream having been frozen at some point in the past. The mixer starting at low speed just means it won't splash as much and it will take longer. The two sugar claims are contradictory, and it doesn't matter how you get the air in so long as it's tiny air bubbles.

I guess I shouldn't be surprised that pseudoscience crops up everywhere. But I was amazed at the amount of it for something as simple as whipping cream, and that only one site out of the dozen I looked at suggested making sure the cream was 30-38% butterfat, and noted that even if it says “cream” on the container, it can be below that and won't whip.

December 1, 2013

Podcast Episode 94: Error and Uncertainty in Science


Terminology
Episodes. Hopefully not
A boring topic?

Another unconventional episode, this one focuses on terminology and what is meant by “accuracy,” “precision,” “error,” and “uncertainty” in science. And, especially, different sources and types of error.

The episode also – surprisingly given my time constraints right now – has all of the other usual segments: Q&A (about asteroid Apophis), Feedback about the Data Quality Act, and even a Puzzler! (Thanks to Leonard for sending in the puzzler for this episode.) And the obligatory Coast to Coast AM clip.

I also talk a bit about meetup plans in Australia, especially the Launceston Skeptics in the Pub on January 2, 2014, where I’ll be talking about the Lunar Ziggurat saga, not only from a skeptical point of view, but from an astronomical one as well as from a more social science point of view — dealing with “the crazies.” I have not yet started to write the presentation, but I think the way it’s playing out in my head is fascinating.

August 22, 2013

Podcast Episode 84: David Sereda’s Claims Clip Show, Part 2


David Sereda:
UFOs, quantum, new-age …
Let’s see what’s out there.

Whew. This one took a long time to put together and get through. Eleven clips from Coast to Coast with David Sereda making various claims, and me explaining what parts of the physics and astronomy are incorrect.

The purpose of this episode is to move on from the background I gave in Part 1 to a very clip-y show with lots of different claims to explore. It’s an interesting episode, I think. Not only for style, but for content. Let me know what you think.

August 11, 2013

Podcast Episode 83: David Sereda’s Claims Clip Show, Part 1

Filed under: astronomy,new-age,physics,podcast — Stuart Robbins @ 10:16 pm

David Sereda:
UFOs, quantum, new-age …
Let’s see what’s out there.

After realizing I had around 10 minutes of clips, a lot already written, and more I wanted to write, this became Part 1 of a two-part mini-series on the claims of David Sereda.

The purpose of this episode is to provide some background on how Sereda went from a UFOlogist to a more generic new-ager with a few specific claims of his own. I then go into two of his main claims (of MANY that I’ll go more into next time) and wrap up with a discussion of when giving your professional background becomes an argument from authority logical fallacy. Actually, almost everything that Sereda says is a Name that Logical Fallacy exercise.

This episode “required” me to listen to approximately 40 hours of Coast to Coast AM. I took nearly 10,000 words of notes. I think I may take up drinking …

Again the new blog is WND Watch.

August 1, 2013

Podcast Episode 82: How to Design a Hyperdimensional Physics Experiment


Hyper-D physics
Could be tested with a watch.
So, is Hoagland right?

This is a longer episode, over 40 minutes long. Hopefully I didn’t drone on too much. The episode is based on a blog post from last May, going through how one could design an experiment IF you assume EVERY SINGLE BIT of what Richard Hoagland says about hyperdimensional physics is true. It’s meticulous. Which is why it’s long. And I show why, quite literally, Richard’s data as they are currently presented are meaningless.

And now, seriously, the next episode will be about claims made by David Sereda on the structure of … stuff. He wasn’t in this episode because I had about 40 hrs of Coast to Coast audio to listen to, and I have about 16 hrs left. So, yeah, next time.

BTW, link to the new blog is: WND Watch.

May 26, 2013

Properly Designing an Experiment to Measure Richard Hoagland’s Torsion Field, If It Were Real


Introduction

Warning: This is a long post, and it’s a rough draft for a future podcast episode. But it’s something I’ve wanted to write about for a long time.

Richard C. Hoagland has claimed now for at least a decade that there exists a “hyperdimensional torsion physics” which is based partly on spinning stuff. In his mind, the greater black governmental forces know about this and use it and keep it secret from us. It’s the key to “free energy” and anti-gravity and many other things.

Some of his strongest evidence is based on the frequency of a tuning fork inside a 40+ year-old watch. The purpose of this post is to assume Richard is correct, examine how an experiment using such a watch would need to be designed to provide evidence for his claim, and then to examine the evidence from it that Richard has provided.

Predictions

Richard has often stated, “Science is nothing if not predictions.” He’s also stated, “Science is nothing if not numbers,” or sometimes “… data.” He is fairly correct in these statements, or at least the first and the last: For any hypothesis to be useful, it must be testable. It must make a prediction, and that prediction must be tested.

Over the years, he has made innumerable claims about what his hyperdimensional or torsion physics “does” and predicts, though most of his predictions have come after the observation, which invalidates them as predictions, or at least renders them useless.

In particular, for this experiment we’re going to design, Hoagland has claimed that when a mass (such as a ball or planet) spins, it creates a “torsion field” that changes the inertia of other objects; he generally equates inertia with mass. Inertia isn’t actually mass; it’s the resistance of any object to a change in its motion. For our purposes here, we’ll even give him the benefit of the doubt, as either one is hypothetically testable with his tuning-fork-based watch.

So, his specific claim, as I have seen it, is that the mass of an object will change based on its orientation relative to a massive spinning object. In other words, if you are oriented along the axis of spin of, say, Earth, then your mass will change one way (increase or decrease), and if you are oriented perpendicular to that axis of spin, your mass will change the other way.

Let’s simplify even further from that more specific claim: An object’s mass will change in some direction (increase or decrease) at some orientation relative to a spinning object. This is part of the prediction we need to test.

According to Richard, the other part of this prediction is that, to actually see this change, big spinning objects have to align in order to increase or decrease the mass from what we normally see. So, for example, if your baseball is on Earth, it has its mass based on it being on Earth as Earth spins the way it does. But if, say, Venus aligns with the sun and transits (as it did back in June 2012), then the mass will change from what it normally is. Or, like during a solar eclipse. This is the other part of the prediction we need to test.

Hoagland also has other claims, like you have to be at sacred or “high energy” sites or somewhere “near” ±N·19.5° on Earth (where N is an integer multiple, and “near” means you can be ±8° or so from that multiple … so much for a specific prediction). For example, this apparently justifies his begging for people to pay for him and his significant other to go to Egypt last year during that Venus transit. Or taking his equipment on December 21, 2012 (when there wasn’t anything special alignment-wise…) to Chichen Itza, or going at some random time to Stonehenge. Yes, this is beginning to sound even more like magic, but for the purposes of our experimental design, let’s leave this part alone, at least for now.

Designing an Experiment: Equipment

“Expat” goes into much more detail on the specifics of Hoagland’s equipment, here.

To put it briefly, Richard uses a >40-year-old Accutron watch which has a small tuning fork in it that provides the basic unit of time for the watch. A tuning fork’s vibration rate (the frequency) depends on several things, including the length of the prongs, the material used, and its moment of inertia. So, if its mass or its moment of inertia changes, then the tuning fork will change frequency, meaning the watch will run either fast or slow.
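(As an aside, and this is my own addition rather than anything Hoagland provides: a tuning-fork prong can be modeled as a cantilever beam, and the standard first-mode result is roughly

    f \approx \frac{(1.875)^2}{2\pi L^2}\sqrt{\frac{E\,I}{\rho\,A}}

where L is the prong length, E the material stiffness, I the cross-section’s area moment of inertia, ρ the density, and A the cross-sectional area. The point is simply that the frequency is tied to the geometry and mass distribution of the prongs, so a genuine change in inertia would show up as a frequency shift.)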

The second piece of equipment is a laptop computer, with diagnostic software that can read the frequency of the watch, and a connection to the watch.

So, we have the basic setup with a basic premise: During an astronomical alignment event, Hoagland’s Accutron watch should deviate from its expected frequency.

Designing an Experiment: Baseline

After we have designed an experiment and obtained equipment, usually the bulk of time is spent testing and calibrating that equipment. That’s what would need to be done in our hypothetical experiment here.

What this means is that we need to look up when there are no alignments that should affect our results, and then hook the watch up to the computer and measure the frequency. For a long time. Much longer than you expect to use the watch during the actual experiment.

You need to do this to understand how the equipment acts under normal circumstances. Without that, you can’t know whether it acts differently – which is what your prediction is – during the time when you think it should. For example, let’s say that I only turn on a special fancy light over my special table when I have important people over for dinner. I notice that it flickers every time. I conclude that the light only flickers when there are important people there. Unfortunately, without the baseline measurement (turning on the light when there AREN’T important people there and seeing if it flickers), my conclusion is unfounded.

So, in our hypothetical experiment, we test the watch. If it deviates at all from the manufacturer’s specifications during our baseline measurements (say, a 24-hour test), then we need to get a new one. Or we need to, say, make sure that the cables connecting the watch to the computer are connected properly and aren’t prone to surges or something else that could throw off the measurement. Make sure the software is working properly. Maybe try using a different computer.

In other words, we need to make sure that all of our equipment behaves as expected during our baseline measurements when nothing that our hypothesis predicts should affect it is going on.

Lots of statistical analyses would then be run to characterize the baseline behavior to compare with the later experiment and determine if it is statistically different.
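To make that concrete, here is a minimal sketch (my own, in Python; the file name and the 0.1-second cadence are just assumptions for illustration) of what characterizing the baseline might look like:

    import numpy as np

    # Hypothetical baseline run: ~24 hours of tuning-fork frequency readings,
    # one reading every 0.1 s, stored one value per line in a text file.
    baseline = np.loadtxt("baseline_24h.txt")   # assumed file name

    mean = baseline.mean()
    sigma = baseline.std(ddof=1)                # sample standard deviation

    print(f"N = {baseline.size} readings")
    print(f"baseline frequency = {mean:.3f} +/- {sigma:.3f} Hz (1-sigma)")
    print(f"5-sigma band: {mean - 5*sigma:.3f} to {mean + 5*sigma:.3f} Hz")

Anything fancier (drift with temperature, trends over hours, and so on) would be characterized the same way, but even this much gives you numbers to compare the event data against.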

Designing an Experiment: Running It

After we have working equipment, verified equipment, and a well documented and analyzed baseline, we then perform our actual measurements. Say, turn on our experiment during a solar eclipse. Or, if you want to follow the claim that we need to do this at some “high energy site,” then you’d need to take your equipment there and also get a baseline just to make sure that you haven’t broken your equipment in transit or messed up the setup.

Then, you gather your data. You run the experiment in the exact same way as you ran it before when doing your baseline.

Data Analysis

In our basic experiment, with our basic premise, the data analysis should be fairly easy.

Remember that the prediction is that, during the alignment event, the inertia of the tuning fork changes. Maybe it’s just me, but based on this premise, here’s what I would expect to see during the transit of Venus across the sun (if the hypothesis were true): The computer would record data identical to the baseline while Venus is away from the sun. When Venus makes contact with the sun’s disk, you would start to see a deviation that would increase until Venus’ disk is fully within the sun’s. Then, it would be at a steady, different value from the baseline for the duration of the transit. Or perhaps it would increase slowly until Venus is deepest within the sun’s disk, then decrease slightly until Venus’ limb again makes contact with the sun’s. Then you’d get a rapid return to baseline as Venus’ disk exits the sun’s and you’d have a steady baseline thereafter.

If the change is very slight, this is where the statistics come in: You need to determine whether the variation you see is different enough from baseline to be considered a real effect. Let’s say, for example, during baseline measurements the average frequency is 360 Hz but that it deviates between 357 and 363 fairly often. So your range is 360±3 Hz (we’re simplifying things here). You do this for a very long time, getting, say, 24 hrs of data and you take a reading every 0.1 seconds, so you have 864,000 data points — a fairly large number from which to get a robust statistical average.

Now let’s say that from your location, the Venus transit lasted only 1 minute (they last many hours, but I’m using this as an example; bear with me). You have 600 data points. You get results that vary around 360 Hz, but it may trend to 365, or have a spike down to 300, and then flatten around 358. Do you have enough data points (only 600) to get a meaningful average? To get a meaningful average that you can say is statistically different enough from 360±3 Hz that this is a meaningful result?

In physics, we usually use a 5-sigma significance, meaning that, if 360±3 Hz represents our average ± 1 standard deviation (1 standard deviation means that about 68% of the data points will be in that range), then 5-sigma is 360±15 Hz. 5-sigma means that about 99.99994% of the data will be in that range. This means that, to be a significant difference, we have to have an average during the Venus transit of, say, 400±10 Hz (where 1-sigma = 2 here, so 5-sigma = 10 Hz).

Instead, in the scenario I described two paragraphs ago, you’d probably get an average around 362 Hz with a 5-sigma of ±50 Hz. This is NOT statistically significant. That means the null hypothesis – that there is no hyperdimensional-physics-driven torsion field – stands; it cannot be rejected.
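Here is a toy version of that comparison in Python (mine, not Hoagland’s; the numbers are the made-up ones from the text, and it uses the simplified “is the shift bigger than 5 sigma” criterion described above rather than a formal two-sample test):

    import numpy as np

    rng = np.random.default_rng(42)

    # Made-up numbers from the text: a 24-hour baseline of 360 +/- 3 Hz readings,
    # and a short, noisier 600-point window during the "event".
    baseline = rng.normal(360.0, 3.0, 864_000)
    event = rng.normal(362.0, 10.0, 600)

    shift = abs(event.mean() - baseline.mean())
    five_sigma = 5 * event.std(ddof=1)   # 5x the scatter in the event window

    print(f"shift from baseline: {shift:.2f} Hz")
    print(f"5-sigma on the event window: {five_sigma:.2f} Hz")
    print("significant" if shift > five_sigma else "NOT significant")

With these numbers, the shift is a couple of Hz against a roughly ±50 Hz 5-sigma band, so the verdict is the same as above: not significant.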

How could you get better statistics? You’d need different equipment. A tuning fork that is more consistently 360 Hz (so better manufacturing = more expensive). A longer event. Maybe a faster reader so that instead of reading the tuning fork’s frequency every 0.1 seconds, you can read it every 0.01 seconds. Those are the only ways I can think of.
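(The common thread in those options is the standard statistical result that the uncertainty on an average of N independent readings shrinks as

    \sigma_{\text{mean}} = \frac{\sigma}{\sqrt{N}}

so you either shrink σ with a better tuning fork or grow N with a longer event or faster sampling, with the caveat that very closely spaced readings of the same fork may not be fully independent.)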

Repeat!

Despite what one may think or want, regardless of how extraordinary one’s results are, you have to repeat them. Over and over again. Preferably, other independent groups with independent equipment do the repetition. One experiment by one person does not a radical change in physics make.

What Does Richard Hoagland’s Data Look Like?

I’ve spent an excruciating >1700 words above explaining how you’d need to design and conduct an experiment with Richard’s apparatus and the basic form of his hypothesis. And why you have to do some of those more boring steps (like baseline measurements and statistical analysis).

To date, Richard claims to have conducted about ten trials. One was at Coral Castle in Florida back, I think, during the 2004 Venus transit; another was outside Albuquerque in New Mexico during the 2012 Venus transit. Another in Hawai’i during a solar eclipse, another at Stonehenge during something, another in Mexico on December 21, 2012, etc., etc.

For all of these, he has neither stated that he has performed baseline measurements, nor has he presented any such baseline data. So, right off the bat, his results – whatever they are – are meaningless because we don’t know how his equipment behaves under normal circumstances … I don’t know if the light above my special table flickers at all times or just when those important people are over.

He also has not shown all his data, despite promises to do so.

Here’s one plot that he says was taken at Coral Castle during the Venus transit back in 2004, and it’s typical of the kinds of graphs he shows, though this one has a bit more wiggling going on:

My reading of this figure shows that his watch appears to have a baseline frequency of around 360 Hz, as it should. The average, however, is stated to be 361.611 Hz, though we don’t know over how long that average was taken. The instability is 12.3 minutes per day, meaning it’s not a great watch.
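(A quick bit of arithmetic, mine and not his: if that figure really means the watch gains or loses 12.3 minutes over a day, then

    \frac{12.3\ \text{min/day}}{1440\ \text{min/day}} \approx 0.85\%, \qquad 0.85\% \times 360\ \text{Hz} \approx 3\ \text{Hz}

which is an enormous wobble compared with the roughly ±0.3 Hz features being read off the plot.)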

On the actual graph, we see an apparent steady rate at around that 360 Hz, but we see spikes in the left half that deviate up to around ±0.3 Hz, and then we see a series of deviations during the time Venus is leaving the disk of the sun. But we see that the effect continues AFTER Venus is no longer in front of the sun. In fact, it continues even more strongly than during Venus’ egress from the sun’s disk, and more than when Venus was in front of the sun. We also see that the rough steady rate when Venus is in front of the sun is at the same frequency as the apparent steady rate when Venus is off the sun’s disk.

From the scroll bar at the bottom, we can also see he’s not showing us all the data he collected, that he DID run it after Venus exited the sun’s disk, but we’re only seeing a 1.4-hr window.

Interestingly, we also have this:

Same location, same Accutron, some of the same time, same number of samples, same average rate, same last reading.

But DIFFERENT traces that are supposed to be happening at the same time! Maybe he mislabeled something. I’d prefer not to say that he faked his data. At the very least, this calls into question A LOT of his work in this.

What Conclusions Can Be Drawn from Richard’s Public Data?

None.

As I stated above, the lack of any baseline measurements automatically means his data are useless because we don’t know how the watch acts under “normal” circumstances.

That aside, the data he has released in picture form (as in, we don’t have something like a time-series text file we can graph and run statistics on) do not behave as one would predict from Richard’s hypothesis.

Other plots he presents from other events show even more steady-state readings and then spikes up to 465 Hz at random times during or near his special times. None of those are what one would predict from his hypothesis.

What Conclusions does Richard Draw from His Data?

“stunning ‘physics anomalies’”

“staggering technological implications of these simple torsion measurements — for REAL ‘free energy’ … for REAL ‘anti-gravity’ … for REAL ‘civilian inheritance of the riches of an entire solar system …’”

“These Enterprise Accutron results, painstakingly recorded in 2004, now overwhelmingly confirm– We DO live in a Hyperdimensional Solar System … with ALL those attendant implications.”

Et cetera.

Final Thoughts

First, as with all scientific endeavors, please let me know if I’ve left anything out or if I’ve made a mistake.

With that said, I’ll repeat that this is something I’ve been wanting to write about for a long time, and I finally had the three hours to do it (with some breaks). The craziness of claiming significant results from what – by all honest appearances – looks like a broken watch is the height of gall, ignorance, or some other words that I won’t say.

With Richard, I know he knows better, because it’s been pointed out to him many times what he needs to do to make his experiment valid.

But this also gets to a broader issue of a so-called “amateur scientist” who may wish to conduct an experiment to try to “prove” their non-mainstream idea: They have to do this extra stuff. Doing your experiment and getting weird results does not prove anything. This is also why doing science is hard and why maybe <5% of it is the glamorous press release and cool results. So much of it is testing, data gathering, and data reduction and then repeating over and over again.

Richard (and others) seem to think they can do a quick experiment and then that magically overturns centuries of "established" science. It doesn't.

February 8, 2013

Podcast #64: Quantum Nonsense


Episode 64, “Quantum Nonsense,” has been posted. It’s a combination of some new material and two previous blog posts. The topic is basically an intro to quantum mechanics and a discussion of how it is used and abused by pseudoscientists today. And, I branch away from Coast to Coast for other sources of audio clips! There’s also a puzzler and an addendum to the previous episode.

December 12, 2012

Being Pedantic Over Laser Colors on Law & Order: SVU


Introduction

This is a quick post about me being a crotchety young man. But, one of the founding ideas of this blog was to point out not only bad astronomy/physics/geology that I see out there on crazy blogs and Coast to Coast, but also what I see in the media, on TV, and in movies.

I’ve been going through and watching old Law & Order: SVU episodes. I enjoy the series and watched it with my mom during part of high school and sometimes when home from college. Now that it’s in its 14th season, I have a lot of catching up to do.

Setup

I was watching season 5 episode 1 last night while doing some other work on the side (you can decide for yourself if that’s an intended or unintended dangling participle). About 31 minutes into the episode, a lab tech makes a big deal about restoring a receipt and the detectives are hoping it can lead them to a woman who had been kidnapped. Only problem is the receipt is saturated with blood and unreadable … under normal lights.

Good Premise

The lab tech makes a big deal about how it’s illegible “to the naked eye, that’s why God invented lasers. Different frequencies reveal different inks.”

This premise is true. In fact, over Thanksgiving, I was back in Ohio visiting with my parents and went to the Cincinnati Museum Center’s special exhibit on the Dead Sea Scrolls. They had a side room on their digital imaging process for documenting the scrolls and, “using technology developed by NASA” (people who don’t think the space program has any practical applications …), they showed how the fragments are all being imaged with 12 different colors of light: seven visible, five IR. Here’s the video that I had seen on it, and here’s a shorter version that just shows one scroll under the different wavelengths.

What the video pointed out is that several letters were not visible due to burns or dirt. But, under different wavelengths of light, the dirt becomes transparent while the ink remains opaque and you can read them.

This is exactly why astronomers use different wavelengths of light to study different things. Anyway …

[Image: Law & Order: SVU — Laser Colors Mistake]


Bad Science

In what was obviously done for visual interest on television, the lab tech then goes through and supposedly illuminates the receipt under three different laser colors to try to bring out the text. First, she says she’s using 400 nm, which shows up as a bright blue verging on violet. Then 500 nm, which shows up as a ruddy orange. Finally, 600 nm, which is a brilliant green, and they can read it and go and find the girl.

Anyone shaking their head right now?

If not, let me explain the first issue: Those colors are wrong for those wavelengths. I should know — I just purchased four lasers, one each at 405 nm, 460 nm, 532 nm, and 650 nm. The colors for those are deep purple, bluish-violet, green, and deep red.

As in, if the colors shown had actually matched the wavelengths she stated, 400 nm would have been a very deep purple, bordering on invisibility to the human eye (the edge of human vision is somewhere around 380-400 nm). 500 nm should have been a turquoise blue-green. 600 nm would have been an orange-yellow bordering on red (the yellow sodium lights in parking lots are at 589 nm).
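If you want to check wavelength-to-color claims like this yourself, the usual piecewise approximation (often credited to Dan Bruton) is easy to code up. Here is a minimal Python sketch of it; it approximates perceived color and is not real colorimetry:

    def wavelength_to_rgb(nm):
        """Rough RGB (0-1 floats) for a visible wavelength in nanometers.
        Piecewise-linear approximation; outside ~380-780 nm returns black."""
        if 380 <= nm < 440:
            return ((440 - nm) / 60.0, 0.0, 1.0)      # violet
        if 440 <= nm < 490:
            return (0.0, (nm - 440) / 50.0, 1.0)      # blue to cyan
        if 490 <= nm < 510:
            return (0.0, 1.0, (510 - nm) / 20.0)      # cyan to green
        if 510 <= nm < 580:
            return ((nm - 510) / 70.0, 1.0, 0.0)      # green to yellow
        if 580 <= nm < 645:
            return (1.0, (645 - nm) / 65.0, 0.0)      # yellow to red
        if 645 <= nm <= 780:
            return (1.0, 0.0, 0.0)                    # red
        return (0.0, 0.0, 0.0)

    for nm in (400, 500, 600, 405, 460, 532, 650):
        r, g, b = wavelength_to_rgb(nm)
        print(f"{nm} nm -> R={r:.2f} G={g:.2f} B={b:.2f}")

Running it, 400 nm comes out a blue-violet, 500 nm a blue-green, and 600 nm an orange: consistent with the colors described above, and nothing like what the show displayed.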

A second problem is that lasers are not made at those wavelengths. I guess it’s theoretically possible, and there might be some very rare laser of which I’m not aware, and while there are a large variety of lasers out there, 400, 500, and 600 nm are not among them.

My third of three problems with this is that lasers generally make a dot. You can have spreaders and put in gratings and whatever to make broader patterns, but generally speaking, they’re dots. This was basically like a broad light. What she showed (and what should have been used, and what she should have simply said) was a diode light source. Speaking as someone who’s been kept awake by the diode power lights on a computer case, I can say diodes create a broad illumination, not a tiny point of light.

But I guess lasers sound cooler.

Final Thoughts

Yes, pedantic, nit-picking, etc. Does this have any bearing on broader society? Probably not.

But, then again, there are two issues here. One is that she was simply wrong. Portraying bad science or getting the science wrong is … wrong. It shouldn’t have happened.

The second issue is that someone might pick up on this and think that’s the way things work — that those are the colors that correspond with those wavelengths. And then it could take a long time for them to unlearn it. For example, I had an 8th grade science teacher who claimed a kilometer was longer than a mile (among other things). Three years later, I was in AP music theory class and, as usual, we were doing nothing, so I was complaining with a senior about incompetent teachers at our school. And I brought up the 8th grade teacher and units of length. And she exclaimed that because she had been taught that, too, by the guy the year before I had him, it screwed her up for two whole years. It took 10th grade chemistry and a sit-down with the chemistry teacher before she got everything straightened out such that she now knows a centimeter is shorter than an inch.

Perhaps that’s an extreme example, or perhaps it’s an example that further illustrates how small an effect this sort of thing actually has, but it’s stuck in my mind.

And there are a lot of less useful blog posts out there, and I’m tired of hearing about 12/12/12 and 12/21/12.

P.S. If I had to guess, I’d say that the “400 nm” was around 470, the “500 nm” around 600 or so, and the “600 nm” right on that classic green laser color of 532 nm.

November 12, 2012

Falling through Earth

Filed under: general science,movies,physics — Stuart Robbins @ 8:18 pm

Just a quick post for today (busy busy here as usual, stuff should settle down a bit come December …). What would it be like to take an elevator trip through Earth from one side to the other?

Apparently, in the remake of the hilariously (poor science-)fiction movie Total Recall, the remake, which I have not seen, there is a plot point of taking an elevator trip through Earth’s center from one side to the other. Apparently this is the only way to safely travel from one city to the other … I hope it’s not just some stupid thing that seems “cool” and serves no other purpose than to spend a budget on special effects.

Anyway, I came across a Wired article today where a physicist goes into great detail explaining what it would actually be like to travel through Earth’s center. As with all great investigations when we have too much time on our hands, he even does numerical simulations, though it looks like he graphed them in Excel … but I won’t hold it against him.

He shows several interesting things, including that the elevator would reach speeds no slower than 8 km/sec (around 5-6 miles/second). That’s really really fast. If he includes the higher density of Earth’s core, then you reach speeds up to 50% faster than that, even.
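If you want to play with this yourself, the uniform-density case is easy to simulate. Here is my own toy Python version (not the code from the Wired piece), which ignores air drag, Earth’s rotation, and the denser core:

    # Free fall through a tunnel in a uniform-density Earth.
    # Inside a uniform sphere, gravity scales linearly with radius: a(r) = -g * r / R.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24         # Earth's mass, kg
    R = 6.371e6          # Earth's radius, m
    g = G * M / R**2     # surface gravity, ~9.8 m/s^2

    r, v, t, dt = R, 0.0, 0.0, 0.1   # start at rest at the surface; 0.1 s steps
    max_speed = 0.0
    while r > -R:                    # integrate until we reach the far surface
        a = -g * r / R               # gravity always points toward the center
        v += a * dt                  # simple semi-implicit Euler step
        r += v * dt
        t += dt
        max_speed = max(max_speed, abs(v))

    print(f"peak speed at the center: {max_speed/1000:.1f} km/s")
    print(f"one-way trip time: {t/60:.0f} minutes")

It comes out to a peak speed of roughly 7.9 km/s and a one-way trip of about 42 minutes, consistent with the ~8 km/s figure above; adding a denser core pushes the peak speed higher, as the article says.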

He also addresses the concept of weightlessness. This is something that all physics majors learn about in detail in Classical Mechanics classes (Physics I on steroids after your first and usually second year). But, I’ve always found it somewhat difficult to easily convey why, without drawing diagrams of circles and triangles, you would be weightless if you were stationary at Earth’s center. He goes through that in agonizing detail before letting you know that, actually, in the scenario in this version of Total Recall, you’d be weightless the whole time because you’re in free fall.

So, as I said, quick post for today; head over to Wired if you have a few minutes to read about the physics of taking an elevator trip through the Center of the Earth.

August 2, 2012

Podcast Episode 46: Immanuel Velikovsky’s “Worlds in Collision”


The many times requested episode on Immanuel Velikovsky has arrived, and it’s arrived for the first anniversary of my podcast. Yup, the first episode, on the “dark side” of the moon, came out August 1, 2011. Hard to believe that it’s been a year.

This episode’s main segment is over 20 minutes long, and yet it’s an incredibly abridged episode discussing a distillation of his ideas from “Worlds in Collision,” his first book. I go over some of Velikovsky’s bio, the politics surrounding him when he introduced his book in 1950, and then a few of the lines of evidence he used plus several refutations of his argument.

This episode may seem a tad preachy at some points. It’s hard, when talking about Velikovsky, to address his evidence because there really is none for his claims, so I used the episode to discuss how one should and should not go about science, and how Velikovsky failed at it. Rather than building his ideas from available observations and then forming testable predictions from them, he threw out most branches of science and relied on scattered myths from throughout the world for his evidence. Sorry, that ain’t how it’s done.

As the first anniversary episode, I go over some obligatory stats at the end. I’m relying on all of you to increase them for August 1, 2013. :)
