Exposing PseudoAstronomy

October 5, 2013

Podcast Episode 88: Is Phobos Hollow?


Mars’ odd moon Phobos:
Is it hollow? Is it not?
Hoagland thinks he knows.

It’s a bit late (but back-dated), but as I ‘splained, I was co-leading a field trip to Yellowstone National Park. And then the government shut down. This episode is near the classic style with a basic dissection of the claim and the claimed evidence for Phobos being hollow.

The episode also has Q&A and feedback. The next episode has already been recorded and is an interview with Robin Canup about lunar formation models (she even has her own Wikipedia page!!!).

The episode after that is slated to be about the claim that alleged UFO-contactee Billy Meier knew about Jupiter’s rings before scientists did. I expect the comments on that post might fill up, but I’ll note now that NONE will be allowed through on this post unless someone has a suggestion for a puzzler on it.

August 1, 2013

Podcast Episode 82: How to Design a Hyperdimensional Physics Experiment


Hyper-D physics
Could be tested with a watch.
So, is Hoagland right?

This is a longer episode, over 40 minutes long. Hopefully I didn’t drone too much. The episode is based on a blog post from last May, going through how one could design an experiment IF you assume EVERY SINGLE BIT of what Richard Hoagland says about hyperdimensional physics is true. It’s meticulous. Which is why it’s long. And I show why, quite literally, Richard’s data as they are currently presented are meaningless.

And now, seriously, the next episode will be about claims made by David Sereda on the structure of … stuff. He wasn’t in this episode because I had about 40 hrs of Coast to Coast audio to listen to, and I have about 16 hrs left. So, yeah, next time.

BTW, link to the new blog is: WND Watch.

May 26, 2013

Properly Designing an Experiment to Measure Richard Hoagland’s Torsion Field, If It Were Real


Introduction

Warning: This is a long post, and it’s a rough draft for a future podcast episode. But it’s something I’ve wanted to write about for a long time.

Richard C. Hoagland has claimed now for at least a decade that there exists a “hyperdimensional torsion physics” which is based partly on spinning stuff. In his mind, the greater black governmental forces know about this and use it and keep it secret from us. It’s the key to “free energy” and anti-gravity and many other things.

Some of his strongest evidence is based on the frequency of a tuning fork inside a 40+ year-old watch. The purpose of this post is to assume Richard is correct, examine how an experiment using such a watch would need to be designed to provide evidence for his claim, and then to examine the evidence from it that Richard has provided.

Predictions

Richard has often stated, “Science is nothing if not predictions.” He’s also stated, “Science is nothing if not numbers” or sometimes “… data.” He is fairly correct in this statement, or at least the first and the last: For any hypothesis to be useful, it must be testable. It must make a prediction and that prediction must be tested.

Over the years, he has made innumerable claims about what his hyperdimensional or torsion physics “does” and predicts, though most of his predictions have come after the observation, which invalidates them as predictions, or at least renders them useless.

In particular, for this experiment we’re going to design, Hoagland has claimed that when a mass (such as a ball or planet) spins, it creates a “torsion field” that changes the inertia of other objects; he generally equates inertia with mass. Inertia isn’t actually mass, it’s the resistance of any object to a change in its motion. For our purposes here, we’ll even give him the benefit of the doubt, as either one is hypothetically testable with his tuning-fork-based watch.

So, his specific claim, as I have seen it, is that the mass of an object will change based on its orientation relative to a massive spinning object. In other words, if you are oriented along the axis of spin of, say, Earth, then your mass will change one way (increase or decrease), and if you are oriented perpendicular to that axis of spin, your mass will change the other way.

Let’s simplify things even further from that more specific claim: An object will change its mass in some direction (up or down) at some orientation relative to a spinning object. This is part of the prediction we need to test.

According to Richard, the other part of this prediction is that to actually see this change, big spinning objects have to align in order to increase or decrease the mass from what we normally see. So, for example, if your baseball is on Earth, it has its mass based on it being on Earth as Earth is spinning the way it does. But, if, say, Venus aligns with the sun and transits (as it did back in July 2012), then the mass will change from what it normally is. Or, like during a solar eclipse. This is the other part of the prediction we need to test.

Hoagland also has other claims, like you have to be at sacred or “high energy” sites or somewhere “near” ±N·19.5° on Earth (where N is an integer, and “near” means you can be ±8° or so from that multiple … so much for a specific prediction). For example, this apparently justifies his begging for people to pay for him and his significant other to go to Egypt last year during that Venus transit. Or taking his equipment on December 21, 2012 (when there wasn’t anything special alignment-wise…) to Chichen Itza, or going at some random time to Stonehenge. Yes, this is beginning to sound even more like magic, but for the purposes of our experimental design, let’s leave this part alone, at least for now.

Designing an Experiment: Equipment

“Expat” goes into much more detail on the specifics of Hoagland’s equipment, here.

To put it briefly, Richard uses a >40-year-old Accutron watch which has a small tuning fork in it that provides the basic unit of time for the watch. A tuning fork’s vibration rate (the frequency) is dependent on several things, including the length of the prongs, material used, and its moment of inertia. So, if mass changes, or its moment of inertia changes, then the tuning fork will change frequency. Meaning that the watch will run either fast or slow.
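To put a number on that premise: treating the fork as a simple harmonic oscillator (a standard first approximation, and my assumption here, not Richard’s), frequency scales as the inverse square root of the effective mass or moment of inertia. A quick sketch:

```python
import math

def fork_frequency(base_freq_hz, inertia_ratio):
    """Frequency of a tuning fork whose effective moment of inertia
    changes by inertia_ratio (new / old), stiffness held fixed.
    Assumes simple-harmonic-oscillator scaling: f proportional to
    sqrt(stiffness / inertia)."""
    return base_freq_hz / math.sqrt(inertia_ratio)

# Accutron forks hum at 360 Hz; a 1% increase in inertia would slow
# the fork to about 358.2 Hz:
print(round(fork_frequency(360.0, 1.01), 2))  # 358.21
```

So even a whopping 1% change in inertia shifts the fork by less than 2 Hz, which gives a sense of the precision the experiment would need.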

The second piece of equipment is a laptop computer, with diagnostic software that can read the frequency of the watch, and a connection to the watch.

So, we have the basic setup with a basic premise: During an astronomical alignment event, Hoagland’s Accutron watch should deviate from its expected frequency.

Designing an Experiment: Baseline

After we have designed an experiment and obtained equipment, usually the bulk of time is spent testing and calibrating that equipment. That’s what would need to be done in our hypothetical experiment here.

What this means is that we need to look up when there are no alignments that should affect our results, and then hook the watch up to the computer and measure the frequency. For a long time. Much longer than you expect to use the watch during the actual experiment.

You need to do this to understand how the equipment acts under normal circumstances. Without that, you can’t know whether it acts differently – which is what your prediction requires – during the time when you think it should. For example, let’s say that I only turn on a special fancy light over my special table when I have important people over for dinner. I notice that it flickers every time. I conclude that the light only flickers when there are important people there. Unfortunately, without the baseline measurement (turning on the light when there AREN’T important people there and seeing if it flickers), my conclusion is invalidated.

So, in our hypothetical experiment, we test the watch. If it deviates at all from the manufacturer’s specifications during our baseline measurements (say, a 24-hour test), then we need to get a new one. Or we need to, say, make sure that the cables connecting the watch to the computer are connected properly and aren’t prone to surges or something else that could throw off the measurement. Make sure the software is working properly. Maybe try using a different computer.

In other words, we need to make sure that all of our equipment behaves as expected during our baseline measurements when nothing that our hypothesis predicts should affect it is going on.

Lots of statistical analyses would then be run to characterize the baseline behavior to compare with the later experiment and determine if it is statistically different.
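Those baseline statistics are simple to compute once you have the readings. A minimal sketch with invented numbers (the 5-sigma band anticipates the significance discussion later in the post):

```python
import statistics

def characterize_baseline(readings_hz):
    """Summarize baseline frequency readings: mean, sample standard
    deviation, and the 5-sigma band outside which a later reading
    would count as anomalous. A sketch only: real analysis would
    also check for drift, temperature dependence, and periodicity."""
    mean = statistics.fmean(readings_hz)
    sigma = statistics.stdev(readings_hz)
    return mean, sigma, (mean - 5 * sigma, mean + 5 * sigma)

# Toy baseline readings scattered around 360 Hz:
mean, sigma, band = characterize_baseline(
    [359.0, 360.5, 360.0, 361.0, 359.5, 360.0])
```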

Designing an Experiment: Running It

After we have working equipment, verified equipment, and a well documented and analyzed baseline, we then perform our actual measurements. Say, turn on our experiment during a solar eclipse. Or, if you want to follow the claim that we need to do this at some “high energy site,” then you’d need to take your equipment there and also get a baseline just to make sure that you haven’t broken your equipment in transit or messed up the setup.

Then, you gather your data. You run the experiment in the exact same way as you ran it before when doing your baseline.

Data Analysis

In our basic experiment, with our basic premise, the data analysis should be fairly easy.

Remember that the prediction is that, during the alignment event, the inertia of the tuning fork changes. Maybe it’s just me, but based on this premise, here’s what I would expect to see during the transit of Venus across the sun (if the hypothesis were true): The computer would record data identical to the baseline while Venus is away from the sun. When Venus makes contact with the sun’s disk, you would start to see a deviation that would increase until Venus’ disk is fully within the sun’s. Then, it would sit at a steady value, different from the baseline, for the duration of the transit. Or perhaps it would increase slowly until Venus is deepest within the sun’s disk, then decrease slightly until Venus’ limb makes contact with the sun’s. Then you’d get a rapid return to baseline as Venus’ disk exits the sun’s, and you’d have a steady baseline thereafter.

If the change is very slight, this is where the statistics come in: You need to determine whether the variation you see is different enough from baseline to be considered a real effect. Let’s say, for example, during baseline measurements the average frequency is 360 Hz but that it deviates between 357 and 363 fairly often. So your range is 360±3 Hz (we’re simplifying things here). You do this for a very long time, getting, say, 24 hrs of data and you take a reading every 0.1 seconds, so you have 864,000 data points — a fairly large number from which to get a robust statistical average.

Now let’s say that from your location, the Venus transit lasted only 1 minute (they last many hours, but I’m using this as an example; bear with me). You have 600 data points. You get results that vary around 360 Hz, but it may trend to 365, or have a spike down to 300, and then flatten around 358. Do you have enough data points (only 600) to get a meaningful average? To get a meaningful average that you can say is statistically different enough from 360±3 Hz that this is a meaningful result?

In physics, we usually use a 5-sigma significance, meaning that, if 360±3 Hz represents our average ± 1 standard deviation (1 standard deviation means that about 68% of the data points will be in that range), then 5-sigma is 360±15 Hz. 5-sigma means that 99.99994% of the data will be in that range. This means that, to be a significant difference, we have to have an average during the Venus transit of, say, 400±10 Hz (where 1-sigma = 2 here, so 5-sigma = 10 Hz).

Instead, in the scenario I described two paragraphs ago, you’d probably get an average around 362 with a 5-sigma of ±50 Hz. This is NOT statistically significant. That means the null hypothesis – that there is no hyperdimensional-physics-driven torsion field – cannot be rejected.
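That comparison can be sketched in a few lines. Note this is only the crude version described above, comparing the event-window average against a multiple of the baseline scatter; a proper test would also account for the standard error of the event mean:

```python
import statistics

def is_significant(event_readings, baseline_mean, baseline_sigma, n_sigma=5):
    """True if the event-window average differs from the baseline
    mean by more than n_sigma baseline standard deviations."""
    event_mean = statistics.fmean(event_readings)
    return abs(event_mean - baseline_mean) > n_sigma * baseline_sigma

# Baseline of 360 Hz with 1-sigma = 3 Hz gives a 5-sigma threshold
# of 15 Hz; an event average of 362 Hz doesn't come close:
print(is_significant([362.0] * 600, 360.0, 3.0))  # False
```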

How could you get better statistics? You’d need different equipment. A tuning fork that is more consistently 360 Hz (so better manufacturing = more expensive). A longer event. Maybe a faster reader so instead of reading the tuning fork’s frequency every 0.1 seconds, you can read it every 0.01 seconds. Those are the only ways I can think of.

Repeat!

Despite what one may think or want, regardless of how extraordinary one’s results are, you have to repeat them. Over and over again. Preferably, other, independent groups with independent equipment do the repetition. One experiment by one person does not a radical change in physics make.

What Does Richard Hoagland’s Data Look Like?

I’ve spent an excruciating >1700 words above explaining how you’d need to design and conduct an experiment with Richard’s apparatus and the basic form of his hypothesis. And why you have to do some of those more boring steps (like baseline measurements and statistical analysis).

To-date, Richard claims to have conducted about ten trials. One was at Coral Castle in Florida, I think during the 2004 Venus transit; another was outside Albuquerque in New Mexico during the 2012 Venus transit. Another in Hawai’i during a solar eclipse, another at Stonehenge during something, another in Mexico during December 21, 2012, etc., etc.

For all of these, he has neither stated that he has performed baseline measurements, nor has he presented any such baseline data. So, right off the bat, his results – whatever they are – are meaningless because we don’t know how his equipment behaves under normal circumstances … I don’t know if the light above my special table flickers at all times or just when those important people are over.

He also has not shown all his data, despite promises to do so.

Here’s one plot that he says was taken at Coral Castle during the Venus transit back in 2004, and it’s typical of the kinds of graphs he shows, though this one has a bit more wiggling going on:

My reading of this figure shows that his watch appears to have a baseline frequency of around 360 Hz, as it should. The average, however, is stated to be 361.611 Hz, though we don’t know over how long that average was taken. The instability is 12.3 minutes per day, meaning it’s not a great watch.

On the actual graph, we see an apparent steady rate at around that 360 Hz, but we see spikes in the left half that deviate up to around ±0.3 Hz, and then we see a series of deviations during the time Venus is leaving the disk of the sun. But we see that the effect continues AFTER Venus is no longer in front of the sun. We see that it continues even more so than during Venus’ egress, and more than when Venus was in front of the sun. We also see that the rough steady rate when Venus is in front of the sun is the same frequency as the apparent steady rate when Venus is off the sun’s disk.

From the scroll bar at the bottom, we can also see he’s not showing us all the data he collected, that he DID run it after Venus exited the sun’s disk, but we’re only seeing a 1.4-hr window.

Interestingly, we also have this:

Same location, same Accutron, some of the same time, same number of samples, same average rate, same last reading.

But DIFFERENT traces that are supposed to be happening at the same time! Maybe he mislabeled something. I’d prefer not to say that he faked his data. At the very least, this calls into question A LOT of his work in this.

What Conclusions Can Be Drawn from Richard’s Public Data?

None.

As I stated above, the lack of any baseline measurements automatically means his data are useless because we don’t know how the watch acts under “normal” circumstances.

That aside, looking at his data that he has released in picture form (as in, we don’t have something like a time-series text file we can graph and run statistics on), it does not behave as one would predict from Richard’s hypothesis.

Other plots he presents from other events show even more steady state readings and then spikes up to 465 Hz at random times during or near when his special times are supposed to be. None of those are what one would predict from his hypothesis.

What Conclusions does Richard Draw from His Data?

“stunning ‘physics anomalies’”

“staggering technological implications of these simple torsion measurements — for REAL ‘free energy’ … for REAL ‘anti-gravity’ … for REAL ‘civilian inheritance of the riches of an entire solar system …’”

“These Enterprise Accutron results, painstakingly recorded in 2004, now overwhelmingly confirm– We DO live in a Hyperdimensional Solar System … with ALL those attendant implications.”

Et cetera.

Final Thoughts

First, as with all scientific endeavors, please let me know if I’ve left anything out or if I’ve made a mistake.

With that said, I’ll repeat that this is something I’ve been wanting to write about for a long time, and I finally had the three hours to do it (with some breaks). The craziness of claiming significant results from what – by all honest appearances – looks like a broken watch is the height of gall, ignorance, or some other words that I won’t say.

With Richard, I know he knows better, because it’s been pointed out many times what he needs to do to make his experiment valid.

But this also gets to a broader issue of a so-called “amateur scientist” who may wish to conduct an experiment to try to “prove” their non-mainstream idea: They have to do this extra stuff. Doing your experiment and getting weird results does not prove anything. This is also why doing science is hard and why maybe <5% of it is the glamorous press release and cool results. So much of it is testing, data gathering, and data reduction and then repeating over and over again.

Richard (and others) seem to think they can do a quick experiment and then that magically overturns centuries of "established" science. It doesn't.

March 16, 2013

Podcast #68: Expat in Hoaglandia – A Fantasia of NASA Conspiracies


This episode is just 6 seconds short of a full hour. I interview Expat – who was my first guest ever back in Episode 10 – about numerous political and technological conspiracies of Richard Hoagland as generally applied to NASA. I learned quite a bit during this interview, and I hope that you do, too, and find it interesting as well.

There’s a quick New News item at the end, but all the other segments are skipped so as not to detract from Expat.

Upcoming episodes that I mentioned at the end include: the True Color of Mars, the Ringmakers of Saturn, 2012 Doomsday Revisited, a Young-Earth Creationist suing NASA, and a Nancy Leider clip show.

March 2, 2013

Podcast #67: Russian Meteor Conspiracies


I first said I wouldn’t do it, then I did it: Chelyabinsk meteor conspiracies! The episode is just a tad longer than the last one at a bit under 25 minutes.

Besides setting the scene and going over what’s really known about the meteor, I talk about the coincidence of time; the coincidence of location; the conspiracies of missiles, UFOs, and Planet X; whether it was sent by some p—ed off deity; and the unfortunate scam that’s cropped up.

Besides all that, there’s a bit of feedback that lends itself to one of the (yes, of the two!!) puzzlers. And a quick announcement or two (depending on how you count ‘em) rounds out the episode. One of those announcements is that I will only be doing two episodes this month. Somehow I managed to put out 4 last month despite writing 3 grants, but this month is just insane along with 8 days of travel in the latter half. Sorry.

Remember that Expat will be on the next episode talking about some of the conspiracies related to politics, secrets, and engineering of Richard C. Hoagland. If you have something you really want me to ask him, feel free to send it in (or comment below).

December 9, 2012

New Blog Added to Blogroll — Interpose Mission


Quick post to mention that I’ve added a new blog to my fairly short “blogroll” list off to the side of every page, if you scroll down far enough past the monthly archive links. For those who like my Richard C. Hoagland -related posts, this blog’s for you: Interpose Mission.

The blog is different from Expat’s Dork Mission: The Emoluments of Mars, in that (a) Julian gives his real name, (b) it’s less snarky (so far … but we all end up deteriorating after a while), and (c) it goes into a bit more detail about why ol’ Richard’s yarns are poorly spun and fraying throughout.

So far, he’s only done three posts. The first was an introductory post, similar to most people’s, and then the next two were about Richard’s long-standing claim that Mars’ moon Phobos is an artificial spaceship, and that Curiosity has found the ruins of apartment buildings on Mars.

I’ve added it to my RSS feed, and if you “like” more of Hoagland’s “ideas,” I suggest you do the same.

December 2, 2012

Richard C. Hoagland Sees Pink Energy Beam that’s “Proved to NOT Be a Hoax”


Introduction

Richard C. Hoagland, the official Coast to Coast AM science advisor (shudder), was on C2C last night for the first time with new-to-2011 host John B. Wells.

Among the images that Richard provided for the audience is the one below, and the caption was taken directly from the C2C website:

Hoagland's Energy Pyramid

A set of tourist’s photographs [of the famed Kulkulkan Pyramid at Chitzen Itza] taken last year (and, after investigation, proved to NOT be a hoax), showed an amazing beam of pink energy emanating from the top of the pyramid during an afternoon thunderstorm, apparently triggered by the electric fields of nearby lightning.

By the way, Richard, that’s Kukulkan, not Kulkulkan.

Lunar Ziggurat Redux?

The fact that he’s presenting this image as genuine is one thing. But the further fact that he writes in there, “and, after investigation, proved to NOT be a hoax,” is icing. I mean, seriously? Even the lunar ziggurat looked more genuine than this does.

Some digging shows that this was not taken “last year,” but back in 2009. July 24, 2009, actually — or at least as far as I can tell. In fact, at that link, we can get a higher-resolution version:

Original?

Manipulation Between Hoagland’s and the “Original”

One can already see that either Hoagland or someone else before it got to Hoagland had manipulated the image: Contrast had been increased, colors saturated, and things overall darkened except for the beam (look at the clouds to the lower left of the pyramid in Hoagland’s versus the 2009 version — they’re darker but the beam is lighter). One can easily do this with a Curves and a Saturation layer in Photoshop or similar graphics software.

It also appears as though a rectangular area of grass around the girls has been lightened relative to the surrounding grass. In the 2009 version, the grass is of generally uniform luminosity (brightness). On a brightness scale of 0-255, the G (green) value is around 40-50 for the grass throughout the image. But in Hoagland’s, the G is around 40-45 on the periphery of the image, but 50-60 near the girls — about 20% brighter.

In fact, this extends not just to the grass, but to the sky surrounding the steps of the pyramid itself. In the sky to the right of the “original” version from Flickr, the greyscale value is roughly 105-115. In Hoagland’s version, it’s around 140-150 until you get close to the pyramid, and there it’s 160-175 or so.
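Anyone can reproduce this kind of region comparison; here’s a sketch of the idea on a toy pixel grid (the numbers are invented to mirror the grass example, not sampled from the actual photos):

```python
def region_mean(pixels, top, left, height, width):
    """Average value in a rectangular patch of a 2-D grid of
    channel values (0-255)."""
    total = count = 0
    for row in pixels[top:top + height]:
        for value in row[left:left + width]:
            total += value
            count += 1
    return total / count

# Toy "grass": periphery near 42, with a central patch lightened to 55
grid = [[42] * 10 for _ in range(10)]
for r in range(4, 7):
    for c in range(4, 7):
        grid[r][c] = 55

print(region_mean(grid, 0, 0, 10, 2))  # periphery: 42.0
print(region_mean(grid, 4, 4, 3, 3))   # central patch: 55.0
```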

I’m NOT saying that it was Richard Hoagland who made these changes. What I am saying is that it’s possible he did, but it’s definite that someone did between the 2009 version and what he presented last night.

Manipulation of the Original to the “Original”

There has also been some clear manipulation of the original that was posted to the Flickr stream that I’m calling the “original” that’s the real subject of debate.

The most obvious manipulation is to the left-hand side. It’s smeared. This is not something that can be caused by movement when holding the camera — if the camera moved, then the whole image would be blurry, not just the left 10% or so. This has nothing to do with the beam, but it clearly shows that the image shown in 2009 was NOT original as-taken-by-the-camera.

Then there’s of course the obvious — the “beam” of pink. I can’t prove it’s fake without the original or without a confession; all I can do is present a case that it’s more likely to be fake than real. Other than it looking fake, it defies some basic assumptions. Well, one mainly: It’s straight up-and-down with the image, but the image is not straight up-and-down relative to gravity.

What I mean is that the person who took the photo did not have the camera exactly vertical, it was tilted by a few degrees clockwise. You can tell this by the pyramid looking tilted and by measuring the should-be vertical walls up to the top of the pyramid — they’re tilted by a few degrees.

The beam, however, is not. It is exactly vertical relative to the edges of the photograph. That means that, if this were real, the beam would not be vertical in the frame; it would have been tilted. Even the anti-aliasing (the slight shading as you transition from the beam to the sky) is exactly vertical, the same column of pixels up and down. That very strongly implies that someone made a rectangle in Photoshop, filled it with a gradient, and set the blending mode to color. Or some similar process.
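That axis-alignment argument is easy to check numerically. A sketch on a toy image, where a pasted-in rectangle produces an edge in exactly the same pixel column on every row (a real check would scan the actual photo):

```python
def edge_columns(pixels, threshold):
    """For each row, record the first column brighter than
    threshold (the left edge of a bright 'beam'). A single-element
    result means the edge is perfectly axis-aligned, which is
    suspicious in a tilted, handheld photo."""
    edges = set()
    for row in pixels:
        for col, value in enumerate(row):
            if value > threshold:
                edges.add(col)
                break
    return edges

# A fake beam pasted as a rectangle: same edge column in every row
img = [[10] * 6 + [200] * 2 + [10] * 2 for _ in range(8)]
print(edge_columns(img, 100))  # {6}
```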

In fact …

Look! A Beam of Yellow Greed Energy Shooting from the US Capitol Building!

Beam of Yellow Greed Energy from the US Capitol Building

Looks about as genuine as the pink energy beam from the Kukulkan pyramid.

Moving On …

Searching around the internet finds that this has been discussed before. For example, there’s a Project Avalon Forum thread on the topic from 2010 that’s three pages long. These are people who really want to believe this is real. But even some of them are having issues with it. Things that jump out at me as highly suspect:

  • Originals have not been released for independent analysis.
  • The tourists claim that they did not see it by eye, only in the photo¹.
  • Can’t find the original supposed photographer(s).
  • EXIF data (metadata) on the image which people were using to claim it’s original can be easily changed with software.
  • One of the original proponents/presenters has a history of hoaxing.
  • It was presented by a UFO researcher as part of claims for UFO evidence, not Richard’s would-be pink hyperdimensional energy triggered by a thunderstorm.
  • The photo was allegedly studied by “experts,” but who those people are and what experience they have is never mentioned nor referenced.

And that’s from a 10-minute read of the forum thread. Additional discussion here and here.

¹This is a big red flag (as with “ghost orbs” and other stuff). Cameras are designed to mimic the human eye. It wouldn’t do well for them to image things the eye can’t see because people take photos wanting to remember what they saw. To miss a giant pink energy beam strains credulity. The idea of, “Well, maybe it was just really brief and they managed to catch it in this photo!” also strains credulity because of the requisite timing — they’d have to somehow be lucky enough to click that iPhone shutter button at that exact moment of the beam they didn’t see with their eye, and that (likely) 1/100th of a second happened to coincide with this incredibly brief burst of energy.

Final Thoughts

As with the ziggurat, I am NOT stating that Richard made this himself, that he hoaxed it. I’m also NOT stating with 100% certainty that this is a hoax.

What I am saying boils down to three primary items:

1. The version of the image that Richard presented last night shows significant manipulation from the “original.” Someone must have done the manipulation, and it may have been him.

2. The “original” version shows several red flags that indicate image manipulation and that the “beam” was placed in the image after it was taken with software. It is also easily duplicated in basic image processing software.

3. The original is not available for independent analysis, and the custody history of the photo raises numerous red flags.

In private conversation, I’d say this is clearly fake. In pure objective discussion, I present you with the above, and I think that the most likely explanation is that this was hoaxed by someone and is not a real phenomenon.

That Richard presents it as “proved to NOT be a hoax” shows – as I said in my original post on the lunar ziggurat – that (in my opinion) Richard C. Hoagland is incompetent with image analysis. If he or someone else were to explain the above red flags with something plausible, I’m all ears.

August 28, 2012

Dynamic Range and Shadows


Introduction

Part three of four posts in response to Michael Bara’s five-part post that allegedly destroys my arguments that the ziggurat on the moon is not real. Next post is already written (mostly) and will come out shortly, wrapping things up.

Dynamic Range

I really think I’ve covered this enough by this point, but I’ll do it briefly again.

Below is the “original” ziggurat image that Mike has linked to. Below that is a histogram of its pixel values. Note that this looks slightly different from what Photoshop will show the histogram to be. That’s because Photoshop fakes it a teensy bit. This histogram was created using very rigorous data analysis software (Igor Pro) and shows a few spikes and a few gaps in the greyscale coverage:

Original Lunar Ziggurat Image from Call of Duty Zombies Forum


Histogram of Pixel Values in Original Ziggurat Image

The dynamic range available for this image is 8-bit, or 0 through 2^8-1 = 255, i.e. 256 shades of grey (or 254 plus black plus white — semantics). The actual dynamic range the image covers is less than this — its range is only 12 through 169, or 158 shades of grey — just a little over 7-bit.
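If you extract the pixel values yourself, the span is trivial to verify. A sketch on a stand-in list of values (reading the actual image file is left out here):

```python
def dynamic_range(pixels):
    """Minimum, maximum, and number of grey levels spanned
    (inclusive) by a list of 8-bit pixel values."""
    lo, hi = min(pixels), max(pixels)
    return lo, hi, hi - lo + 1

# Stand-in for the ziggurat image's occupied range, 12 through 169:
sample = list(range(12, 170))
print(dynamic_range(sample))  # (12, 169, 158)
```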

Compare that with the NASA image (whether you think the NASA image has been tampered with or not, that’s unimportant for this explanation), shown below. Its histogram spans values from 0 through 255, showing that it takes up the entire 8-bit range.

“Ziggurat” Area in NASA Photo AS11-38-5564


Histogram of Pixel Values in Original NASA Image of Ziggurat Location

The immediate implication is that the ziggurat version has LOST roughly half of its information, its dynamic range. Or, if you’re of the conspiracy mindset, then the NASA version has been stretched to give it 2x the range.

Another thing we can look at is those spikes in the dark end and the gaps in the bright end. I was honestly surprised that these were present in the NASA one, because what this shows is that the curves (or levels) have been adjusted (and I say that with full realization of its ability to be quote-mined). The way you get the spikes is when you compress a wide range of shades into a narrower range. Because pixels must have an integer (whole number) value, rounding effects mean that you’ll get some shades with more pixels than others.

Similarly, the bright end has been expanded. This means the opposite – you had a narrow range of shades and those were re-mapped to a wider range. Again, due to rounding, you can get some values with no pixels in them.

This can be done manually in software, or it can also be done automatically. Given the spacing of them, it looks like a relatively basic adjustment has been made rather than any more complicated mapping, for both the Call of Duty Zombies image with the ziggurat and NASA’s.
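The gap-and-spike fingerprint can also be detected programmatically. A sketch that counts empty and over-full grey levels inside the occupied range; the 2x threshold defining a “spike” is my illustrative choice, not a standard:

```python
from collections import Counter

def histogram_artifacts(pixels):
    """Count gaps (grey levels holding zero pixels) and spikes
    (levels holding more than twice the uniform expectation) inside
    the occupied range of a list of 8-bit pixel values."""
    counts = Counter(pixels)
    lo, hi = min(pixels), max(pixels)
    expected = len(pixels) / (hi - lo + 1)
    gaps = sum(1 for v in range(lo, hi + 1) if counts[v] == 0)
    spikes = sum(1 for v in range(lo, hi + 1) if counts[v] > 2 * expected)
    return gaps, spikes

# Stretching a narrow range by 2x leaves every other level empty:
stretched = [v * 2 for v in range(100) for _ in range(10)]
print(histogram_artifacts(stretched))  # (99, 0)
```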

The fact that BOTH the ziggurat one and the NASA one have these gaps and spikes is evidence that both have been adjusted brightness-wise in software. But, taken with the noise in the ziggurat one, the smaller dynamic range, and the reduced detail, these all combine to make the case for the ziggurat version being a later generation image that’s been modified more than the NASA one (see previous post on noise and detail — this section was originally written for that post but I decided to move it to this one).

Dark Pixels, Shadow, and Light

What is also readily apparent in the NASA version is that there are many more black pixels in the region of interest. This could mean several very non-conspiracy things (as opposed to the “only” answer being that NASA took a black paintbrush to it).

One is what I have stated before and I think is a likely contributor: The image was put through an automatic processing code either during or after scanning, before being placed online. As a default in most scanning software, a histogram of the pixel values is created, the darkest 0.1% of pixels are made shade 0, and the brightest 0.1% are made shade 255. Sometimes, for some reason, this default is set to 1% instead, though it is usually also manually adjustable.
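Hypothetically, that auto-levels step might look something like the sketch below. This is the general idea only, not any particular scanner’s actual algorithm:

```python
import numpy as np

def auto_clip(img, clip_percent=0.1):
    """Mimic a scanner's default auto-levels: the darkest clip_percent
    of pixels become shade 0, the brightest become shade 255, and
    everything in between is stretched linearly."""
    lo = np.percentile(img, clip_percent)
    hi = np.percentile(img, 100.0 - clip_percent)
    stretched = (img.astype(float) - lo) / max(hi - lo, 1.0) * 255.0
    return np.clip(np.round(stretched), 0, 255).astype(np.uint8)

# A flat, murky scan whose shades only span 10-30 out of 0-255:
rng = np.random.default_rng(1)
scan = rng.integers(10, 31, size=(100, 100), dtype=np.uint8)
clipped = auto_clip(scan)
print(scan.min(), scan.max(), "->", clipped.min(), clipped.max())
```

Note what this does to already-dark shadows: anything at or below the dark clip point gets slammed to pure black, with no detail left to recover.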

Another part of this that I think is most likely is that, as I’ve said before, shadows on the moon are very dark. A rough back-of-the-envelope calculation is that earthshine, the only “direct” light into some sun-shadowed regions on the near side, is around 1000x fainter than sunlight would be. On the far side – and these photos are from the far side – there is no earthshine to contribute.

Which means the only other way to get light into the shadowed region would be scattering from the lunar surface itself. Mike misreads several things and calls me out where I admitted to making a mistake in my first video (Mike, how many mistakes have you made in this discussion? I’ve called you out on two very obvious ones in previous posts, and I call you out on another, below). Yes, you can get scattered light onto objects that are in shadow. If you have a small object casting a small shadow (such as a lunar module), then you have a very large surface surrounding it that will scatter relatively a lot of light into it. That’s why the Apollo astronauts are lit even when they are in the shadow of an object.

However, if you have a very large object – such as a 3-km-high crater rim – that casts a shadow – such as into the crater – then there is much less surrounding surface available to scatter light into the shadowed region. Also, remember that the moon reflects (on average) only about 10% of the light it receives*. So already any lunar surface that’s lit only by scattered light would be 10x fainter than the sun-lit part, and that’s assuming that ALL light scattered off the sun-lit lunar surface scatters into the shadowed parts to be reflected back into the camera lens, as opposed to the vast majority of it that just gets scattered into space.

*As opposed to Mike’s claim: “Since the lunar surface is made mostly of glass, titanium and aluminum, it tends to be very highly reflective.” Um, no (source 1, source 2).

Now, yes, there will still be some light scattered into the shadowed region, but it will be very little, relatively speaking, compared with the shadow of a small object, and it will be even less, relatively speaking, when compared with the sun-lit surrounding surface. For example, let’s look at AS11-38-5606:

Apollo Image AS11-38-5606

This image was taken at a low sun angle, and there are a lot of shadows being cast. And look! They’re all very very black. The photographic exposure would need to be much longer in order to capture any of the minuscule amount of light scattered into the shadowed regions that were then scattered into the camera.

Now, before we go back to the ziggurat, let’s look at another part of this claim. Mike states: “I have seen hundreds, if not thousands, of lunar images where the shadows are far from “pitch-black (or almost pitch-black).””

In support of this, Mike points to images such as AS11-44-6609:

NASA Apollo Photo AS11-44-6609

If you go to the full resolution version, you do see that the shadowed regions are not pitch black! WTF is going on!?

First, if you check the levels in Photoshop, the 0.1% clip has either already been applied or it was never relevant to this image. So this does not falsify my previous statement that the clip is a possible cause of the black shadows in the “ziggurat” one.

Second, let’s look at a few photos later, AS11-44-6612:

NASA Apollo Photo AS11-44-6612

See that big crater up to the top? That’s the same one that’s near the middle-right in #-6609. Notice that instead of having a greyscale equivalent of around 25%, this time that very same shadow, taken just a few seconds or minutes later but at a different angle and through a different part of the lens, has decreased in brightness by over half. Meanwhile, shadows that are in roughly the same position of the frame (as in middle-right versus upper-middle) have a similar brightness as that shadow did in #-6609.

Also, look at the black space above the lunar surface (the right of the frame unless you’ve rotated it). The part of the sky near the top and bottom is ~5% black. The part near the middle is around 13% black. Or, 2-3x as bright, when space should be completely dark in this kind of exposure under ideal optics.

If you’re a photographer, you probably know where I’m going with this: The simplest explanation is that this is either a lens flare from shooting in the general direction of the sun, and/or this is grime on the lens causing some scattering. Less probable but still possible would be a light leak.

And, a closer examination of the shadowed areas does show some very, very faint detail that you can bring out, but only towards the middle of the image where that overall glow is.

Meanwhile, if you look through, say, the Apollo 11 image catalog and look at the B&W images, the shadows in pretty much every orbital photo are completely black. The shadows in the color ones are not.

As a photographer, this is the most likely explanation to me to explain AS11-44-6609 and images like it where Mike points to shadows that are lit:

  1. Original Photography:
    • Image was taken in the general direction of the sun so that glare was present.
    • And/Or, there was dirt on the lens or on the window through which the astronauts were shooting.
    • This caused a more brightly lit part of the image to be in a given location, supported by other images on the roll that show the same brightness in the same location of the frame rather than the same geographic location on the moon.
    • Some scattered light from the lunar surface, into the shadowed regions, off the shadowed regions, into the camera, was recorded.
  2. Image Scanning:
    • Negative or print was scanned.
    • Auto software does a 0.1% bright/dark clip, making the darkest parts black and brightest parts white. This image shows that effect in its histogram.
    • This causes shadows at the periphery to be black and show no detail.
    • Since the center is brighter, there’s no real effect to the brightness, and the very faint details from the scattered light are visible.

Contrast that with AS11-38-5564 (the ziggurat one), which has even illumination throughout. A simple levels clip would eliminate all or almost all detail in the shadowed regions. And/or, the original exposure was somewhat too short to record any scattered light. And/or the film used was not sensitive enough, which is bolstered as a potential explanation by what I noted above – that orbital B&W photography from the mission shows black shadows while orbital color shows a teensy bit of detail in some of the shadows.

In my opinion, that is a much more likely explanation given the appearance of the other photos in the Apollo magazines than what Mike claims, that NASA painted over it.

Which after long last brings us back to the ziggurat. Even in Mike’s exemplar, the stuff in the brightest shadow is BARELY visible, much less so than the wall of his ziggurat. I suppose if Mike wants to claim that the ziggurat walls are 100% reflective, plus someone has done a bleep-load of enhancement in the area, then sure, he can come up with a way for the walls to be lit even when they are in shadow.

Do I think that’s the most likely explanation, especially taken in light of everything else? No.

Final Thoughts on This Part

One more part left in this series, and by this point I’ve really addressed the main, relevant points in Mike’s five-part series.

Far from “destroying” my arguments, I think that at the very, very most, he’s raised some potential doubt about one or two small parts of my argument. Taken individually, if one is conspiracy-minded and already believes in ancient artifacts on the moon, those doubts could be used to make it look like the ziggurat is real.

However, taken as a whole, and with less of a conspiratorial mindset – one where you must provide extraordinary evidence for your extraordinary claim and show that the null hypothesis is rejected by a preponderance of indisputable evidence – the ziggurat is not real.

August 24, 2012

Let’s Talk About Image Noise and Detail


Introduction

Part 2 of N in my response to Mike Bara’s 5-part post on the lunar ziggurat stuff.

I’ve talked about these things before a couple times, including in my last podcast episode, but clearly some did not understand it and some did not clearly read what I stated. So let’s go through this very carefully.

These are important concepts applicable to a wide variety of situations – not only in identifying pseudoscience, but also in understanding how digital images work, and the likelihood that you, currently reading this, have a digital camera is pretty high.

Image Noise, Gaussian

I’ll quote first from a previous podcast episode:

All photographs have an inherent level of noise because of very basic laws of thermodynamics — in other words, the fact that the atoms and molecules are moving around means that you don’t know exactly what data recorded is real. The colder you can get your detector, the less noise there will be, which is why astronomers will sometimes cool their CCDs with liquid nitrogen or even liquid helium.

That said, I haven’t really explained what noise is, and I’m going to do so again from the digital perspective. There are two sources of noise. The first is what I just mentioned, where the atoms and electrons moving around will sometimes be recorded as a photon when there really wasn’t one. The cooler the detector, the less they’ll move around and so the less they’ll be detected. This is purely random, and so it will appear in some pixels more than others and so you don’t know what’s really going on.

The other kind of noise is purely statistical. The recording of photons by digital detectors is a statistical process, and it is governed by what we call “Poisson Statistics.” That means that there is an inherent, underlying uncertainty where you don’t know how many photons hit that pixel even though you have a real number that was recorded. The uncertainty is the square-root of the number that was recorded.

… What’s the effect of noise when you don’t have a lot of light recorded? Well, the vast majority of you out there listening to this probably already know because you’ve taken those low-light photos that turn out like crap. They’re fuzzy, the color probably looks like it has tiny dots of red or green or blue all over it, and there’s little dynamic range. That’s a noisy image because of the inherent uncertainty in the light hitting every pixel in your camera, but so that it wasn’t completely dark, your camera multiplied all the light – the noise included – in order to make something visible.

With the idea of noise in mind, after an image is taken, there is only one way to scientifically reduce the noise without any guesswork based on a computer algorithm: Shrink it. When you bin the pixels, as in doing something like combining a 2×2 set of four pixels into one, you are effectively adding together the light that was there, averaging it, and so reducing the amount of noise by a factor of 2. …

Noise is random across the whole thing, and it makes it look grainy. A perfectly smooth, white surface could look like a technicolor dust storm if you photograph it under low light.
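The square-root (Poisson) part of this is easy to check numerically. A quick simulated-exposure sketch (the photon counts here are made-up illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate 100,000 exposures of a single pixel that receives, on average,
# mean_photons photons. The scatter (noise) in the recorded counts comes
# out very close to sqrt(mean_photons), as Poisson statistics predict.
for mean_photons in (100, 10_000):
    counts = rng.poisson(mean_photons, size=100_000)
    print(mean_photons, round(counts.std(), 1), round(mean_photons ** 0.5, 1))
```

Notice the relative noise: about 10% of the signal at 100 photons but only 1% at 10,000. That is exactly why low-light photos look crappy and well-exposed ones don’t.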

Now with diagrams!

Below is a 500 by 500 pixel image made of pure, random, Gaussian noise. I created the noise in software and gave it a mean of 128 (neutral grey in 8-bit space) and a standard deviation of 25, meaning that about 68% of the pixels will be within ±25 shades of 128, about 95% will be within ±50 shades, and about 99.7% will be within ±75 shades. Also included below is a histogram showing the number of pixels at each shade of grey. As you can see, it’s a lovely bell curve that we all know and love with a mean of 128 and standard deviation of 25 (actual standard deviation is 24.946, but that’s because we’re not using an infinite number of points).

500×500 Pixel Image of Gaussian Noise


Histogram of 500×500 Pixel Image of Gaussian Noise

Now, in the diagram below, I’ve binned everything 2×2. As in, it’s now 250 by 250 pixels. What happens to the noise?

250×250 Pixel Image of Gaussian Noise


Histogram of 250×250 Pixel Image of Gaussian Noise

The distribution of pixel values is still a bell curve, but it’s narrower. The mean is still 128. But, the width of the noise – the amount of noise – has decreased to 12.439 … very close to the theoretical decrease of 2x to 12.5.

Now, bin it 4×4:

125×125 Pixel Image of Gaussian Noise


Histogram of 125×125 Pixel Image of Gaussian Noise

The Gaussian distribution is narrower still; this time its width is 6.193, very close to the theoretical value of a 4x reduction, 6.25.

When I select a 100 by 60 pixel region of shadow in the ziggurat image, the width of the noise is ±1.66 shades. Binning 2×2 and it drops to 1.58, 3×3 drops to 1.41, 4×4 drops to 1.33, 5×5 drops it to 1.29, and 10×10 drops it to 0.87.

So, that’s what random noise is, and that’s what happens when you reduce an image in size – you reduce the noise. This is an unambiguous and inalienable FACT.
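If you want to replicate the binning experiment yourself, here is a NumPy sketch. The mean, standard deviation, and image size match the setup above; the random seed is arbitrary, so the exact measured widths will differ slightly from the figures:

```python
import numpy as np

rng = np.random.default_rng(3)

# 500x500 pixels of pure Gaussian noise: mean 128, standard deviation 25.
noise = rng.normal(loc=128.0, scale=25.0, size=(500, 500))

def bin_image(img, n):
    """Average every n-by-n block of pixels down to a single pixel."""
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

# Binning n x n averages n^2 pixels, so the noise drops by a factor of n.
for n in (1, 2, 4):
    binned = bin_image(noise, n)
    print(f"{n}x{n}: measured width {binned.std():.3f}, theory {25 / n:.3f}")
```

The 2×2 result lands near 12.5 and the 4×4 result near 6.25, just like the histograms above.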

Image Noise, “Salt & Pepper” and Texture

Another type of noise is simply defective pixels, or, in the analog days, defective film grains or cosmic rays hitting the film. These manifest as single, individual pixels scattered throughout the image that are either very bright or very dark relative to their surroundings.

A related kind of noise is from digitized printed photos, and this is a texture. If you’ve ever scanned in something like a 100-year-old photograph (or a poorly stored 10-year-old photograph), you’ve likely seen this kind of noise. In fact, Mike says that this is his working hypothesis as to why the shadowed regions aren’t one solid color now: Photo album residue. Um, even if that’s the case, this is still technically noise because it’s masking the signal.

Image Noise, Removing

As I’ve stated, reducing an image size is one way to reduce noise. It does, however, remove detail. The reason this whole thing got started was that Mike stated, quite directly: “What Mr. Robbins didn’t tell you is that a large chunk of the “noise” that appears in the image he “processed” was deliberately induced – by him. … In fact, anyone who knows anything about image enhancement knows that scaling/reducing an image induces more noise and reduces detail by design.” (emphasis his)

We’ll get to what detail is in the next section, but quite clearly and directly, Mike states that reducing an image in size creates noise. That statement is factually incorrect. In his latest post (part 2 of 5), he wants to know why I reduced the image size at all if it means reducing detail (which is talked about below). If he bothered to read in context, the reason was so that I could line up the ziggurat image with the NASA one to figure out exactly where it is. They weren’t at the same scale, so one had to be scaled relative to the other. It was easier to reduce the size of the smaller ziggurat image than increase the size of the much larger full image, so that’s what I did. It really doesn’t change much of anything.

Anyway, moving on … So, how do you remove noise without removing information that’s there? In reality, you cannot.

The method of reducing an image in size is one way, but clearly that will remove detail, and when you do this with a small image, you don’t necessarily have that detail to spare. Though as I’ve talked about before, astronomers will often use this method because it is the ONLY way to NOT introduce algorithm-generated information into the image.

Otherwise, there are several other methods that can be used to reduce the noise, but all of them will reduce the actual signal in the image to some extent. Depending on the exact algorithm and the exact kind of image you’re working with (as in, is it something like a forest versus clouds versus sand), different algorithms work better to preserve the original detail. But, you will always lose some of that detail.

One algorithm that’s easy to understand is called a “median” algorithm. This is an option in Photoshop, but it’s not the default “Reduce Noise” filter (I do not know and couldn’t easily find what the algorithm used by Photoshop is by default – it’s probably some proprietary version of a fancier algorithm). The median method takes a pixel and a window of pixels around it. Let’s just say 1 pixel around it to keep this simple.

So you have a pixel, and you have all the pixels that it touches, so you have 9 pixels in total. You then take the median value, which is the middle number of a sorted list. So if the pixels in your 3×3 block have values 105, 92, 73, 95, 255, 109, 103, 99, 107, then the median of those is 103 because that’s the middle number once you sort the list. You’d save that to the new version.

You would then move one pixel over in the original version and save the median of a 3×3 block with that one at the center to the new version. And so on.

Why median instead of average? Because that way hot pixels and dead pixels don’t affect you nearly as much. That pixel value of 255 would be a hot pixel in that 3×3 block, and it would drag the average up to about 115, whereas the median, 103, is roughly 10% dimmer. If, say, the 109-valued pixel were also hot, and it was 255, the median would STILL be 103, but the average would now be about 132.

So that’s one method. The end result is that the outliers will be removed, and you’ve reduced the noise. Choosing a larger window reduces the noise more because you’re sampling a broader range of pixels from which to get a median (this is under the assumption that the number of hot and cold pixels is less than the number of good pixels).
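Here is a bare-bones sketch of that median method, using the nine example values from above. This illustrates the general algorithm only – it is not Photoshop’s actual filter:

```python
import statistics

# The nine pixel values from the 3x3 block in the example above:
block = [105, 92, 73, 95, 255, 109, 103, 99, 107]
print(statistics.median(block))  # 103: the hot pixel (255) barely matters
print(statistics.mean(block))    # ~115.3: the mean gets dragged upward

def median_filter_3x3(img):
    """Bare-bones 3x3 median filter over a list-of-lists image.
    Edge pixels are left unchanged for simplicity; real filters
    pad or mirror the borders instead."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[yy][xx]
                      for yy in (y - 1, y, y + 1)
                      for xx in (x - 1, x, x + 1)]
            out[y][x] = statistics.median(window)
    return out
```

Run this over a flat grey patch with a single hot pixel and the 255 gets replaced by its neighbors’ value – which is also exactly why a lone real star in a night shot gets eaten along with the noise.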

But, in doing this, you are changing the information there, and every algorithm with which I’m familiar to remove noise will also remove some details. The details to go first are usually those small outliers that are real, like if you’re photographing a night scene and have some stars in your shot. Median noise reduction will remove those stars fairly effectively in addition to the noise. As I said, there are other algorithms that can be used depending on what exactly is in the image, but they will change the information that is there, and they will reduce detail by a measurable amount.

It should be noted that Mike’s default seems to be the Photoshop “Reduce Noise” filter. Here’s the result when he runs it on the image, ©his blog, with the “original” for comparison first:

Original Lunar Ziggurat Image from Call of Duty Zombies Forum

AS11-38-5564, with Ziggurat, Noise Reduction by Mike Bara

Ignoring the contrast enhancement, some of the noise is reduced a bit, but so is some of the detail (which is something to which he admits (“It’s a bit blurry”)). Once you lose that detail, you cannot get it back. Well, unless you go to a previous version.

Detail, Resolution, and Pixel Scale

Noise is not at all related to detail except in its ability to obfuscate that detail. Detail is effectively the same as resolution, which my handy built-in Mac dictionary defines for images as: “the smallest interval measurable by a scientific (esp. optical) instrument; the resolving power. The degree of detail visible in a photographic or television image.”

Pixel scale is similar and related — it is the length in the real world that a pixel spans. So if I take a photograph of my room, and I take another photograph with the same camera of the Grand Canyon, the length that each pixel covers in the first is going to be much smaller than the length that each pixel covers in the second. The pixel scale might be, say, 1 cm/px (~1/2 inch) for the photo of my room, while it might be around 10 m/px (~30 ft) for the photo of the Grand Canyon.

Don’t see the difference? It’s really subtle. Here’s a comment I got from an anonymous reviewer (whom I figured out who it was) of a paper I wrote last year that explains it in a way only an older curmudgeony scientist can:

Citing “resolution” in m/pixel is like citing distance in km/s. Scale = length/pixel; resolution = length, as is a function of several parameters in addition to sampling scale. Nearly everyone in the planetary community gets this wrong, which makes the terrestrial remote sensing community think we’re idiots.

So, my point in going through these definitions, besides getting them clearly out there, is that if you are reducing an image in size to reduce the noise, you are obviously also reducing the detail, resolution, and pixel scale. Or is it increasing the pixel scale ’cause your pixels now cover a larger area? Whatever the proper direction is, you get the idea, and to suggest that I implied or stated otherwise is wrong.
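To make the scale part concrete, the arithmetic behind the room versus Grand Canyon example is just real-world length divided by pixel count. The 4000-pixel width here is a made-up number for illustration:

```python
# Pixel scale is just real-world length divided by the pixels spanning it.
# Made-up numbers: the same hypothetical 4000-pixel-wide camera shooting
# a 40 m room wall versus a 40 km stretch of the Grand Canyon.
room_scale = 40.0 / 4000        # 0.01 m/px, i.e. ~1 cm/px
canyon_scale = 40_000.0 / 4000  # 10 m/px
print(room_scale, canyon_scale)
```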

Another thing we can do in this section is compare the detail of the ziggurat image with the NASA version, which returns to one of my original points that the NASA version shows more detail.

This is not something that Mike is disputing. But to him, it’s just evidence of a conspiracy. He simply dismisses this by stating, “NASA has tons of specialized software and high end computing resources that could easily do many of [these things like adding detail].” As I’ve stated before, if Michael simply wants to go the “this is a conspiracy and no amount of evidence you give will convince me otherwise” route, then we can be done with this – something I’ll address in another post shortly.

Otherwise, the simplest explanation for this is that the ziggurat version is a later generation after having suffered several copyings. This is not a known fact, rather it is an educated opinion based on the available evidence that’s not influenced by the conspiracy mindset that Mike and Richard have.

Final Thoughts on These Points

Throughout Part 2 of his five-part rebuttal, Mike accuses me of making straw man arguments (though he doesn’t use that term), while doing that exact thing to me — making straw men of what I said and arguing against them. I never stated that reducing an image makes it better overall, I stated that the noise will decrease and so the noise profile will be better (as in less). Whether interpolation “enhances” detail is a topic for something else and is not at all directly related to the veracity of this lunar ziggurat, so I’m not addressing it here.

Part 3 to come on dynamic range, shadows, and internal reflections. At the moment, a part 4 is planned to be the last part and it’s going to examine language, tone, mentality, funding, and the overarching conspiracy mindset. It might be my last post on the subject, as well.

 

P.S. Not that this is any evidence for anything whatsoever, but I thought I’d throw out there the fact that even the people on the conspiracy website “Above Top Secret” say this is a hoax by someone. Again, this is evidence of nothing, really, but I thought it a tiny intriguing twist at least worth mentioning. Kinda like the fact that even though almost all UFOlogists think that the Billy Meier story is a hoax, Michael Horn keeps at it.

August 23, 2012

Where Is the Lunar Ziggurat, Anyway?


Introduction

This is I guess part 1 of what will be at least a three part reply to the five-part series that Mike has posted tonight. His posts are very long and so I’m unlikely to go into as many details as the nearly line-by-line of my first response to him. I also hope he’ll be kind enough to grant me a few days to respond before calling me further names – he took a week, after all – but we’ll see.

This post is specifically in response to his fourth post in the series in which he claims that the location of the ziggurat is something that I’ve missed entirely. There are of course plenty of names that he calls me in the process, which is also interesting considering that on his radio appearance tonight he’s accused me of lying about him, writing nasty comments, and putting attacks out.

I think if anyone has examined what I’ve written about this subject versus what Mike has, they’ll be able to see who actually does the writing of nasty comments, attacks, etc.

There are also numerous side-points made in Mike’s post that I think are side issues and not really worth dedicating time to mentioning. Suffice to say, you can read it if you really want to.

Anyway, the subject at hand: The crux of his “part 4″ is that Mike claimed I “missed” the location of the ziggurat by somewhere around one half to one mile, putting it outside of the LROC NAC frame I’ve been linking to. Since Mike doesn’t trust any digital space agency images these days anyway, I’m not sure why he chose to harp on this (well, likely because he thinks it makes me look stupid and “shows his [Stuart's] incompetence”), but we’ll go with it. He also says that this means all the detail regions of other images I’ve shown are showing the wrong place.

He mentioned this at least three times, and Mike claimed the actual location is 174.24°E, -8.90°N, and he did this by lining up a few craters.

As Mike has posted images directly from my blog during this “discussion,” I’ll link to one of his:

Mike’s Ziggurat Location (click to enlarge)

Where’s the Ziggurat

I was sent this a few days ago by someone I know who prefers to go by the pseudonym “GoneToPlaid.” In it, he goes through what I think is a pretty good analysis, matching up not four, but 25 different points to show where the ziggurat location would be if it were real.

Here’s the series, and you can click on any of them for a larger version. The only issue I have with this is that his final footprint (the fourth image) just is the “lit” part of the alleged ziggurat and does not show the extent of the NE and NW “walls.”

AS11-38-5564 and M149377797 Ziggurat Location, part A

AS11-38-5564 and M149377797 Ziggurat Location, part B

AS11-38-5564 and M149377797 Ziggurat Location, part C

AS11-38-5564 and M149377797 Ziggurat Location, part D

And, here’s the image with the alleged ziggurat so you can compare and see that we’re talking about the same region in the Apollo AS11-38-5564 region.

Original Lunar Ziggurat Image from Call of Duty Zombies Forum

Mike of course makes my point then, since this is where his ziggurat is: “What he [Stuart] points to as the “feature” is … simply a hill and a crater next to (“behind”) it. … It’s obvious from comparing the LROC map on the web page he links to that we he thinks is the Ziggurat – or what he asserts to his “fans” is the Ziggurat – is actually just an “X” shaped feature some small distance away.”

Since that IS the location of the feature, Bara has really made my point: What I pointed to is a natural feature. Ergo, since what I pointed to is where his ziggurat is, and his location is wrong, the ziggurat is not a real feature.

Final Thoughts on This Issue

I had done my own analysis originally, way back in July, to find the location. That’s how I found the location in lat/lon. I had matched up about a dozen craters to do so. I happened to post GoneToPlaid’s versions above because I think he does an excellent job in a clear, easy-to-see presentation style.

Mike is showing four points that are incorrectly linked up to the overhead non-oblique shots in this case, and he has a few others in other places on his blog post. His craters are actually correct in his “Missed it by that Much” image on the above-linked blog post, but they are not in the next image.

I’m surprised that this is actually an issue, though perhaps I shouldn’t’ve been. Anyway, as is now I hope very clear, my initial placement of the ziggurat region was correct, Mike’s location is clearly not.

This doesn’t prove/disprove the ziggurat at all, but it does show more incorrect image analysis.

One could ask at this point why I keep talking about this. In fact, some have, on both sides of the “issue.” The reasons are several, and you can read much more on my thoughts on this in the comments section of this post, starting with Tara’s post.

But to briefly summarize, with every post I have made on the topic, I’ve tried to address this from a critical thinking standpoint as well as show how you can go searching for information on your own and figure out what’s going on. There are also numerous misconceptions floating around throughout this and they’re common, and they don’t just apply to this tiny, insignificant “issue.” For example, in this post I showed you how you can go do your own independent analysis to figure out where an image is on the lunar grid. Maybe that’ll be useful in Jeopardy some day.

Almost everything I’ve talked about is applicable to a much broader array of things, and also, I think, this process is important to show how to investigate claims. And, since every scientist has to be able to convince their own colleagues of their results, explicitly being able to “get all your ducks in a row” is an ongoing learning experience for my own career.

In terms of “What’s the Harm?”, in this kind of stuff, there really isn’t too much specifically. You can believe whatever you want. If you want to believe there’s a ziggurat in some location on the moon built by ancient aliens or whatever, fine, I really, honestly don’t care. I had never heard of the “Brookings Institute report” before I listened to Coast to Coast and heard Hoagland talk about it, and I can almost guarantee you that the vast majority of astronomers have never heard of it, either. But more on that in (probably) part 3.

But, when you then spend money on this kind of stuff, such as the people who gave money to send Richard Hoagland to test hyperdimensional physics stuff in Egypt during the Venus transit but then he didn’t go and hasn’t published anything on it, well, I see that as harm. Yes, it was those peoples’ money and they can do what they want with it, but if they made the choice to send Richard $100 instead of buying groceries for a week (as one message going around has claimed, though I don’t know if it’s real or not), that’s a problem.

Part 2 to come …
