Exposing PseudoAstronomy

March 5, 2016

Do as I Say, Not as I Do to Find “Real” Image Anomalies


I finally submitted my first paper for peer-review in practically two years — roughly 350 hours over the last two months to analyze the data and write and edit a paper on the craters of Pluto, Charon, Nix, and Hydra. So now, in preparation for the big Lunar and Planetary Science Conference in two weeks, I have a few months' worth of other, lunar work to do in the next 12 days.

So, I've started to catch up with Richard Hoagland's "The Other Side of Midnight" program. The "barely lovable" (as Art Bell has said) folks over at BellGab pointed me to a particular evening, January 30, 2016, when Richard had some of his imaging guys (yes, all guys) on to talk about how to expose fakes. As in, people who fake anomalies in space images.

You can probably imagine that my eyebrows did more than rise just a bit.

I’m less than 20 minutes into the episode and already I’ve spotted some of the most ridiculous duplicity in what they are saying. Richard Hoagland and Will Farrar are saying over and over again that you have to go to the original data before you can say anything is real or not.

And they've pointed out some good examples, like how the anomalies in Hale crater on Mars are all caused by the 3D projection and image compression applied to the Mars Express images and are not there in the originals.

I’ll say it again: Richard stated on this program that doing any analysis on anything BUT the original images is completely useless. In fact, here’s one example, at about 16 minutes 15 seconds into the recording:

Will Farrar: “They’re going to claim they didn’t go out to get the thing…”

Richard Hoagland: “They didn’t go out and get, what? The original data?”

WF: “The raw. Yeah, the raw data, that’s–”

RH: “Well then it’s pointless! You blow them away on that basis alone! You can’t do science on second, third, fourth, fifth sources, you gotta go to the original. That’s the first rule!”

Another example, about 29 minutes 50 seconds into my recording, jumping off of Keith Laney saying that the first thing to do is get the raw data, Richard stated, "Yeah, that's the first thing we all do! When we see something interesting – those of us who know how to do this 'cause we've been at this awhile – the first thing you do is go and find the NASA original. … Find the original. Do not go by what's on the web. Never ever just go by what's on the web, unless it is connected to original data step by step by step."

I’m not 100% sure what he means by that last “unless…” part, unless it’s his way of giving himself an out. It’s hopelessly vague, for anyone could say that any product they make where they find an anomaly is from the original data and they can tell you the step-by-step process to get there. This was also at least the fifth time he talked about this, but the first time he gave himself the “unless,” so let’s proceed without it.

(Almost) everything that Richard has promulgated over the last few years is based on non-original images. To mention just three examples:

(1) Everything he and others have done with Pluto and Charon has been done with third-generation data, at best. That is, the raw data (1st) was compressed on the craft, either lossy or lossless (2nd), and then posted with another layer of lossy compression on public websites (3rd). The first batch of truly raw data will be released in April 2016, and it will only be what was on Earth as of encounter. Therefore, by Richard's own rules, every analysis that he and others have done finding anomalies on Pluto and Charon is "pointless."

(2) Everything he and others have done with Ceres and claims of cities and crashed spacecraft … see example 1 above. I’m not on the Dawn team, so I don’t know when their first or second batch of raw data will be publicly released. Therefore, by Richard’s own rules, every analysis that he and others have done finding anomalies on Ceres is “pointless.”

(3) His analysis of Chang'e 3 images claiming that there are giant glass structures on the moon was done with JPG-compressed images published on Chinese military websites. Not raw data. He claimed that this was proof that his analysis of Apollo images (which were 5th generation, at best, it's been estimated) showing giant glass towers on the moon was real. Therefore, by Richard's own rules, every analysis that he and others have done claiming from Apollo and Chang'e 3 images that there are giant glass cities on the moon is "pointless."

Well … that was fun.

P.S. Around 15 minutes into the second hour of the program, Richard stated that you can't possibly do any analysis on anything that's only 30 pixels across. Well then, Expat's deconstruction notwithstanding, Richard's own statement completely disqualifies the "Data's Head" feature he thinks he found in an Apollo image of the moon, which he claims shows an android's head. It's perhaps 15 pixels across, max.

August 27, 2015

Podcast Episode 139: New Horizons Pluto Encounter Conspiracies, Part 2


New Horizons’ pass
Through the Pluto system: Lots
Of crazy ensued.

Part 2 of the Great Pluto / New Horizons Conspiracies podcast mini-series is now posted. This one is loosely tied together through the theme of anomaly hunting, and it has a special guest star of (faulty) image analysis.

To be fair, again, all of these I have written about in my 11-part series. However, I know some people never read blogs and only listen to podcasts, and vice versa. So, I’m double-dipping. I don’t care. Again.

And it’s late at night … again … so I’ll close this brief post out by saying that I was recently interviewed not only on Steve Warner’s “Dark City” podcast, which you can directly listen to at this link, but I was also on Episode 363 of “The Reality Check” podcast to discuss New Horizons — and there really is only a smidgen of overlap between that TRC episode and my podcast episodes on the subject. So don’t not listen because you think that you’ll be hearing the same thing.

January 25, 2014

Episode 99: The Saga of the Lunar Ziggurat


Lunar ziggurat
Keeps on giving and giving …
Is there end in sight?

Sorry this one took so very, very long to get out. Jet lag is not fun. I gave two talks while I was in Australia, and both were versions of this, "The Saga of the Lunar Ziggurat." The audio this time is from a recording at the Launceston Skeptics in the Pub event and from a gathering about two weeks later in Melbourne. It was "live" and hence of variable quality, including street noise. But, the quality isn't bad.

August 29, 2012

Final Words on the Lunar Ziggurat? Pareidolia, Language, and Conspiracy


Introduction

I’ve now written nearly a dozen posts and 19.5k words (notice I don’t claim 20,000, even though Mike did when he wrote 17,650) on this lunar ziggurat “issue:”

The purpose of this post is to wrap up a few loose ends and return to the beginning, where this started. So there are four sections to this post, then a summary of where we are and why I don’t think there’s much more to be said (though I may revise that thought) on this.

Pareidolia

To quote from Mike’s part 5 of 5 posts on this:

“The actual truth is that there is no such thing as “Pareidolia.” It’s just a phony academic sounding word the debunkers made up to fool people into thinking there is scholarly weight behind the concept. It’s actually a complete sham. … The word was actually first coined by a douchebag debunker (is that my first “douchebag” in this piece?! I must be getting soft) named Steven Goldstein in a 1994 issue of Skeptical Inquirer. Since then, every major debunker from Oberg to “Dr. Phil” has fallen back on it, but it is still a load of B.S. There is no such thing.”

First, let’s get this out of the way: I never claimed that the ziggurat image is pareidolia. It’s clearly not. The question for the ziggurat is whether someone superposed a terrestrial ziggurat on a lunar photograph.

As far as I can tell, Mike’s etymology of the word is correct — he may have used the same resource I did, and I can’t find any previous references. (Updated per comments: Actually, the term goes back at least to the mid-1800s. From an 1867 journal: “… or, there is necessary an external and individual object very nearly corresponding in character to the false perception, whose objective stimulus blends with the deficient subjective stimulus, and forms a single complete impression. This last is called by Dr. Kahlbaum, changing hallucination, partial hallucination, perception of secondary images, or pareidolia. Those manifestations which have been hitherto termed illusions, are only in very small proportion actual delusions of the senses (partial hallucinations). For the most part they are pure delusions of the judgement, while a few are false judgments, founded on imperfect perception, or deceptions produced in the peripheral organs of sense and in external conditions.”)

Regardless, claiming that there is no such thing is about at the level of Mike's other claims: that centrifugal force makes you heavier, that an annular eclipse is when the moon is closer than normal to Earth, that you measure the major and minor axes of an ellipse from two arbitrary points within it, and his denial of dark matter (stay tuned for a podcast on that last one at some point).

Whether it has a word or not, it is a real phenomenon. The Rorschach ink blot test was created to make use of pareidolia. People make pilgrimages to distant places because they think Jesus or Mary is visible within the knot of a tree or an oil spot on a building window. And that’s just visual pareidolia.

The whole “EVP” (electronic voice phenomenon) is an example of audio pareidolia where you think you hear something in random noise. Skeptoid had a good episode on this, #105.

I’m really not sure why Mike decided to introduce such a blatant falsehood about human perception when it’s not even relevant to the ziggurat stuff.

Language

Another loose end is language. I've commented on this before, but it bears some repeating. Mike's language throughout this was originally pure insults, and when he realized I have a Ph.D., it turned into mockery and conspiracy accusations (see the next section for more on that). Mine has been remarkably restrained (in my never humble opinion). I've refrained from direct insults except in my initial analysis, in which I said my opinion was that Richard was either lying about having spent weeks studying the image, or was incompetent in his image analysis. As far as I can tell, those are the only direct insults, and they're relatively minor at that.

Contrast that with, say, Mike’s entire Part 1 blog post on this stuff.

The only real progress we’ve made over the last month is that he’s stopped calling me a hater.

Mike also stated that I feel the need to brand him a “heretic,” which is a term I have never used nor implied. I found that particularly humorous because just this past week, Skeptoid addressed that very issue — the need of pseudoscientists to claim that they are being branded as heretics. To quote from Brian Dunning’s transcript:

“It’s noteworthy that the term “heretic” is only ever used by dogmatic authorities. For example, the Catholic church used it during the Inquisition. I’ve never heard a working scientist call anyone a heretic in reference to their scientific work; instead, they simply point out that they’re wrong and why. But promoters of pseudoscience want to be called heretics, because that would make the scientific mainstream into a dogmatic authority. Whenever you run into a lone researcher who’s outside the mainstream and claims to have been labeled a heretic, you have very good reason to be skeptical.” (emphasis his)

That’s really all I have to say on this aspect, but I thought it important, yet again, to point out.

Another thing about language, though. Mike has claimed to “destroy” my arguments and to provide absolute proof that the ziggurat is real. I, on the other hand, have never used such black-and-white terminology. My position has always been that it is my opinion, based on the available evidence, and based on my analysis that I’ve now gone through at great length, that the ziggurat is more likely to be fake than real.

You might think I'm pointing out semantics, but they're important semantics. Scientists rarely speak in terms of absolutes, except in limited cases (for example, I've made declarative statements of fact about noise in images). When stating their position, it is almost always couched in "the evidence shows [this]" or "based on a preponderance of the evidence." That's because science is always open to revision, always open to being shown that previous conclusions were wrong based on new evidence brought to light.

And then there are the declarative statements of the pseudoscientists. There’s also, oftentimes, a failure to admit when they’ve made mistakes, even obvious, trivial ones that don’t really matter for their main arguments. I’ve pointed out many that Mike has made that don’t really impact his argument (and I’ve pointed out many he’s made that do impact his argument), but he’s never back-tracked on any of them.

Nor, as an aside, has he backtracked from any of the mistakes he made in his book, “The Choice.” For example, on August 12, someone wrote on his Facebook page: “Mike likes to say in his defence “I never said that, you are trying to get me to defend things I never said.” Well Mike, you DID say on page 32 of “The Choice” that centrifugal force makes us heavier. So you DID actually say that, and it’s simply completely wrong.”

Mike followed that up immediately with, "Show me the quote asshole. It doesn't say that. And it was a misprint anyway." Interesting how the statement supposedly isn't there, yet it was also somehow a misprint; it can't be both absent and merely misprinted. And just last night, he's now claiming that his book had two minor misprints, 10 words out of 50,000. Anyway, we're getting somewhat off-topic, so if you're at all interested in the many more than two basic, fundamental mistakes in "The Choice," I'll direct you to this post.

Fear and Conspiracy

Mike has claimed that it is fear (and money) that has driven me to write about this subject. Fear that my worldview will be turned upside-down, that I’m afraid of aliens or what alien artifacts would imply, that the Brookings Report is my Bible (you know, THE report, as opposed to all the other reports that think-tank has released over the decades), etc.

I know that regardless of what I say he won’t be convinced otherwise, but I’ll say it again anyway: It’s not true. As I have written innumerable times on this blog, the whole reason for doing science is to make new discoveries and overturn paradigms (and this is a real plug post for Skeptoid ’cause Dunning addressed this in the latest episode 324, too).

Let's do a little test: Raise your hand if you recognize the name Albert Einstein. Now raise your other hand if you recognize the name Francis Everitt. For those who don't have both hands raised, Everitt is the principal investigator of the Gravity Probe B mission that was a test of some of Einstein's theories. He's not a household name because he has upheld a paradigm; Einstein is a household name because he created it. 'Nough said.

Which brings us to the conspiracy and likely why this will be my last post on this subject. After all this discussion, we're really, in sum and substance, back at the beginning, because almost all the evidence that I have brought forth is simply dismissed, either as wrong (which I've explained is incorrect or likely incorrect) or as untrustworthy because it's all part of a conspiracy.

Mike claims that I lack honesty, and then he corrected himself on the radio and used the term "intellectual honesty." Meanwhile, Mike has stated at least twice that he baited me with blog posts to do his work for him in finding other images of the location. He then dismissed those images as part of the conspiracy while also saying that I had the location wrong, which I showed again was not the case. Lying about one's reason for something, then dismissing it anyway when it shows what you don't like … and then accusing me of intellectual dishonesty? Seriously?

I had taken more notes of stuff to say at this point, but after writing the above, I really don’t think any more needs to be said. It won’t convince anyone who believes what Mike says, and the people who don’t believe Mike are already convinced and know roughly what else I was going to say, anyway.

Real Quick – The Ziggy Location, Again

I think this bears repeating. Mike claims that I missed the location of the ziggurat.

Here’s my evidence that it’s where I claim it is, courtesy of “GoneToPlaid:”

AS11-38-5564 and M149377797 Ziggurat Location, D

Here’s Mike’s:

Location of Ziggurat According to Mike Bara

And here’s Mike’s with the actual, correct craters matched up:

Location of Ziggurat According to Mike Bara

As you can see, it’s fairly clear that Mike got his craters wrong, misjudging the scale and relative positions. He might be better off in the future paying attention to what the planetary geophysicist who actually studies craters says.

Where We Are Now

The question I asked a few posts ago was: What would it take to falsify your belief? Mike has not directly answered that. He's also pointed out that he doesn't give (a few swear words) about what I think or about my challenges. Which makes it interesting that he has spent so much time responding.

I laid out three primary categories of reasons that I think it’s fake. Mike’s responses to each can be summarized by the indented, bulleted text below each.

1. Why is there less noise in the NASA original and more noise in Mike's, and why is there more contrast (more pure black and more saturated highlights) in Mike's? Both of these pretty much always indicate that the one with more noise and more contrast is a later generation … you can't just Photoshop in more detail like that.

  • Mike spent a lot of time changing his definition of noise and going through a few misconceptions about it, but in the end, he claims that the noise in his version is texture from a poorly stored photo in an album that was later scanned, hence it’s an earlier generation because it’s from an old print. There is no evidence for this other than what he has interpreted as texture, and I argue that the more likely explanation is that it’s a late-generation copy.
  • Mike claims that there is more contrast in the NASA version because the black shadows are pure black (greyscale 0) while the shadows in his version are between ~18 and 31, so show a range. I argue that the range is due to noise, that the dynamic range of his version is roughly half the NASA version, and that the dynamic range within the bright areas is less in his version, thus supporting my statement that there’s more contrast in his version.
  • Mike misinterpreted my statement about Photoshopping in detail thinking I meant details like craters. The point still stands that once you have a saturated pixel, you cannot bring the information back without assumptions and then modeling what you think it should be.

2. Why do other images of the same place taken by several different craft (including non-NASA ones), including images at almost 100x the resolution of the original Apollo photo, not show the feature?

  • One claim Mike made is that I missed the location of the ziggurat. I have shown that I did not.
  • He also claims that he does not believe any of the current NASA images nor those from the SELENE (Japanese) mission, nor much of anything else except the old Apollo images, and even then, only some of them, such as the one that shows what appears, at first glance to most people, to be fake. He clearly stated that if the Chinese images don't show anything there, it's because they've been pressured to not release them or they're part of the conspiracy or some such thing.
  • He's brought in other Apollo photographs of the region taken from orbit, and when none showed a convincing feature, he stated that they were airbrushed out. Except for one of them, which, to me, looks even less like a pareidolia-ized ziggurat than the first (though Mike doesn't believe in pareidolia … see above).

3. Why are the shadowed parts of his ziggurat lit up when they're in shadow, on top of a hill, and not facing anything that should reflect light at them?

  • Much of Mike’s response was that scattered light will brightly light any shadowed region, and he has seen hundreds of examples of this.
  • This is something I have stated: scattered light can illuminate some things, faintly, but not to the effect it allegedly had on the ziggurat.
  • What he showed were mainly examples of scattered and refracted light within the optics of the camera itself rather than on the surface. One of his examples did have some stuff in shadow that was very, very faintly lit by scattered light.
  • To have the ziggurat shadowed part lit by scattered light would require an incredibly reflective surface that somehow withstood [insert time length] years of asteroid impacts to still reflect all the light that’s scattered onto it from a very small crater wall. I suppose this in itself is not impossible, but it strains credulity, especially when taken with all the other very unlikely things needed to be true for this to be real.

I could go through a timeline of stuff, too, but I don’t think that’s really worth getting into. The string of posts at the beginning shows it pretty well, I think.

So that's where we are. Neither of us is going to convince the other, of course. I've stated for a while now that this would end in one of probably three ways, in order of increasing likelihood:

  1. Mike would admit it’s likely a fake. (near-0% chance)
  2. Mike would just start to ignore it and move on with his other stuff.
  3. Mike would say that any evidence or explanation I bring forward is wrong or that he can dismiss it because it’s part of the conspiracy. After all, he already claims I’m bought and paid for so nothing I say can be trusted (Mike – how much do you make from promoting your ideas?). (near-100% chance)

Final Thoughts?

Clearly, Option #3 was always the most likely and it is primarily what he's gone with. Which really gets me back to: why are we going through this whole thing, anyway?

I cannot read minds, though I often wish I could, but my guess is that Mike feels the need to defend this considering that he's put so much effort into it and made it a centerpiece of his book due out in October. It also fits entirely within and reinforces the worldview that he sells (literally). He's also said he really doesn't care WHAT my analysis shows nor what my opinions are, so in that sense, I'm not sure why he's decided to continue writing so much on it even after Richard Hoagland suggested he not.

I’ve continued on with this in part because I’m stubborn, but also because I’ve been learning and teaching as I’ve been going. In terms of the former, I’ve learned how to obtain and process the SELENE images, how to be more precise, how to create videos, and techniques to bolster my claims. That will help me not just in this kind of education and public outreach work, but also in my career. For example, I’m headed to a conference in Flagstaff, AZ (USA) next month on cratering and I’ll be giving two presentations. My work is going to be challenged. If I can’t defend it, then it falls and I’m back to square one.

In terms of the latter, I've tried to gear each blog post on this not just towards the boring "debunking" stuff, but to illustrate to everyone who's reading how to do their own investigations into this stuff and NOT to take my word for things, and also to explain how certain things are done and how stuff works. For example, I've gone into great depth now in a few posts AND two podcasts on image processing and about images in general, such as dynamic range, noise, geometric correction, and how some basic filters work. In an age where nearly everyone who has internet (and so is reading this) has a digital camera, this is useful information to have, and I'll likely refer back to it in future posts on many disparate topics.

But, by this point, I think the impasse is more obvious than ever. I acknowledge that some of Mike's ideas are possible (e.g., the poorly stored print idea), but in my opinion they are unlikely – and many unlikely things would ALL need to be true for this – when compared with the null hypothesis: The ziggurat is a hoax by someone. Mike has not admitted to being wrong even when he's contradicted himself, and pretty much every argument I've made that he hasn't attempted to show is wrong has been relegated to a conspiracy. Nothing I say is going to change his mind on that, though that was pretty much known from the beginning.

I think it is probably time for a graceful exit on this issue by both parties. Mike's explained his position, I've explained mine, and you, the reader, are encouraged to do your own investigation and make up your own mind. If you decide the conspiracy is accurate, and you like the way Mike argues by primarily flinging insults, then more power to you because you've made The Choice: go buy Mike's books, spend money to hear him talk, and have fun.

 

Oh, and P.S., this should not be construed as a concession post by any stretch of the imagination.

August 28, 2012

Dynamic Range and Shadows


Introduction

This is part three of four posts in response to Michael Bara's five-part series that allegedly destroys my arguments that the ziggurat on the moon is not real. The next post is already written (mostly) and will come out shortly, wrapping things up.

Dynamic Range

I really think I’ve covered this enough by this point, but I’ll do it briefly again.

Below is the “original” ziggurat image that Mike has linked to. Below that is a histogram of its pixel values. Note that this looks slightly different from what Photoshop will show the histogram to be. That’s because Photoshop fakes it a teensy bit. This histogram was created using very rigorous data analysis software (Igor Pro) and shows a few spikes and a few gaps in the greyscale coverage:

Original Lunar Ziggurat Image from Call of Duty Zombies Forum


Histogram of Pixel Values in Original Ziggurat Image

The dynamic range available for this image is 8-bit, or 0 through 2^8-1, or 256 shades of grey (or 254 plus black plus white — semantics). The actual dynamic range the image covers is less than this — its values span only 12 through 169, or 158 possible shades of grey — just a little over 7-bit.
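
If you want to check this kind of thing yourself, here is a minimal Python/NumPy sketch (not the Igor Pro workflow I actually used, and the filename is just a placeholder) that measures the spanned greyscale range of an 8-bit image and builds the same sort of one-bin-per-shade histogram:

```python
# A minimal sketch, assuming Pillow and NumPy are installed; the filename
# is a placeholder, and this is not the exact workflow used for the figures.
import numpy as np
from PIL import Image

img = np.array(Image.open("ziggurat_crop.png").convert("L"))  # 8-bit greyscale

lo, hi = int(img.min()), int(img.max())
spanned = hi - lo + 1
print(f"values span {lo}..{hi}: {spanned} of 256 shades (~{np.log2(spanned):.1f} bits)")

# One histogram bin per possible shade, handy for spotting spikes and gaps
counts = np.bincount(img.ravel(), minlength=256)
```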

Compare that with the NASA image (whether you think the NASA image has been tampered with or not, that’s unimportant for this explanation), shown below. Its histogram spans values from 0 through 255, showing that it takes up the entire 8-bit range.

"Ziggurat" Area in NASA Photo AS11-38-5564


Histogram of Pixel Values in Original NASA Image of Ziggurat Location

The immediate implication is that the ziggurat version has LOST roughly half of its information, its dynamic range. Or, if you’re of the conspiracy mindset, then the NASA version has been stretched to give it 2x the range.

Another thing we can look at is those spikes in the dark end and the gaps in the bright end. I was honestly surprised that these were present in the NASA one because what this shows is that the curves (or levels) have been adjusted (and I say that with full realization of its ability to be quote-mined). The way you get the spikes is by compressing a wide range of shades into a narrower range. Because pixels must have an integer (whole number) value, rounding effects mean that you'll get some shades with more pixels than others.

Similarly, the bright end has been expanded. This means the opposite – you had a narrow range of shades and those were re-mapped to a wider range. Again, due to rounding, you can get some values with no pixels in them.
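
To see why rounding does this, here is a tiny, purely synthetic Python sketch (made-up shade values, not either actual image):

```python
# Synthetic illustration only: how integer rounding in a levels adjustment
# creates histogram spikes (compression) and gaps (expansion).
import numpy as np

shades = np.arange(0, 101)  # pretend each input shade 0..100 holds one pixel

# Compress the range by half: several input shades round to the same output
# shade, so some output bins pile up (spikes).
compressed = np.round(shades * 0.5).astype(int)
print(np.bincount(compressed)[:6])   # uneven counts instead of one per shade

# Expand the range by two: the outputs skip shades, so the histogram has
# empty bins (gaps).
expanded = np.round(shades * 2.0).astype(int)
print(np.bincount(expanded)[:6])     # alternating counts of 1 and 0
```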

This can be done manually in software, or it can also be done automatically. Given their spacing, it looks like a relatively basic adjustment has been made rather than any more complicated mapping, for both the Call of Duty Zombies image with the ziggurat and NASA's.

The fact that BOTH the ziggurat one and the NASA one have these gaps and spikes is evidence that both have been adjusted brightness-wise in software. But, taken with the noise in the ziggurat one, the smaller dynamic range, and the reduced detail, these all combine to make the case for the ziggurat version being a later generation image that’s been modified more than the NASA one (see previous post on noise and detail — this section was originally written for that post but I decided to move it to this one).

Dark Pixels, Shadow, and Light

What is also readily apparent in the NASA version is that there are many more black pixels in the region of interest. This could mean several very non-conspiracy things (as opposed to the “only” answer being that NASA took a black paintbrush to it).

One is what I have stated before and I think is a likely contributor: The image was put through an automatic processing code either during or after scanning, before being placed online. As a default in most scanning software, a histogram of the pixel values is created, the darkest 0.1% of pixels are set to shade 0, and the brightest 0.1% of pixels are set to shade 255. Sometimes, for some reason, this default is set to 1% instead, though it is also manually variable (usually).
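
As a rough illustration (every scanner's software will differ in the details, so this is a sketch rather than any particular program's actual code), the clip amounts to something like this:

```python
# Sketch of a 0.1% auto-clip, not any particular scanner's actual algorithm.
import numpy as np

def auto_clip(img, clip_percent=0.1):
    """Clip the darkest/brightest clip_percent of pixels and restretch to 0-255."""
    lo = np.percentile(img, clip_percent)
    hi = np.percentile(img, 100.0 - clip_percent)
    out = np.clip(img.astype(float), lo, hi)
    out = (out - lo) / (hi - lo) * 255.0   # darkest survivors -> 0, brightest -> 255
    return np.round(out).astype(np.uint8)
```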

Another part of this that I think is most likely is that, as I’ve said before, shadows on the moon are very dark. A rough back-of-the-envelope calculation is that earthshine, the only “direct” light into some sun-shadowed regions on the near side, is around 1000x fainter than sunlight would be. On the far side – and these photos are from the far side – there is no earthshine to contribute.

Which means the only other way to get light into the shadowed region would be scattering from the lunar surface itself. Mike misreads several things and calls me out where I admitted to making a mistake in my first video (Mike, how many mistakes have you made in this discussion? I’ve called you out on two very obvious ones in previous posts, and I call you out on another, below). Yes, you can get scattered light onto objects that are in shadow. If you have a small object casting a small shadow (such as a lunar module), then you have a very large surface surrounding it that will scatter relatively a lot of light into it. That’s why the Apollo astronauts are lit even when they are in the shadow of an object.

However, if you have a very large object – such as a 3-km-high crater rim – that casts a shadow – such as into the crater – then there is much less surrounding surface available to scatter light into the shadowed region. Also, remember that the moon reflects (on average) only about 10% of the light it receives*. So already any lunar surface that’s lit only by scattered light would be 10x fainter than the sun-lit part, and that’s assuming that ALL light scattered off the sun-lit lunar surface scatters into the shadowed parts to be reflected back into the camera lens, as opposed to the vast majority of it that just gets scattered into space.

*As opposed to Mike’s claim: “Since the lunar surface is made mostly of glass, titanium and aluminum, it tends to be very highly reflective.” Um, no (source 1, source 2).
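
To put numbers on that argument (back-of-the-envelope only, with a deliberately generous, made-up geometry assumption, not a measurement):

```python
# Back-of-the-envelope only; the geometry fraction is an illustrative assumption.
albedo = 0.10        # average lunar albedo, ~10%
into_shadow = 1.0    # generous upper limit: ALL scattered light lands in the shadow

# Both surfaces reflect the same ~10% toward the camera, so the brightness
# ratio the camera records is set by the illumination each surface receives.
shadow_vs_sunlit = albedo * into_shadow
print(shadow_vs_sunlit)   # 0.1 -> at best 10x fainter than the sunlit ground

# A more realistic fraction (a few percent of the scattered light) pushes the
# shadowed ground to well under 1% of the sunlit brightness.
```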

Now, yes, there will still be some light scattered into the shadowed region, but it will be very little, relatively speaking, compared with the shadow of a small object, and it will be even less, relatively speaking, when compared with the sun-lit surrounding surface. For example, let’s look at AS11-38-5606:

Apollo Image AS11-38-5606

This image was taken at a low sun angle, and there are a lot of shadows being cast. And look! They’re all very very black. The photographic exposure would need to be much longer in order to capture any of the minuscule amount of light scattered into the shadowed regions that were then scattered into the camera.

Now, before we go back to the ziggurat, let’s look at another part of this claim. Mike states: “I have seen hundreds, if not thousands, of lunar images where the shadows are far from “pitch-black (or almost pitch-black).””

In support of this, Mike points to images such as AS11-44-6609:

NASA Apollo Photo AS11-44-6609

If you go to the full resolution version, you do see that the shadowed regions are not pitch black! WTF is going on!?

First, if you check the levels in Photoshop, the 0.1% clip has either already been applied or it was never relevant to this image. So this does not falsify my previous statement of that being a possibility for the black shadows in the "ziggurat" one.

Second, let’s look at a few photos later, AS11-44-6612:

NASA Apollo Photo AS11-44-6612

See that big crater up to the top? That's the same one that's near the middle-right in #-6609. Notice that instead of having a greyscale equivalent of around 25%, this time that very same shadow, taken just a few seconds or minutes later but at a different angle and through a different part of the lens, has decreased in brightness by over half. Meanwhile, shadows that are in roughly the same position of the frame (as in middle-right versus upper-middle) have a similar brightness as that shadow did in #-6609.

Also, look at the black space above the lunar surface (the right of the frame unless you've rotated it). The part of the sky near the top and bottom is at ~5% brightness. The part near the middle is at around 13%. That's 2-3x as bright, when space should be completely dark in this kind of exposure under ideal optics.

If you’re a photographer, you probably know where I’m going with this: The simplest explanation is that this is either a lens flare from shooting in the general direction of the sun, and/or this is grime on the lens causing some scattering. Less probable but still possible would be a light leak.

And, a closer examination of the shadowed areas does show some very, very faint detail that you can bring out, but only towards the middle of the image where that overall glow is.

Meanwhile, if you look through, say, the Apollo 11 image catalog at the B&W images, the shadows in pretty much every orbital photo are completely black. The shadows in the color ones are not.

As a photographer, this is to me the most likely explanation for AS11-44-6609 and images like it, where Mike points to shadows that are lit:

  1. Original Photography:
    • Image was taken in the general direction of the sun so that glare was present.
    • And/Or, there was dirt on the lens or on the window through which the astronauts were shooting.
    • This caused a more brightly lit part of the image to be in a given location, supported by other images on the roll that show the same brightness in the same location of the frame rather than the same geographic location on the moon.
    • Some scattered light from the lunar surface, into the shadowed regions, off the shadowed regions, into the camera, was recorded.
  2. Image Scanning:
    • Negative or print was scanned.
    • Auto software does a 0.1% bright/dark clip, making the darkest parts black and brightest parts white. This image shows that effect in its histogram.
    • This causes shadows at the periphery to be black and show no detail.
    • Since the center is brighter, there’s no real effect to the brightness, and the very faint details from the scattered light are visible.

Contrast that with AS11-38-5564 (the ziggurat one), which has even illumination throughout. A simple levels clip would eliminate all or almost all detail in the shadowed regions. And/or, the original exposure was somewhat too short to record any scattered light. And/or the film used was not sensitive enough, which is bolstered as a potential explanation by what I noted above – that orbital B&W photography from the mission shows black shadows while orbital color shows a teensy bit of detail in some of the shadows.

In my opinion, that is a much more likely explanation given the appearance of the other photos in the Apollo magazines than what Mike claims, that NASA painted over it.

Which after long last brings us back to the ziggurat. Even in Mike's exemplar, the stuff in the brightest shadow is BARELY visible, much less so than the wall of his ziggurat. I suppose if Mike wants to claim that the ziggurat walls are 100% reflective, plus someone has done a bleep-load of enhancement in the area, then sure, he can come up with a way for the walls to be lit even when they are in shadow.

Do I think that’s the most likely explanation, especially taken in light of everything else? No.

Final Thoughts on This Part

One more part left in this series, and by this point I’ve really addressed the main, relevant points in Mike’s five-part series.

Far from "destroying" my arguments, I think that, at the very most, he's raised some potential doubt about one or two small parts of my argument. Taken individually, if one is conspiracy-minded and already believes in ancient artifacts on the moon, those individual doubts could be used to make it look like the ziggurat is real.

However, taken as a whole, and approached with less of a conspiratorial mindset – one in which you must provide extraordinary evidence for your extraordinary claim and show that the null hypothesis is rejected by a preponderance of indisputable evidence – the conclusion is that the ziggurat is not real.

August 24, 2012

Let’s Talk About Image Noise and Detail


Introduction

Part 2 of N in my response to Mike Bara’s 5-part post on the lunar ziggurat stuff.

I’ve talked about these things before a couple times, including in my last podcast episode, but clearly some did not understand it and some did not clearly read what I stated. So let’s go through this very carefully.

These are important concepts and applicable to a wide variety of applications – not only in identifying pseudoscience, but also in understanding how digital images work, and the likelihood that you, who are currently reading this, have a digital camera is pretty high.

Image Noise, Gaussian

I’ll quote first from a previous podcast episode:

All photographs have an inherent level of noise because of very basic laws of thermodynamics — in other words, the fact that the atoms and molecules are moving around means that you don’t know exactly what data recorded is real. The colder you can get your detector, the less noise there will be, which is why astronomers will sometimes cool their CCDs with liquid nitrogen or even liquid helium.

That said, I haven’t really explained what noise is, and I’m going to do so again from the digital perspective. There are two sources of noise. The first is what I just mentioned, where the atoms and electrons moving around will sometimes be recorded as a photon when there really wasn’t one. The cooler the detector, the less they’ll move around and so the less they’ll be detected. This is purely random, and so it will appear in some pixels more than others and so you don’t know what’s really going on.

The other kind of noise is purely statistical. The recording of photons by digital detectors is a statistical process, and it is governed by what we call “Poisson Statistics.” That means that there is an inherent, underlying uncertainty where you don’t know how many photons hit that pixel even though you have a real number that was recorded. The uncertainty is the square-root of the number that was recorded.

… What’s the effect of noise when you don’t have a lot of light recorded? Well, the vast majority of you out there listening to this probably already know because you’ve taken those low-light photos that turn out like crap. They’re fuzzy, the color probably looks like it has tiny dots of red or green or blue all over it, and there’s little dynamic range. That’s a noisy image because of the inherent uncertainty in the light hitting every pixel in your camera, but so that it wasn’t completely dark, your camera multiplied all the light – the noise included – in order to make something visible.

With the idea of noise in mind, after an image is taken, there is only one way to scientifically reduce the noise without any guesswork based on a computer algorithm: Shrink it. When you bin the pixels, as in doing something like combining a 2×2 set of four pixels into one, you are effectively adding together the light that was there, averaging it, and so reducing the amount of noise by a factor of 2. …

Noise is random across the whole thing, and it makes it look grainy. A perfectly smooth, white surface could look like a technicolor dust storm if you photograph it under low light.

Now with diagrams!

Below is a 500 by 500 pixel image made of pure, random, Gaussian noise. I created the noise in software and gave it a mean of 128 (neutral grey in 8-bit space) and a standard deviation of 25, meaning that about 68% of the pixels will be within ±25 shades of 128, about 95% will be within ±50 shades, and about 99.7% will be within ±75 shades. Also included below is a histogram showing the number of pixels at each shade of grey. As you can see, it’s a lovely bell curve that we all know and love with a mean of 128 and standard deviation of 25 (actual standard deviation is 24.946, but that’s because we’re not using an infinite number of points).

500×500 Pixel Image of Gaussian Noise


Histogram of 500×500 Pixel Image of Gaussian Noise

Now, in the diagram below, I’ve binned everything 2×2. As in, it’s now 250 by 250 pixels. What happens to the noise?

250×250 Pixel Image of Gaussian Noise


Histogram of 250×250 Pixel Image of Gaussian Noise

The distribution of pixel values is still a bell curve, but it’s narrower. The mean is still 128. But, the width of the noise – the amount of noise – has decreased to 12.439 … very close to the theoretical decrease of 2x to 12.5.

Now, bin it 4×4:

125×125 Pixel Image of Gaussian Noise


Histogram of 125×125 Pixel Image of Gaussian Noise

The Gaussian distribution is narrower still; this time its width is 6.193, very close to the theoretical value of 6.25 from a reduction of 4x.
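
If you want to reproduce this, here is a short NumPy sketch of the same experiment (your exact numbers will differ slightly because the noise is random):

```python
# Re-creation of the binning experiment; exact values vary with the random seed.
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(loc=128, scale=25, size=(500, 500))  # Gaussian noise, mean 128, sigma 25

def bin_image(a, n):
    """Average n x n blocks of pixels into single pixels."""
    h, w = a.shape
    return a.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

print(img.std())                 # ~25
print(bin_image(img, 2).std())   # ~12.5 (noise halved)
print(bin_image(img, 4).std())   # ~6.25 (noise quartered)
```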

When I select a 100 by 60 pixel region of shadow in the ziggurat image, the width of the noise is ±1.66 shades. Binning 2×2 and it drops to 1.58, 3×3 drops to 1.41, 4×4 drops to 1.33, 5×5 drops it to 1.29, and 10×10 drops it to 0.87.

So, that’s what random noise is, and that’s what happens when you decrease an image – you reduce the noise. This is an unambiguous and inalienable FACT.

Image Noise, “Salt & Pepper” and Texture

Another type of noise is simply defective pixels, or, in the analog days, defective film grains or cosmic rays hitting the film. These manifest as single, individual pixels scattered throughout the image that are either very bright or very dark relative to their surroundings.

A related kind of noise comes from digitized printed photos, and it shows up as texture. If you've ever scanned in something like a 100-year-old photograph (or a poorly stored 10-year-old photograph), you've likely seen this kind of noise. In fact, Mike says that this is his working hypothesis as to why the shadowed regions aren't one solid color now: photo album residue. Um, even if that's the case, this is still technically noise because it's masking the signal.

Image Noise, Removing

As I’ve stated, reducing an image size is one way to reduce noise. It does, however, remove detail. The reason this whole thing got started was that Mike stated, quite directly: “What Mr. Robbins didn’t tell you is that a large chunk of the “noise” that appears in the image he “processed” was deliberately induced – by him. … In fact, anyone who knows anything about image enhancement knows that scaling/reducing an image induces more noise and reduces detail by design.” (emphasis his)

We'll get to what detail is in the next section, but quite clearly and directly, Mike states that reducing an image in size creates noise. That statement is factually incorrect. In his latest post (part 2 of 5), he wants to know why I reduced the image size at all if it means reducing detail (which is talked about below). If he had bothered to read in context, the reason was so that I could line up the ziggurat image with the NASA one to figure out exactly where it is. They weren't at the same scale, so one had to be scaled relative to the other. It was easier to reduce the size of the smaller ziggurat image than to increase the size of the much larger full image, so that's what I did. It really doesn't change much of anything.

Anyway, moving on … So, how do you remove noise without removing information that’s there? In reality, you cannot.

The method of reducing an image in size is one way, but clearly that will remove detail, and when you do this with a small image, you don’t necessarily have that detail to spare. Though as I’ve talked about before, astronomers will often use this method because it is the ONLY way to NOT introduce algorithm-generated information into the image.

Otherwise, there are several other methods that can be used to reduce the noise, but all of them will reduce the actual signal in the image to some extent. Depending on the exact algorithm and the exact kind of image you’re working with (as in, is it something like a forest versus clouds versus sand), different algorithms work better to preserve the original detail. But, you will always lose some of that detail.

One algorithm that’s easy to understand is called a “median” algorithm. This is an option in Photoshop, but it’s not the default “Reduce Noise” filter (I do not know and couldn’t easily find what the algorithm used by Photoshop is by default – it’s probably some proprietary version of a fancier algorithm). The median method takes a pixel and a window of pixels around it. Let’s just say 1 pixel around it to keep this simple.

So you have a pixel, and you have all the pixels that it touches, so you have 9 pixels in total. You then take the median value, which is the middle number of a sorted list. So if the pixels in your 3×3 block have values 105, 92, 73, 95, 255, 109, 103, 99, 107, then the median of those is 103 because that’s the middle number once you sort the list. You’d save that to the new version.

You would then move one pixel over in the original version and save the median of a 3×3 block with that one at the center to the new version. And so on.

Why median instead of average? Because that way hot pixels and dead pixels don't affect you nearly as much. That pixel value of 255 would be a hot pixel in that 3×3 block, and it would make the average 115, whereas the median, 103, is about 10.5% dimmer. If, say, the 109-valued pixel were also hot, and it was 255, the median would STILL be 103, but the average would now be 132.

So that’s one method. The end result is that the outliers will be removed, and you’ve reduced the noise. Choosing a larger window reduces the noise more because you’re sampling a broader range of pixels from which to get a median (this is under the assumption that the number of hot and cold pixels is less than the number of good pixels).
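
Here is a minimal sketch of that 3×3 median filter in Python (pure NumPy for clarity; in practice you would reach for a library routine such as scipy.ndimage.median_filter):

```python
# Minimal 3x3 median filter sketch; slow, but it shows the algorithm directly.
import numpy as np

def median_filter_3x3(img):
    padded = np.pad(img, 1, mode="edge")       # repeat edge pixels so every pixel has 9 neighbors
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + 3, x:x + 3]  # the pixel plus its 8 neighbors
            out[y, x] = np.median(window)      # middle value of the sorted nine
    return out

# The worked example from above: the hot pixel (255) drags the mean up but
# leaves the median untouched.
block = np.array([105, 92, 73, 95, 255, 109, 103, 99, 107])
print(np.median(block), block.mean())   # 103.0  115.33...
```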

But, in doing this, you are changing the information there, and every algorithm with which I’m familiar to remove noise will also remove some details. The details to go first are usually those small outliers that are real, like if you’re photographing a night scene and have some stars in your shot. Median noise reduction will remove those stars fairly effectively in addition to the noise. As I said, there are other algorithms that can be used depending on what exactly is in the image, but they will change the information that is there, and they will reduce detail by a measurable amount.

It should be noted that Mike's default seems to be the Photoshop "Reduce Noise" filter. Here's the result when he runs it on the image (© his blog), with the "original" for comparison first:

Original Lunar Ziggurat Image from Call of Duty Zombies Forum

AS11-38-5564, with Ziggurat, Noise Reduction by Mike Bara

Ignoring the contrast enhancement, some of the noise is reduced a bit, but so is some of the detail (which he admits: "It's a bit blurry"). Once you lose that detail, you cannot get it back. Well, unless you go back to a previous version.

Detail, Resolution, and Pixel Scale

Noise is not at all related to detail except in its ability to obfuscate that detail. Detail is effectively the same as resolution, where according to my handy built-in Mac dictionary, resolution is defined for images as: “the smallest interval measurable by a scientific (esp. optical) instrument; the resolving power. The degree of detail visible in a photographic or television images.”

Pixel scale is similar and related — it is the length in the real world that a pixel spans. So if I take a photograph of my room, and I take another photograph with the same camera of the Grand Canyon, the length that each pixel covers in the first is going to be much smaller than the length that each pixel covers in the second. The pixel scale might be, say, 1 cm/px (~1/2 inch) for the photo of my room, while it might be around 10 m/px (~30 ft) for the photo of the Grand Canyon.

Don't see the difference? It's really subtle. Here's a comment I got from an anonymous reviewer (though I figured out who it was) of a paper I wrote last year that explains it in a way only an older, curmudgeonly scientist can:

Citing “resolution” in m/pixel is like citing distance in km/s. Scale = length/pixel; resolution = length, as is a function of several parameters in addition to sampling scale. Nearly everyone in the planetary community gets this wrong, which makes the terrestrial remote sensing community think we’re idiots.

So, my point in going through these definitions, besides getting them clearly out there, is that, obviously, if you are reducing an image in size to reduce the noise, you are obviously also reducing the detail, resolution, and pixel scale. Or is it increasing the pixel scale ’cause your pixels now cover a larger area? Whatever the proper direction is, you get the idea, and to suggest that I implied or stated otherwise is wrong.

Another thing we can do in this section is compare the detail of the ziggurat image with the NASA version, which returns to one of my original points that the NASA version shows more detail.

This is not something that Mike is disputing. But to him, it's just evidence of a conspiracy. He simply dismisses this by stating, "NASA has tons of specialized software and high end computing resources that could easily do many of [these things like adding detail]." As I've stated before, if Michael simply wants to go the "this is a conspiracy and no amount of evidence you give will convince me otherwise" route, then we can be done with this – something I'll address in another post shortly.

Otherwise, the simplest explanation for this is that the ziggurat version is a later generation that has been through several rounds of copying. This is not a known fact; rather, it is an educated opinion based on the available evidence that's not influenced by the conspiracy mindset that Mike and Richard have.

Final Thoughts on These Points

Throughout Part 2 of his five-part rebuttal, Mike accuses me of making straw man arguments (though he doesn't use that term), while doing that exact thing to me — making straw men of what I said and arguing against them. I never stated that reducing an image makes it better overall; I stated that the noise will decrease, and so the noise profile will be better (as in, less noise). Whether interpolation "enhances" detail is a topic for something else and is not at all directly related to the veracity of this lunar ziggurat, so I'm not addressing it here.

Part 3 to come on dynamic range, shadows, and internal reflections. At the moment, a part 4 is planned to be the last part and it’s going to examine language, tone, mentality, funding, and the overarching conspiracy mindset. It might be my last post on the subject, as well.

 

P.S. Not that this is any evidence for anything whatsoever, but I thought I’d throw out there the fact that even the people on the conspiracy website “Above Top Secret” say this is a hoax by someone. Again, this is evidence of nothing, really, but I thought it a tiny intriguing twist at least worth mentioning. Kinda like the fact that even though almost all UFOlogists think that the Billy Meier story is a hoax, Michael Horn keeps at it.

August 23, 2012

Where Is the Lunar Ziggurat, Anyway?


Introduction

This is, I guess, part 1 of what will be at least a three-part reply to the five-part series that Mike has posted tonight. His posts are very long, and so I'm unlikely to go into as many details as the nearly line-by-line treatment in my first response to him. I also hope he'll be kind enough to grant me a few days to respond before calling me further names – he took a week, after all – but we'll see.

This post is specifically in response to his fourth post in the series in which he claims that the location of the ziggurat is something that I’ve missed entirely. There are of course plenty of names that he calls me in the process, which is also interesting considering that on his radio appearance tonight he’s accused me of lying about him, writing nasty comments, and putting attacks out.

I think if anyone has examined what I’ve written about this subject versus what Mike has, they’ll be able to see who actually does the writing of nasty comments, attacks, etc.

There are also numerous side-points made in Mike’s post that I think are side issues and not really worth dedicating time to mentioning. Suffice to say, you can read it if you really want to.

Anyway, the subject at hand: The crux of his "part 4" is that Mike claimed I "missed" the location of the ziggurat by somewhere around one half to one mile, putting it outside of the LROC NAC frame I've been linking to. Since Mike doesn't trust any digital space agency images these days anyway, I'm not sure why he chose to harp on this (well, likely because he thinks it makes me look stupid and "shows his [Stuart's] incompetence"), but we'll go with it. He also says that this means all the detail regions of other images I've shown are of the wrong place.

He mentioned this at least three times; Mike claimed the actual location is 174.24°E, -8.90°N, which he determined by lining up a few craters.

As Mike has posted images directly from my blog during this “discussion,” I’ll link to one of his:

Mike's Ziggurat Location

Where's the Ziggurat?

I was sent this a few days ago by someone I know who prefers to go by the pseudonym “GoneToPlaid.” In it, he goes through what I think is a pretty good analysis, matching up not four, but 25 different points to show where the ziggurat location would be if it were real.

Here's the series, and you can click on any of them for a larger version. The only issue I have with this is that his final footprint (the fourth image) is just the "lit" part of the alleged ziggurat and does not show the extent of the NE and NW "walls."

AS11-38-5564 and M149377797 Ziggurat Location, part A

AS11-38-5564 and M149377797 Ziggurat Location, part B

AS11-38-5564 and M149377797 Ziggurat Location, part C

AS11-38-5564 and M149377797 Ziggurat Location, part D

And, here's the image with the alleged ziggurat so you can compare and see that we're talking about the same region of Apollo photo AS11-38-5564.

Original Lunar Ziggurat Image from Call of Duty Zombies Forum

Mike of course makes my point then, since this is where his ziggurat is: "What he [Stuart] points to as the "feature" is … simply a hill and a crater next to ("behind") it. … It's obvious from comparing the LROC map on the web page he links to that what he thinks is the Ziggurat – or what he asserts to his "fans" is the Ziggurat – is actually just an "X" shaped feature some small distance away."

Since that IS the location of the feature, Bara has really made my point: what I pointed to is a natural feature. Ergo, since what I pointed to is where his ziggurat is, and his alternative location is wrong, the ziggurat is not a real feature.

Final Thoughts on This Issue

I had done my own analysis originally, way back in July, to find the location – that’s how I found the lat/lon – and I matched up about a dozen craters to do so. I posted GoneToPlaid’s versions above because I think he did an excellent job with a clear, easy-to-see presentation.
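
(For anyone who wants to try this kind of crater tie-point matching themselves, here’s a minimal sketch of the general idea – not the exact procedure GoneToPlaid or I used. You mark the pixel coordinates of the same craters in both images, fit a simple transform by least squares, and then map any point of interest from one image to the other. All coordinates below are made-up placeholders, and for a strongly oblique shot a full perspective fit would do better, but an affine fit gets the idea across.)

```python
import numpy as np

# Hypothetical tie points: pixel coordinates of the SAME craters measured in
# the oblique Apollo frame (src) and in an overhead LROC/Kaguya view (dst).
# All numbers here are placeholders, not real measurements.
src = np.array([[120, 340], [410, 290], [255, 510], [600, 455],
                [330, 150], [505, 610]], dtype=float)
dst = np.array([[ 98, 402], [388, 365], [230, 575], [580, 530],
                [310, 220], [482, 680]], dtype=float)

# Least-squares fit of an affine transform: [x' y'] = [x y 1] @ A,
# where A is a 3x2 matrix of unknowns.
M = np.hstack([src, np.ones((len(src), 1))])
A, *_ = np.linalg.lstsq(M, dst, rcond=None)

# Map a point of interest (e.g., the claimed ziggurat spot in the Apollo
# frame) into the overhead frame, and report how well the tie points fit.
point = np.array([300.0, 400.0, 1.0])
print("Mapped location (px):", point @ A)
rms = np.sqrt(np.mean(np.sum((M @ A - dst) ** 2, axis=1)))
print("RMS tie-point error (px):", rms)
```

The residual error is why more tie points matter: 25 well-matched craters constrain the placement far better than 4.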

Mike is showing four points that are incorrectly linked up to the overhead, non-oblique shots in this case, and he has a few others in other places on his blog post. His craters are actually correct in his “Missed it by that Much” image on the above-linked blog post, but they are not in the next image.

I’m surprised that this is actually an issue, though perhaps I shouldn’t have been. Anyway, as I hope is now very clear, my initial placement of the ziggurat region was correct; Mike’s location clearly is not.

This doesn’t prove/disprove the ziggurat at all, but it does show more incorrect image analysis.

One could ask at this point why I keep talking about this. In fact, some have, on both sides of the “issue.” The reasons are several, and you can read much more on my thoughts on this in the comments section of this post, starting with Tara’s post.

But to briefly summarize: with every post I have made on the topic, I’ve tried to address this from a critical thinking standpoint as well as show how you can go searching for information on your own and figure out what’s going on. There are also numerous common misconceptions floating around throughout this, and they don’t just apply to this tiny, insignificant “issue.” For example, in this post I showed you how you can do your own independent analysis to figure out where an image falls on the lunar grid. Maybe that’ll be useful on Jeopardy! some day.

Almost everything I’ve talked about is applicable to a much broader array of things, and also, I think, this process is important to show how to investigate claims. And, since every scientist has to be able to convince their own colleagues of their results, explicitly being able to “get all your ducks in a row” is an ongoing learning experience for my own career.

In terms of “What’s the Harm?” – with this kind of stuff, there really isn’t much, specifically. You can believe whatever you want. If you want to believe there’s a ziggurat in some location on the moon built by ancient aliens or whatever, fine, I really, honestly don’t care. I had never heard of the “Brookings Institute report” before I listened to Coast to Coast and heard Hoagland talk about it, and I can almost guarantee you that the vast majority of astronomers have never heard of it, either. But more on that in (probably) part 3.

But when you then spend money on this kind of stuff – such as the people who gave money to send Richard Hoagland to test hyperdimensional physics stuff in Egypt during the Venus transit, but then he didn’t go and hasn’t published anything on it – well, I see that as harm. Yes, it was those people’s money and they can do what they want with it, but if they made the choice to send Richard $100 instead of buying groceries for a week (as one message going around has claimed, though I don’t know if it’s real or not), that’s a problem.

Part 2 to come …

August 16, 2012

Podcast Episode 48: Image Processing and Anomalies, Part 2


Alrighty, episode 48 has been posted, and the companion video has, as expected, been delayed.

This episode is almost as long as part 1, and I still left stuff out. Sharpening, filters, and the like are going to wait for a later episode. The topics discussed this time are: dynamic range, noise, rotation and resizing, and levels, curves, and contrast.

The bottom line with this episode is that even the most seemingly innocuous adjustments – like Auto Levels, or rotating by 10°, or increasing the size by 50% – are going to change the information that was originally there, and often they will do it destructively, such that you cannot make the reverse change and get the original back.
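
(If you want to see this for yourself, here’s a minimal sketch using Pillow and NumPy – the filename is just a placeholder for any 8-bit image you have lying around. Rotating by +10° and then by −10° does not give you back the original pixels, and an auto-contrast stretch clips values that can never be recovered.)

```python
import numpy as np
from PIL import Image, ImageOps

# Placeholder filename -- use any image you like, converted to 8-bit grayscale.
original = Image.open("lunar_sample.png").convert("L")

# Rotate by +10 degrees and then by -10 degrees. The interpolation resamples
# the pixel grid both times, so the round trip does not restore the original.
round_trip = (original.rotate(10, resample=Image.BICUBIC)
                      .rotate(-10, resample=Image.BICUBIC))

diff = np.abs(np.asarray(original, dtype=float) -
              np.asarray(round_trip, dtype=float))
print("Mean |difference| after +10/-10 rotation:", diff.mean())

# An auto-contrast stretch (similar in spirit to "Auto Levels") remaps and
# clips the histogram; once values are clipped they cannot be recovered.
stretched = ImageOps.autocontrast(original, cutoff=1)
print("Gray levels before:", len(set(original.getdata())),
      "after:", len(set(stretched.getdata())))
```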

There was a bit of feedback this time and a discussion of some write-ins for the puzzler last time.

Kaguya (SELENE / かぐや) Photographs of the Moon, Specifically the Claimed Ziggurat Area


Introduction

At this point, Mike needs to answer the basic question of: What would it take to falsify your claim?

It’s a basic question that every person should always ask of anything, including their own beliefs. I’ve explained several times what it would take to falsify my claims that Mike’s ziggurat claim is false. Each time Mike has posted something new about it, he has generally ignored my previous rebuttal as “silly” or “twaddle” or some other such thing, either outright stating (at least once) or implying (several times) that my analysis would be easy to show was wrong, and yet he has not done so.

The Parry This Time

…[H]e’s implying that there are images from “non-NASA” missions which don’t showthe [sic] Ziggurat on them, and further, that he has seen them. How else could he claim they “don’t show the feature” if he hasn’t seen them? If true and these images exist, then he should produce them. The burden of proof is not on me to produce them, it’s on him. He’s the one claiming they exist, not me.

…If there are such “non-NASA” images, then produce them, otherwise shut-up about them and admit you BS’d your readers into thinnking [sic] they ever existed in the first place.

On a small part of this, I would actually agree: I did make the claim that there are non-NASA images that cover the site, and so the burden of evidence is upon me to show that.

In fact, it was the second of my main three points as to why I think that the ziggurat is not real: “2. Why other images of the same place taken by several different craft (including non-NASA ones), including images at almost 100x the original resolution of the Apollo photo, don’t show the feature.”

Though, clearly, I was NOT necessarily saying that non-NASA craft had imaged it at 100x the resolution of the original Apollo photo.

Of course, Mike misses the point that it is up to him to prove the INITIAL claim that the ziggurat is real, given that he found it on a video game forum.

Kaguya / SELENE / かぐや

Kaguya was the nickname of the Selenological and Engineering Explorer (SELENE) spacecraft, built, launched, and operated by the Japan Aerospace Exploration Agency (JAXA); it flew at the Moon from 2007 to 2009. It had several cameras on it, and it was the first to image the Apollo landing sites and actually show something from the missions, due to its high resolution of up to 10 meters per pixel (the actual pixel scale depended on orbit and instrument).

Using their online data search and retrieval system, you can (and I did) search for and find several images that cover the site. Among them are the following. Note: JAXA is picky, and you MUST go to their main page, agree to their terms, click Start and then you can view the links below.

To remind you, the Apollo photo has a pixel scale of ROUGHLY 65 meters per pixel at that location.

Example Image

I’ve downloaded those six, and contained within the obtuse file format (see this link for dealing with it) is a JPG thumbnail. Within the two files at 10 mpp, you also have an IMG file that can be read with ISIS.
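
(If you just want the embedded JPG thumbnail and don’t feel like fighting the container format, one generic trick – sketched below with a placeholder filename, and assuming the thumbnail is stored verbatim inside the file – is to scan the raw bytes for the JPEG start and end markers and dump everything in between.)

```python
# Placeholder filename -- substitute the actual downloaded product file.
raw = open("kaguya_product.bin", "rb").read()

start = raw.find(b"\xff\xd8\xff")            # JPEG start-of-image marker
end = raw.find(b"\xff\xd9", max(start, 0))   # JPEG end-of-image marker

if start != -1 and end != -1:
    with open("thumbnail.jpg", "wb") as out:
        out.write(raw[start:end + 2])        # +2 keeps the end marker itself
    print("Wrote", end + 2 - start, "bytes to thumbnail.jpg")
else:
    print("No embedded JPEG found")
```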

Here’s one of them, showing the target region at full resolution (again, the reason for the wavy edges is the geometric correction I’ve talked about many times before). Make sure you click to enlarge.

DTMTCO_03_05874S092E1744SC with “Ziggurat” Area at Full-Resolution (click to enlarge)

That’s nearly 7x finer pixel scale than the Apollo photo (10 meters per pixel versus roughly 65).

In my post from early yesterday morning, I gave you the following context image of NASA’s Lunar Reconnaissance Orbiter Camera’s WAC and NAC:

WAC and NAC of Alleged Lunar Ziggurat (click to enlarge)

So you know where the ziggurat is. Now we can also compare the WAC with the Kaguya image:

Alleged Ziggurat Area – WAC and Kaguya Comparison

The sun angles are all somewhat different, though I gave you several other images at other sun angles from SELENE above.

Where Do We Go Now?

I’ve put many of my cards on the table. I think I’ve made my points pretty well.

But at the same time, we have not progressed anywhere. Mike has not directly responded to any of my direct, specific points or critiques, nor to the areas where I explained that he was incorrect about some fundamental points of image processing and analysis (such as with noise), nor to my refutations of and answers to his questions and conspiracies (such as the last one, about the “Venetian Blinds” effect of all WAC images). He’s continued to maintain that the NASA images are fake, and then insisted that I supply images from other agencies. I think it’d be hard to argue that JAXA is under NASA control, or that JAXA painted the ziggurat area black, though I’m sure he’ll claim something like they cloned it out of the JAXA image. That would be hard to back up considering that, as far as it is possible to tell, the JAXA image matches the other images of the site, along with the other images from Kaguya.

At this point, though, we’re really again at the question of: What does it take to falsify your beliefs? We can’t move forward if the answer is “nothing,” nor if the response to these SELENE photos is simply that it’s another part of the conspiracy.

I understand that Mike feels the need to defend this considering that he’s put so much effort into it and made it a centerpiece of his book due out in October. But seriously – again – I think that to any objective observer I’ve proven my point and Mike has failed to prove his.

August 15, 2012

Understanding Lunar Reconnaissance Orbiter Wide-Angle Camera Images


Introduction

In an update to his blog post from yesterday, Mike displays a further lack of reading comprehension, plus an inability to understand images and image processing – something he claims to be better at than I am.

Another Conspiracy Claim

The crux of Mike’s complaint this time is that the WAC image I linked to has a “Venetian Blinds” effect going on. Why?

Well, Mike says he’s an engineer, so one would think that he would know of ways to look into this. I’ll help those of you who don’t have Mike’s expertise that he did not exercise: The camera has 7 filters mounted in strips directly over the detector, so each color band is recorded by a different section of the CCD; the narrow strips you see are just how the image was recorded. I happen to use command-line software to reconstruct the images, and it can be fairly obtuse. But 10 seconds of Google searching shows that there’s apparently easy-to-use freeware out there to do this all by yourself.

If you’d like to read more about it, here’s the official journal paper outlining the craft and its instruments. If you do a Google Scholar search, you can find a free PDF copy of it. Here’s a paper specifically on the camera, but I don’t see an obvious link for a free copy.

To quote from the 2007 paper:

The seven-band color capability of the WAC is provided by a color filter array mounted directly over the detector, providing different sections of the CCD with different filters acquiring data in the seven channels in a “pushframe” mode. Continuous coverage in any one color is provided by repeated imaging at a rate such that each of the narrow framelets of each color band overlap.

Every WAC image looks like that coming raw from the LROC website, though I also gave you a link to the global mosaics where you can look at the region yourself, on your own, without needing to assemble the WAC. Again, the coordinates are 174.34°E, -8.97°N.

So to recap: That’s how the WACs look, and it’s a simple matter to process them into a human-happy image. This has been in the literature at least since 2007, and if Mike had bothered to look, he’d have seen that EVERY WAC image looks that way and requires reassembly. Why don’t they do that automatically for public consumption? I have no idea. Possibly so that, if revised algorithms come out that do an incrementally better job, they don’t have to reprocess everything. Same reason the NACs are not properly georectified.
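
(To give a feel for what “reassembly” means, here’s a toy sketch of pushframe de-interleaving: assume the raw frame is a repeating stack of blocks, each block holding one narrow framelet per band, and stack every framelet of one band into a continuous strip. The framelet height, repeat count, and width below are illustrative assumptions, not real instrument values – only the 7 bands match the filter count – and real reassembly also has to handle framelet overlap and map projection, which is what ISIS and the freeware tools do for you.)

```python
import numpy as np

# Illustrative assumptions, NOT real LROC WAC parameters.
FRAMELET_ROWS = 16
N_BANDS = 7
N_REPEATS = 40
WIDTH = 704

# Stand-in for a raw "Venetian blinds" frame: blocks of 7 framelets repeat
# down the detector as the spacecraft moves along its orbit.
rng = np.random.default_rng(0)
raw = rng.integers(0, 256, size=(FRAMELET_ROWS * N_BANDS * N_REPEATS, WIDTH),
                   dtype=np.uint8)

def extract_band(raw_frame, band, framelet_rows=FRAMELET_ROWS, n_bands=N_BANDS):
    """Stack every framelet belonging to one band into a continuous strip."""
    block = framelet_rows * n_bands
    framelets = [raw_frame[i + band * framelet_rows:
                           i + (band + 1) * framelet_rows]
                 for i in range(0, raw_frame.shape[0], block)]
    return np.vstack(framelets)

band3 = extract_band(raw, band=3)
print("Single-band strip shape:", band3.shape)  # (FRAMELET_ROWS*N_REPEATS, WIDTH)
```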

Contrast that with Mike’s conspiratorial ideas:

Hmm. I guess maybe the guys at NASA don’t want anybody sniffing around this area, do they? This is just more proof that you can’t trust digital images NASA produces. They must have posted this temporarily while they’re busy painting over the Ziggurat.

So the truth is, neither of the images he’s posted show anything like what he’s claiming, and they sure as hell don’t show the Ziggurat area in sufficient resolution to make a judgement about it.

Do you know what “truth” means? I mean, really? Another conspiracy? Pretty poor one considering that anyone who looks can easily figure out how to assemble the WACs. And anyone who looks can find out why they look that way.

Another Look at the LROC Images

Here, I’ll do more of your work for you. Here’s a screenshot of part of the NAC frame, from the link I gave before, that covers part of the area where you claim the ziggurat to be. I’ve even superposed part of the footprint of your ziggurat over the image, and this is far from full-res. (Note: this is a bit different from the footprint I showed towards the end of the video; I was a bit off then, and a reexamination has led me to revise the approximate footprint. Figuring out exactly what’s going on between the oblique Apollo image and the rectified WAC/NAC images is a tad hard.)

NAC of Alleged Ziggurat Area, Approximate Ziggurat Footprint in Green (click to enlarge)

The footprint above is obviously unconstrained off the left side of the NAC. But, here’s a family portrait where I think I have it better figured out:

WAC and NAC of Alleged Lunar Ziggurat (Notice, None Present) (click to enlarge)

Let’s see, what else can I think of in what I’m showing that might give Mike a conspiratorial claim … okay, a few potentially trivial things that could set the conspiracy-minded off:

  • The WAC has wavy borders for reasons I discussed in my last podcast episode — basically, it’s a topography and spacecraft pointing correction.
  • The ziggurat footprint is a weird shape because the original Apollo shot is very oblique (a perspective view), and when it’s rectified to a lat/lon grid as if you’re looking straight down on it, it gets elongated and is not square. You can stretch the Bara/Hoagland image vertically by ~5x (and rotate it by 180°) to get an idea of what it would look like (see the sketch after this list).
  • North/South are flipped if you look at the images on the LROC website — again, that’s just how they’re sent back to Earth and automatically set up for the web interface, nothing conspiratorial as it’s clearly documented for anyone who looks.
  • On the ACT-REACT map that I linked to above, if you turn on NAC footprints, there does not appear to be one that covers the region occupied by the claimed ziggurat. This is because they are using an earlier set of footprints (this is a recent NAC), but if you use the search for the coordinates elsewhere on the site, you’ll find this one.
  • There are deep shadows because the sun was only 15° above the horizon when the image was taken. Since I’m not on the science/imaging team, I can’t say why it was taken at that sun angle, but it’s entirely possible that this just happened to be a region not yet covered by a NAC and they had a spare moment with the camera. That lit part in the center of the NAC that I show is the left half of the claimed ziggurat (remember, it’s rotated 180° in Mike’s version, so north is pointing down in his).
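
(If you want to try the stretch-and-flip approximation from the second bullet yourself, here’s a minimal sketch with Pillow. The filename and the ~5x factor are illustrative, and a proper job would be a full rectification rather than a simple resize.)

```python
from PIL import Image

# Placeholder filename: a crop of the oblique Apollo AS11-38-5564 area.
oblique = Image.open("apollo_ziggurat_crop.png")

# Stretch vertically by ~5x and rotate 180 degrees to roughly mimic an
# overhead, north-up view. The 5x factor is only a rough eyeball estimate.
w, h = oblique.size
approx_overhead = (oblique.resize((w, h * 5), resample=Image.BICUBIC)
                          .rotate(180))

approx_overhead.save("approx_overhead_view.png")
print("Saved a", approx_overhead.size, "approximation of the overhead view")
```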

Final Thoughts … For Now

That’s about all I can think of, though I’m certain that Mike will find something else, or just claim I’m lying and these images don’t show what I claim they show, or that the images he claimed were mythical now number two but I need to find still others. I guess we’ll see.

Oh, and it might be worth recapping at this point: This was never originally about Mike Bara. This was about a claim made by Richard Hoagland about an image he had, which I then did a short analysis of and showed was likely hoaxed by someone. It’s turned into something with Mike because he has chosen to vehemently defend it, though his defense has consisted of name-calling and conspiracies.

