Exposing PseudoAstronomy

August 28, 2012

Dynamic Range and Shadows


Introduction

Part three of four posts in response to Michael Bara’s five-part post that allegedly destroys my arguments that the ziggurat on the moon is not real. Next post is already written (mostly) and will come out shortly, wrapping things up.

Dynamic Range

I really think I’ve covered this enough by this point, but I’ll do it briefly again.

Below is the “original” ziggurat image that Mike has linked to. Below that is a histogram of its pixel values. Note that this looks slightly different from the histogram Photoshop will show. That’s because Photoshop fakes it a teensy bit. This histogram was created using very rigorous data analysis software (Igor Pro) and shows a few spikes and a few gaps in the greyscale coverage:

Original Lunar Ziggurat Image from Call of Duty Zombies Forum


Histogram of Pixel Values in Original Ziggurat Image

The dynamic range available for this image is 8-bit, or 0 through 2^8-1, or 256 shades of grey (or 254 plus black plus white — semantics). The actual dynamic range the image covers is less than this: it spans only 12 through 169, or 158 distinct shades of grey — just a little over 7-bit.
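If you want to check numbers like these yourself, here is a minimal sketch in Python (it assumes NumPy and Pillow are installed, and the filename is just a placeholder for whichever version of the image you saved) that reports the span of pixel values and builds the same kind of histogram:

```python
import numpy as np
from PIL import Image

# "ziggurat.jpg" is a placeholder filename, not the actual file anyone posted.
img = np.asarray(Image.open("ziggurat.jpg").convert("L"))  # force 8-bit greyscale

lo, hi = int(img.min()), int(img.max())
print(f"pixel values span {lo} through {hi} ({hi - lo + 1} of 256 possible shades)")

# Histogram over the full 8-bit range: empty bins are the "gaps" and
# unusually tall bins are the "spikes" discussed below.
counts, _ = np.histogram(img, bins=256, range=(0, 256))
print("empty shades within the used range:", int(np.sum(counts[lo:hi + 1] == 0)))
```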

Compare that with the NASA image (whether you think the NASA image has been tampered with or not, that’s unimportant for this explanation), shown below. Its histogram spans values from 0 through 255, showing that it takes up the entire 8-bit range.

"Ziggurat" Area in NASA Photo AS11-38-5564

“Ziggurat” Area in NASA Photo AS11-38-5564


Histogram of Pixel Values in Original NASA Image of Ziggurat Location

The immediate implication is that the ziggurat version has LOST roughly half of its information, its dynamic range. Or, if you’re of the conspiracy mindset, then the NASA version has been stretched to give it 2x the range.

Another thing we can look at is those spikes in the dark end and the gaps in the bright end. I was honestly surprised that these were present in the NASA one, because what this shows is that the curves (or levels) have been adjusted (and I say that with full realization of its ability to be quote-mined). You get the spikes when you compress a wide range of shades into a narrower range. Because pixels must have an integer (whole number) value, rounding effects mean that some shades end up with more pixels than others.

Similarly, the bright end has been expanded. This means the opposite – you had a narrow range of shades, and those were re-mapped to a wider range. Again, due to rounding, you can get some values with no pixels in them.
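Here is a toy demonstration of both effects. It assumes nothing about the actual adjustment applied to either image; it only shows what integer rounding does to a histogram when a range of shades is compressed or stretched:

```python
import numpy as np

rng = np.random.default_rng(0)
shades = rng.integers(0, 256, size=500_000)  # flat, fully populated histogram

# Compress the full 0-255 range into 0-157: two input shades often round to
# the same output shade, so some bins pile up roughly twice as high (spikes).
compressed = np.round(shades * (157 / 255)).astype(int)
counts = np.bincount(compressed, minlength=158)
print("after compression, tallest vs. shortest bin:", counts.max(), counts.min())

# Stretch a narrow 0-157 range back out to 0-255: rounding skips some output
# shades entirely (gaps).
stretched = np.round(shades[shades <= 157] * (255 / 157)).astype(int)
counts = np.bincount(stretched, minlength=256)
print("after stretching, empty output shades:", int(np.sum(counts == 0)))
```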

This can be done manually in software, or it can be done automatically. Given their spacing, it looks like a relatively basic adjustment was made rather than any more complicated mapping, for both the Call of Duty Zombies image with the ziggurat and the NASA one.

The fact that BOTH the ziggurat one and the NASA one have these gaps and spikes is evidence that both have been adjusted brightness-wise in software. But, taken with the noise in the ziggurat one, the smaller dynamic range, and the reduced detail, these all combine to make the case for the ziggurat version being a later generation image that’s been modified more than the NASA one (see previous post on noise and detail — this section was originally written for that post but I decided to move it to this one).

Dark Pixels, Shadow, and Light

What is also readily apparent in the NASA version is that there are many more black pixels in the region of interest. This could mean several very non-conspiracy things (as opposed to the “only” answer being that NASA took a black paintbrush to it).

One is what I have stated before and I think is a likely contributor: the image was put through automatic processing, either during or after scanning, before being placed online. As a default in most scanning software, a histogram of the pixel values is created, the darkest 0.1% of pixels are set to shade 0, and the brightest 0.1% are set to shade 255. Sometimes, for some reason, this default is set to 1% instead, and it is usually adjustable manually.
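As a rough sketch of what that kind of automatic clip does (the exact algorithm varies from one scanner driver to the next, so treat this as the general idea rather than anyone’s actual code):

```python
import numpy as np

def auto_clip(img, clip_percent=0.1):
    """Set the darkest clip_percent of pixels to 0, the brightest to 255,
    and stretch everything in between linearly."""
    lo = np.percentile(img, clip_percent)
    hi = np.percentile(img, 100 - clip_percent)
    out = (img.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```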

Another contributor, and I think the most likely one, is that, as I’ve said before, shadows on the moon are very dark. A rough back-of-the-envelope calculation is that earthshine, the only “direct” light into some sun-shadowed regions on the near side, is around 1000x fainter than sunlight would be. On the far side – and these photos are from the far side – there is no earthshine to contribute.

Which means the only other way to get light into the shadowed region would be scattering from the lunar surface itself. Mike misreads several things and calls me out where I admitted to making a mistake in my first video (Mike, how many mistakes have you made in this discussion? I’ve called you out on two very obvious ones in previous posts, and I call you out on another, below). Yes, you can get scattered light onto objects that are in shadow. If you have a small object casting a small shadow (such as a lunar module), then you have a very large surface surrounding it that will scatter a relatively large amount of light into it. That’s why the Apollo astronauts are lit even when they are in the shadow of an object.

However, if you have a very large object – such as a 3-km-high crater rim – that casts a shadow – such as into the crater – then there is much less surrounding surface available to scatter light into the shadowed region. Also, remember that the moon reflects (on average) only about 10% of the light it receives*. So already any lunar surface that’s lit only by scattered light would be 10x fainter than the sun-lit part, and that’s assuming that ALL light scattered off the sun-lit lunar surface scatters into the shadowed parts to be reflected back into the camera lens, as opposed to the vast majority of it that just gets scattered into space.

*As opposed to Mike’s claim: “Since the lunar surface is made mostly of glass, titanium and aluminum, it tends to be very highly reflective.” Um, no (source 1, source 2).
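To put that rough argument in symbols (this is purely a restatement of the paragraph above, where A ≈ 0.1 is the lunar albedo and f ≤ 1 is the unknown fraction of the light scattered off the sun-lit surface that actually reaches the shadowed region):

$$\frac{B_{\mathrm{shadow}}}{B_{\mathrm{sunlit}}} \;\lesssim\; A\,f \;\approx\; 0.1\,f \;\le\; 0.1$$

Setting f = 1 (every scattered photon ending up in the shadow) reproduces the 10x-fainter upper limit above; for a shadow cast by a 3-km crater rim, f is much smaller than 1.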

Now, yes, there will still be some light scattered into the shadowed region, but it will be very little compared with the shadow of a small object, and even less when compared with the sun-lit surrounding surface. For example, let’s look at AS11-38-5606:

Apollo Image AS11-38-5606

This image was taken at a low sun angle, and there are a lot of shadows being cast. And look! They’re all very very black. The photographic exposure would need to be much longer in order to capture any of the minuscule amount of light scattered into the shadowed regions that were then scattered into the camera.

Now, before we go back to the ziggurat, let’s look at another part of this claim. Mike states: “I have seen hundreds, if not thousands, of lunar images where the shadows are far from “pitch-black (or almost pitch-black).””

In support of this, Mike points to images such as AS11-44-6609:

NASA Apollo Photo AS11-44-6609

If you go to the full resolution version, you do see that the shadowed regions are not pitch black! WTF is going on!?

First, if you check the levels in photoshop, the 0.1% clip has either already been applied or it was never relevant to this image. So this does not falsify my previous statement of that being a possibility for the black shadows in the “ziggurat” one.

Second, let’s look at a few photos later, AS11-44-6612:

NASA Apollo Photo AS11-44-6612

See that big crater up near the top? That’s the same one that’s near the middle-right in #-6609. Notice that instead of sitting at a greyscale value of around 25%, that very same shadow, photographed just a few seconds or minutes later but at a different angle and through a different part of the lens, has decreased in brightness by more than half. Meanwhile, shadows that are in roughly the same position of the frame (as in middle-right versus upper-middle) have a brightness similar to what that shadow had in #-6609.

Also, look at the black space above the lunar surface (at the right of the frame, unless you’ve rotated it). The sky near the top and bottom sits at roughly 5% brightness; the part near the middle is around 13%. That makes the middle 2-3x as bright, when space should be completely dark in this kind of exposure under ideal optics.

If you’re a photographer, you probably know where I’m going with this: The simplest explanation is that this is either a lens flare from shooting in the general direction of the sun, and/or this is grime on the lens causing some scattering. Less probable but still possible would be a light leak.

And, a closer examination of the shadowed areas does show some very, very faint detail that you can bring out, but only towards the middle of the image where that overall glow is.

Meanwhile, if you look through, say, the Apollo 11 image catalog and look at the B&W images, the shadows in pretty much every orbital photo are completely black. The shadows in the color ones are not.

As a photographer, here is what I consider the most likely explanation for AS11-44-6609 and images like it, where Mike points to shadows that appear lit:

  1. Original Photography:
    • Image was taken in the general direction of the sun so that glare was present.
    • And/Or, there was dirt on the lens or on the window through which the astronauts were shooting.
    • This caused a more brightly lit part of the image to be in a given location, supported by other images on the roll that show the same brightness in the same location of the frame rather than the same geographic location on the moon.
    • Some scattered light from the lunar surface, into the shadowed regions, off the shadowed regions, into the camera, was recorded.
  2. Image Scanning:
    • Negative or print was scanned.
    • Auto software does a 0.1% bright/dark clip, making the darkest parts black and brightest parts white. This image shows that effect in its histogram.
    • This causes shadows at the periphery to be black and show no detail.
    • Since the center is brighter, the clip has no real effect on it, and the very faint details from the scattered light remain visible.

Contrast that with AS11-38-5564 (the ziggurat one), which has even illumination throughout. A simple levels clip would eliminate all or almost all detail in the shadowed regions. And/or, the original exposure was somewhat too short to record any scattered light. And/or the film used was not sensitive enough, a possibility bolstered by what I noted above: orbital B&W photography from the mission shows black shadows, while orbital color shows a teensy bit of detail in some of the shadows.

In my opinion, that is a much more likely explanation given the appearance of the other photos in the Apollo magazines than what Mike claims, that NASA painted over it.

Which, after long last, brings us back to the ziggurat. Even in Mike’s exemplar, the stuff in the brightest shadow is BARELY visible, much less so than the wall of his ziggurat. I suppose if Mike wants to claim that the ziggurat walls are 100% reflective, plus someone has done a bleep-load of enhancement in the area, then sure, he can come up with a way for the walls to be lit even when they are in shadow.

Do I think that’s the most likely explanation, especially taken in light of everything else? No.

Final Thoughts on This Part

One more part left in this series, and by this point I’ve really addressed the main, relevant points in Mike’s five-part series.

Far from “destroying” my arguments, I think that, at the very most, he’s raised some potential doubt about one or two small parts of my argument. Taken individually, and only if one is conspiracy-minded and already believes in ancient artifacts on the moon, those doubts could be used to make it look like the ziggurat is real.

However, taken as a whole, and approached not with a conspiratorial mindset but with one in which you must provide extraordinary evidence for your extraordinary claim, and show that the null hypothesis is rejected by a preponderance of indisputable evidence, the conclusion is that the ziggurat is not real.


August 24, 2012

Let’s Talk About Image Noise and Detail


Introduction

Part 2 of N in my response to Mike Bara’s 5-part post on the lunar ziggurat stuff.

I’ve talked about these things before a couple times, including in my last podcast episode, but clearly some did not understand it and some did not clearly read what I stated. So let’s go through this very carefully.

These are important concepts, applicable to a wide variety of situations – not only in identifying pseudoscience, but also in understanding how digital images work, and the likelihood that you, currently reading this, have a digital camera is pretty high.

Image Noise, Gaussian

I’ll quote first from a previous podcast episode:

All photographs have an inherent level of noise because of very basic laws of thermodynamics — in other words, the fact that the atoms and molecules are moving around means that you don’t know exactly what data recorded is real. The colder you can get your detector, the less noise there will be, which is why astronomers will sometimes cool their CCDs with liquid nitrogen or even liquid helium.

That said, I haven’t really explained what noise is, and I’m going to do so again from the digital perspective. There are two sources of noise. The first is what I just mentioned, where the atoms and electrons moving around will sometimes be recorded as a photon when there really wasn’t one. The cooler the detector, the less they’ll move around and so the less they’ll be detected. This is purely random, and so it will appear in some pixels more than others and so you don’t know what’s really going on.

The other kind of noise is purely statistical. The recording of photons by digital detectors is a statistical process, and it is governed by what we call “Poisson Statistics.” That means that there is an inherent, underlying uncertainty where you don’t know how many photons hit that pixel even though you have a real number that was recorded. The uncertainty is the square-root of the number that was recorded.

… What’s the effect of noise when you don’t have a lot of light recorded? Well, the vast majority of you out there listening to this probably already know because you’ve taken those low-light photos that turn out like crap. They’re fuzzy, the color probably looks like it has tiny dots of red or green or blue all over it, and there’s little dynamic range. That’s a noisy image because of the inherent uncertainty in the light hitting every pixel in your camera, but so that it wasn’t completely dark, your camera multiplied all the light – the noise included – in order to make something visible.

With the idea of noise in mind, after an image is taken, there is only one way to scientifically reduce the noise without any guesswork based on a computer algorithm: Shrink it. When you bin the pixels, as in doing something like combining a 2×2 set of four pixels into one, you are effectively adding together the light that was there, averaging it, and so reducing the amount of noise by a factor of 2. …

Noise is random across the whole thing, and it makes it look grainy. A perfectly smooth, white surface could look like a technicolor dust storm if you photograph it under low light.

Now with diagrams!

Below is a 500 by 500 pixel image made of pure, random, Gaussian noise. I created the noise in software and gave it a mean of 128 (neutral grey in 8-bit space) and a standard deviation of 25, meaning that about 68% of the pixels will be within ±25 shades of 128, about 95% will be within ±50 shades, and about 99.7% will be within ±75 shades. Also included below is a histogram showing the number of pixels at each shade of grey. As you can see, it’s a lovely bell curve that we all know and love with a mean of 128 and standard deviation of 25 (actual standard deviation is 24.946, but that’s because we’re not using an infinite number of points).

500×500 Pixel Image of Gaussian Noise


Histogram of 500×500 Pixel Image of Gaussian Noise

Now, in the diagram below, I’ve binned everything 2×2. As in, it’s now 250 by 250 pixels. What happens to the noise?

250×250 Pixel Image of Gaussian Noise


Histogram of 250×250 Pixel Image of Gaussian Noise

The distribution of pixel values is still a bell curve, but it’s narrower. The mean is still 128. But, the width of the noise – the amount of noise – has decreased to 12.439 … very close to the theoretical decrease of 2x to 12.5.

Now, bin it 4×4:

125×125 Pixel Image of Gaussian Noise


Histogram of 125×125 Pixel Image of Gaussian Noise

The Gaussian distribution is narrower still; this time its width is 6.193, very close to the theoretical value from a 4x reduction, 6.25.
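If you want to reproduce this experiment yourself, here’s a quick sketch (not the exact code used to make the figures above, but the same idea):

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.normal(loc=128, scale=25, size=(500, 500))  # mean 128, std dev 25

def bin_image(img, n):
    """Average the image in n-by-n blocks (dimensions must be divisible by n)."""
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

for n in (1, 2, 4):
    binned = bin_image(noise, n)
    print(f"{n}x{n} binning: std = {binned.std():.3f} (theory: {25 / n:.2f})")
```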

When I select a 100 by 60 pixel region of shadow in the ziggurat image, the width of the noise is ±1.66 shades. Binning 2×2 drops it to 1.58, 3×3 drops it to 1.41, 4×4 to 1.33, 5×5 to 1.29, and 10×10 to 0.87.

So, that’s what random noise is, and that’s what happens when you decrease an image – you reduce the noise. This is an unambiguous and inalienable FACT.

Image Noise, “Salt & Pepper” and Texture

Another type of noise is simply defective pixels, or, in the analog days, defective film grains or cosmic rays hitting the film. These manifest as single, individual pixels scattered throughout the image that are either very bright or very dark relative to their surroundings.

A related kind of noise is from digitized printed photos, and this is a texture. If you’ve ever scanned in something like a 100-year-old photograph (or a poorly stored 10-year-old photograph), you’ve likely seen this kind of noise. In fact, Mike says that this is his working hypothesis as to why the shadowed regions aren’t one solid color now: Photo album residue. Um, even if that’s the case, this is still technically noise because it’s masking the signal.

Image Noise, Removing

As I’ve stated, reducing an image size is one way to reduce noise. It does, however, remove detail. The reason this whole thing got started was that Mike stated, quite directly: “What Mr. Robbins didn’t tell you is that a large chunk of the “noise” that appears in the image he “processed” was deliberately induced – by him. … In fact, anyone who knows anything about image enhancement knows that scaling/reducing an image induces more noise and reduces detail by design.” (emphasis his)

We’ll get to what detail is in the next section, but quite clearly and directly, Mike states that reducing an image in size creates noise. That statement is factually incorrect. In his latest post (part 2 of 5), he wants to know why I reduced the image size at all if it means reducing detail (which is talked about below). If he bothered to read in context, the reason was so that I could line up the ziggurat image with the NASA one to figure out exactly where it is. They weren’t at the same scale, so one had to be scaled relative to the other. It was easier to reduce the size of the smaller ziggurat image than increase the size of the much larger full image, so that’s what I did. It really doesn’t change much of anything.

Anyway, moving on … So, how do you remove noise without removing information that’s there? In reality, you cannot.

The method of reducing an image in size is one way, but clearly that will remove detail, and when you do this with a small image, you don’t necessarily have that detail to spare. Though as I’ve talked about before, astronomers will often use this method because it is the ONLY way to NOT introduce algorithm-generated information into the image.

Otherwise, there are several other methods that can be used to reduce the noise, but all of them will reduce the actual signal in the image to some extent. Depending on the exact algorithm and the exact kind of image you’re working with (as in, is it something like a forest versus clouds versus sand), different algorithms work better to preserve the original detail. But, you will always lose some of that detail.

One algorithm that’s easy to understand is called a “median” algorithm. This is an option in Photoshop, but it’s not the default “Reduce Noise” filter (I do not know and couldn’t easily find what the algorithm used by Photoshop is by default – it’s probably some proprietary version of a fancier algorithm). The median method takes a pixel and a window of pixels around it. Let’s just say 1 pixel around it to keep this simple.

So you have a pixel, and you have all the pixels that it touches, so you have 9 pixels in total. You then take the median value, which is the middle number of a sorted list. So if the pixels in your 3×3 block have values 105, 92, 73, 95, 255, 109, 103, 99, 107, then the median of those is 103 because that’s the middle number once you sort the list. You’d save that to the new version.

You would then move one pixel over in the original version and save the median of a 3×3 block with that one at the center to the new version. And so on.

Why median instead of average? Because hot pixels and dead pixels don’t affect it nearly as much. That pixel value of 255 is a hot pixel in that 3×3 block; it drags the average up to about 115, while the median, 103, is roughly 10% dimmer and essentially unaffected. If, say, the 109-valued pixel were also hot and read 255, the median would STILL be 103, but the average would now be about 132.
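Here’s a bare-bones version of that sliding 3×3 median, just to make the procedure concrete (scipy.ndimage.median_filter does the same thing far more efficiently; this is only a sketch):

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each pixel with the median of itself and its 8 neighbours."""
    padded = np.pad(img, 1, mode="edge")       # repeat edge pixels at the border
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + 3, x:x + 3]  # the pixel plus its 8 neighbours
            out[y, x] = np.median(window)      # middle value of the sorted 9
    return out

# The 3x3 block from the example above:
block = np.array([[105,  92,  73],
                  [ 95, 255, 109],
                  [103,  99, 107]])
print(np.median(block))  # 103.0 -- the hot pixel (255) barely matters
print(block.mean())      # ~115.3 -- a plain average gets dragged upward
```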

So that’s one method. The end result is that the outliers will be removed, and you’ve reduced the noise. Choosing a larger window reduces the noise more because you’re sampling a broader range of pixels from which to get a median (this is under the assumption that the number of hot and cold pixels is less than the number of good pixels).

But, in doing this, you are changing the information there, and every algorithm with which I’m familiar to remove noise will also remove some details. The details to go first are usually those small outliers that are real, like if you’re photographing a night scene and have some stars in your shot. Median noise reduction will remove those stars fairly effectively in addition to the noise. As I said, there are other algorithms that can be used depending on what exactly is in the image, but they will change the information that is there, and they will reduce detail by a measurable amount.

It should be noted that Mike’s default seems to be the Photoshop “Reduce Noise” filter. Here’s the result when he runs it on the image (© his blog), with the “original” for comparison first:

Original Lunar Ziggurat Image from Call of Duty Zombies Forum

AS11-38-5564, with Ziggurat, Noise Reduction by Mike Bara

Ignoring the contrast enhancement, some of the noise is reduced a bit, but so is some of the detail (something he admits: “It’s a bit blurry”). Once you lose that detail, you cannot get it back. Well, unless you go to a previous version.

Detail, Resolution, and Pixel Scale

Noise is not at all related to detail except in its ability to obfuscate that detail. Detail is effectively the same as resolution, where, according to my handy built-in Mac dictionary, resolution is defined for images as: “the smallest interval measurable by a scientific (esp. optical) instrument; the resolving power. The degree of detail visible in a photographic or television image.”

Pixel scale is similar and related — it is the length in the real world that a pixel spans. So if I take a photograph of my room, and I take another photograph with the same camera of the Grand Canyon, the length that each pixel covers in the first is going to be much smaller than the length that each pixel covers in the second. The pixel scale might be, say, 1 cm/px (~1/2 inch) for the photo of my room, while it might be around 10 m/px (~30 ft) for the photo of the Grand Canyon.
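As a purely illustrative sketch of pixel scale (the pixel pitch and focal length below are made-up numbers, not those of any particular camera), for a simple pinhole-style model the ground length covered by one pixel is just the subject distance times the pixel’s angular size:

```python
pixel_pitch_m = 5e-6     # assumed physical size of one detector pixel (5 micrometres)
focal_length_m = 0.05    # assumed 50 mm lens

def pixel_scale(distance_m):
    """Length in the real world spanned by a single pixel at this distance."""
    return distance_m * pixel_pitch_m / focal_length_m

print(pixel_scale(3))       # a few metres away: ~0.0003 m/px (sub-millimetre scale)
print(pixel_scale(50_000))  # tens of kilometres away: ~5 m/px
```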

Don’t see the difference? It’s really subtle. Here’s a comment I got from an anonymous reviewer (whose identity I figured out) of a paper I wrote last year; it explains the distinction in a way only an older, curmudgeonly scientist can:

Citing “resolution” in m/pixel is like citing distance in km/s. Scale = length/pixel; resolution = length, as is a function of several parameters in addition to sampling scale. Nearly everyone in the planetary community gets this wrong, which makes the terrestrial remote sensing community think we’re idiots.

So, my point in going through these definitions, besides getting them clearly out there, is that if you are reducing an image in size to reduce the noise, you are obviously also reducing the detail, resolution, and pixel scale. Or is it increasing the pixel scale ’cause your pixels now cover a larger area? Whatever the proper direction is, you get the idea, and to suggest that I implied or stated otherwise is wrong.

Another thing we can do in this section is compare the detail of the ziggurat image with the NASA version, which returns to one of my original points that the NASA version shows more detail.

This is not something that Mike is disputing. But to him, it’s just evidence of a conspiracy. He simply dismisses this by stating, “NASA has tons of specialized software and high end computing resources that could easily do many of [these things like adding detail].” As I’ve stated before, if Michael simply wants to go the “this is a conspiracy and no amount of evidence you give will convince me otherwise,” then we can be done with this – something I’ll address in another post shortly.

Otherwise, the simplest explanation for this is that the ziggurat version is a later-generation copy that has been through several rounds of duplication. This is not a known fact; rather, it is an educated opinion based on the available evidence, uninfluenced by the conspiracy mindset that Mike and Richard have.

Final Thoughts on These Points

Throughout Part 2 of his five-part rebuttal, Mike accuses me of making straw man arguments (though he doesn’t use that term), while doing that exact thing to me — making straw men of what I said and arguing against them. I never stated that reducing an image makes it better overall, I stated that the noise will decrease and so the noise profile will be better (as in less). Whether interpolation “enhances” detail is a topic for something else and is not at all directly related to the veracity of this lunar ziggurat, so I’m not addressing it here.

Part 3 to come on dynamic range, shadows, and internal reflections. At the moment, a part 4 is planned to be the last part and it’s going to examine language, tone, mentality, funding, and the overarching conspiracy mindset. It might be my last post on the subject, as well.

 

P.S. Not that this is any evidence for anything whatsoever, but I thought I’d throw out there the fact that even the people on the conspiracy website “Above Top Secret” say this is a hoax by someone. Again, this is evidence of nothing, really, but I thought it a tiny intriguing twist at least worth mentioning. Kinda like the fact that even though almost all UFOlogists think that the Billy Meier story is a hoax, Michael Horn keeps at it.
