Exposing PseudoAstronomy

August 24, 2012

Let’s Talk About Image Noise and Detail


Introduction

Part 2 of N in my response to Mike Bara’s 5-part post on the lunar ziggurat stuff.

I’ve talked about these things before a couple times, including in my last podcast episode, but clearly some did not understand it, and some did not carefully read what I stated. So let’s go through this very carefully.

These are important concepts with a wide variety of applications – not only in identifying pseudoscience, but also in understanding how digital images work; and the odds that you, currently reading this, have a digital camera are pretty high.

Image Noise, Gaussian

I’ll quote first from a previous podcast episode:

All photographs have an inherent level of noise because of very basic laws of thermodynamics — in other words, the fact that the atoms and molecules are moving around means that you don’t know exactly what data recorded is real. The colder you can get your detector, the less noise there will be, which is why astronomers will sometimes cool their CCDs with liquid nitrogen or even liquid helium.

That said, I haven’t really explained what noise is, and I’m going to do so again from the digital perspective. There are two sources of noise. The first is what I just mentioned, where the atoms and electrons moving around will sometimes be recorded as a photon when there really wasn’t one. The cooler the detector, the less they’ll move around and so the less they’ll be detected. This is purely random, and so it will appear in some pixels more than others and so you don’t know what’s really going on.

The other kind of noise is purely statistical. The recording of photons by digital detectors is a statistical process, and it is governed by what we call “Poisson Statistics.” That means that there is an inherent, underlying uncertainty where you don’t know how many photons hit that pixel even though you have a real number that was recorded. The uncertainty is the square-root of the number that was recorded.

… What’s the effect of noise when you don’t have a lot of light recorded? Well, the vast majority of you out there listening to this probably already know because you’ve taken those low-light photos that turn out like crap. They’re fuzzy, the color probably looks like it has tiny dots of red or green or blue all over it, and there’s little dynamic range. That’s a noisy image because of the inherent uncertainty in the light hitting every pixel in your camera, but so that it wasn’t completely dark, your camera multiplied all the light – the noise included – in order to make something visible.

With the idea of noise in mind, after an image is taken, there is only one way to scientifically reduce the noise without any guesswork based on a computer algorithm: Shrink it. When you bin the pixels, as in doing something like combining a 2×2 set of four pixels into one, you are effectively adding together the light that was there, averaging it, and so reducing the amount of noise by a factor of 2. …

Noise is random across the whole thing, and it makes it look grainy. A perfectly smooth, white surface could look like a technicolor dust storm if you photograph it under low light.
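To put a number on the Poisson part of this, here’s a minimal sketch (in Python with NumPy – my choice for illustration here, not anything from the original analysis) that simulates photon counting at a low and a high light level. Because the noise is √N, the *relative* noise shrinks as 1/√N, which is exactly why low-light photos look like crap:

```python
import numpy as np

rng = np.random.default_rng(42)

# Photon arrivals follow Poisson statistics: the uncertainty is sqrt(N), so
# the relative noise (noise / signal) falls off as 1/sqrt(N) as light increases.
for mean_photons in (10, 10_000):
    counts = rng.poisson(mean_photons, size=100_000)  # many simulated pixels
    measured = counts.std() / counts.mean()
    theory = 1 / np.sqrt(mean_photons)
    print(f"~{mean_photons} photons/pixel: relative noise {measured:.3f} "
          f"(theory {theory:.3f})")
```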

Now with diagrams!

Below is a 500 by 500 pixel image made of pure, random, Gaussian noise. I created the noise in software and gave it a mean of 128 (neutral grey in 8-bit space) and a standard deviation of 25, meaning that about 68% of the pixels will be within ±25 shades of 128, about 95% will be within ±50 shades, and about 99.7% will be within ±75 shades. Also included below is a histogram showing the number of pixels at each shade of grey. As you can see, it’s a lovely bell curve that we all know and love with a mean of 128 and standard deviation of 25 (actual standard deviation is 24.946, but that’s because we’re not using an infinite number of points).
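If you want to play along at home, here’s a minimal sketch of how one could generate an equivalent image (I made mine in other software; this Python/NumPy version is just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# 500x500 pixels of Gaussian noise: mean 128 (neutral grey in 8-bit space),
# standard deviation 25, clipped to the valid 0-255 range.
noise = rng.normal(loc=128, scale=25, size=(500, 500))
image = np.clip(noise, 0, 255).astype(np.uint8)

print(image.mean(), image.std())  # ~128 and ~25 (not exact: finite sample)
hist, edges = np.histogram(image, bins=256, range=(0, 256))  # the bell curve
```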

500×500 Pixel Image of Gaussian Noise


Histogram of 500×500 Pixel Image of Gaussian Noise

Now, in the diagram below, I’ve binned everything 2×2. As in, it’s now 250 by 250 pixels. What happens to the noise?

250×250 Pixel Image of Gaussian Noise


Histogram of 250×250 Pixel Image of Gaussian Noise

The distribution of pixel values is still a bell curve, but it’s narrower. The mean is still 128, but the width of the noise – the amount of noise – has decreased to 12.439, very close to the theoretical 2× decrease to 12.5.
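Here’s a sketch of the binning operation itself, using NumPy’s reshape-and-average trick (it assumes the image dimensions divide evenly by the bin factor, and it is not the exact code I used):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(128, 25, size=(500, 500))  # the noise image from above

def bin_image(img, n):
    """Average each n-by-n block of pixels down to a single pixel.

    Assumes both image dimensions are multiples of n.
    """
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

# 2x2 binning averages 4 pixels into each output pixel, so the noise
# should drop by a factor of sqrt(4) = 2: from ~25 down to ~12.5.
binned = bin_image(image, 2)
print(binned.shape, binned.std())  # (250, 250), ~12.5
```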

Now, bin it 4×4:

125×125 Pixel Image of Gaussian Noise


Histogram of 125×125 Pixel Image of Gaussian Noise

The Gaussian distribution is narrower still; this time its width is 6.193, very close to the theoretical value of a 4× reduction, 25/4 = 6.25.

When I select a 100 by 60 pixel region of shadow in the ziggurat image, the width of the noise is ±1.66 shades. Binning 2×2 drops it to 1.58, 3×3 drops it to 1.41, 4×4 to 1.33, 5×5 to 1.29, and 10×10 to 0.87.
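Measuring that is just a matter of cropping a region, taking its standard deviation, and re-measuring after binning. A usage sketch with the bin_image function above (the filename and region coordinates here are hypothetical, not the actual ones I used):

```python
from PIL import Image  # assuming Pillow is available; the filename is made up
import numpy as np

full = np.asarray(Image.open("ziggurat.jpg").convert("L"), dtype=float)

region = full[200:260, 300:400]  # a hypothetical 60 x 100 pixel shadow patch
print("unbinned:", region.std())
for n in (2, 4, 5, 10):  # 3x3 skipped here: 100 isn't divisible by 3
    print(f"{n}x{n} binned:", bin_image(region, n).std())
```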

So, that’s what random noise is, and that’s what happens when you reduce an image in size – you reduce the noise. This is an unambiguous and inalienable FACT.

Image Noise, “Salt & Pepper” and Texture

Another type of noise is simply defective pixels, or, in the analog days, defective film grains or cosmic rays hitting the film. These manifest as single, individual pixels scattered throughout the image that are either very bright or very dark relative to their surroundings.

A related kind of noise comes from digitized printed photos, and it manifests as a texture. If you’ve ever scanned something like a 100-year-old photograph (or a poorly stored 10-year-old photograph), you’ve likely seen this kind of noise. In fact, Mike says that this is now his working hypothesis for why the shadowed regions aren’t one solid color: photo album residue. Um, even if that’s the case, this is still technically noise because it’s masking the signal.

Image Noise, Removing

As I’ve stated, reducing an image size is one way to reduce noise. It does, however, remove detail. The reason this whole thing got started was that Mike stated, quite directly: “What Mr. Robbins didn’t tell you is that a large chunk of the “noise” that appears in the image he “processed” was deliberately induced – by him. … In fact, anyone who knows anything about image enhancement knows that scaling/reducing an image induces more noise and reduces detail by design.” (emphasis his)

We’ll get to what detail is in the next section, but quite clearly and directly, Mike states that reducing an image in size creates noise. That statement is factually incorrect. In his latest post (part 2 of 5), he wants to know why I reduced the image size at all if it means reducing detail (which is discussed below). Had he bothered to read in context, he’d have seen the reason: I needed to line up the ziggurat image with the NASA one to figure out exactly where the ziggurat is. The two weren’t at the same scale, so one had to be scaled relative to the other. It was easier to reduce the size of the smaller ziggurat image than to enlarge the much larger full image, so that’s what I did. It really doesn’t change much of anything.

Anyway, moving on … So, how do you remove noise without removing information that’s there? In reality, you cannot.

The method of reducing an image in size is one way, but clearly that will remove detail, and when you do this with a small image, you don’t necessarily have that detail to spare. Though as I’ve talked about before, astronomers will often use this method because it is the ONLY way to NOT introduce algorithm-generated information into the image.

Otherwise, there are several other methods that can be used to reduce the noise, but all of them will reduce the actual signal in the image to some extent. Depending on the exact algorithm and the exact kind of image you’re working with (as in, is it something like a forest versus clouds versus sand), different algorithms work better to preserve the original detail. But, you will always lose some of that detail.

One algorithm that’s easy to understand is called a “median” algorithm. This is an option in Photoshop, but it’s not the default “Reduce Noise” filter (I don’t know, and couldn’t easily find, what algorithm Photoshop uses by default – it’s probably some proprietary version of a fancier algorithm). The median method takes a pixel and a window of pixels around it. Let’s just say 1 pixel around it to keep this simple.

So you have a pixel, and you have all the pixels that it touches, so you have 9 pixels in total. You then take the median value, which is the middle number of a sorted list. So if the pixels in your 3×3 block have values 105, 92, 73, 95, 255, 109, 103, 99, 107, then the median of those is 103 because that’s the middle number once you sort the list. You’d save that to the new version.

You would then move one pixel over in the original version and save the median of a 3×3 block with that one at the center to the new version. And so on.

Why median instead of average? Because that way hot pixels and dead pixels don’t affect you nearly as much. That pixel value of 255 would be a hot pixel in that 3×3 block: it drags the average up to 115, whereas the median, 103, is about 10.5% dimmer. If, say, the 109-valued pixel were also hot and read 255, the median would STILL be 103, but the average would now be 132.
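To make the sliding-window procedure concrete, here’s a minimal sketch of a 3×3 median filter in Python/NumPy (real code would reach for something like scipy.ndimage.median_filter; this just spells out the idea), along with the hot-pixel arithmetic from above:

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each pixel with the median of its 3x3 neighborhood.

    Edge pixels are handled by replicating the border.
    """
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the 9 shifted copies of the image, then take the median
    # across them at every pixel position.
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)

# The 3x3 block from the example above, with its hot pixel (255):
block = np.array([[105,  92,  73],
                  [ 95, 255, 109],
                  [103,  99, 107]], dtype=float)
print(block.mean())                    # ~115.3 -- dragged up by the hot pixel
print(median_filter_3x3(block)[1, 1])  # 103.0 -- the hot pixel is rejected
```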

So that’s one method. The end result is that the outliers will be removed, and you’ve reduced the noise. Choosing a larger window reduces the noise more because you’re sampling a broader range of pixels from which to get a median (this is under the assumption that the number of hot and cold pixels is less than the number of good pixels).

But, in doing this, you are changing the information there, and every algorithm with which I’m familiar to remove noise will also remove some details. The details to go first are usually those small outliers that are real, like if you’re photographing a night scene and have some stars in your shot. Median noise reduction will remove those stars fairly effectively in addition to the noise. As I said, there are other algorithms that can be used depending on what exactly is in the image, but they will change the information that is there, and they will reduce detail by a measurable amount.

It should be noted that Mike’s default seems to be the Photoshop “Reduce Noise” filter. Here’s the result when he runs it on the image (© his blog), with the “original” for comparison first:

Original Lunar Ziggurat Image from Call of Duty Zombies Forum

AS11-38-5564, with Ziggurat, Noise Reduction by Mike Bara

Ignoring the contrast enhancement, some of the noise is reduced a bit, but so is some of the detail – something he admits (“It’s a bit blurry”). Once you lose that detail, you cannot get it back. Well, unless you go back to a previous version.

Detail, Resolution, and Pixel Scale

Noise is not at all related to detail except in its ability to obfuscate that detail. Detail is effectively the same as resolution, where, according to my handy built-in Mac dictionary, resolution is defined for images as: “the smallest interval measurable by a scientific (esp. optical) instrument; the resolving power. The degree of detail visible in a photographic or television image.”

Pixel scale is similar and related — it is the length in the real world that a pixel spans. So if I take a photograph of my room, and I take another photograph with the same camera of the Grand Canyon, the length that each pixel covers in the first is going to be much smaller than the length that each pixel covers in the second. The pixel scale might be, say, 1 cm/px (~1/2 inch) for the photo of my room, while it might be around 10 m/px (~30 ft) for the photo of the Grand Canyon.
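In code, pixel scale is just a division; the numbers below are the rough, made-up ones from the paragraph above:

```python
def pixel_scale(scene_width, pixels_across):
    """Real-world length spanned by one pixel (same units as scene_width)."""
    return scene_width / pixels_across

print(pixel_scale(4.0, 400))        # ~4 m of wall across 400 px: 0.01 m/px = 1 cm/px
print(pixel_scale(40_000.0, 4000))  # ~40 km of canyon across 4000 px: 10 m/px
```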

Don’t see the difference? It’s really subtle. Here’s a comment I got from an anonymous reviewer (whose identity I figured out) of a paper I wrote last year that explains it in a way only an older, curmudgeonly scientist can:

Citing “resolution” in m/pixel is like citing distance in km/s. Scale = length/pixel; resolution = length, and is a function of several parameters in addition to sampling scale. Nearly everyone in the planetary community gets this wrong, which makes the terrestrial remote sensing community think we’re idiots.

So, my point in going through these definitions, besides getting them clearly out there, is that if you are reducing an image in size to reduce the noise, you are obviously also reducing the detail, resolution, and pixel scale. Or is it increasing the pixel scale ’cause your pixels now cover a larger area? Whatever the proper direction is, you get the idea, and to suggest that I implied or stated otherwise is wrong.

Another thing we can do in this section is compare the detail of the ziggurat image with the NASA version, which returns to one of my original points that the NASA version shows more detail.

This is not something that Mike is disputing. But to him, it’s just evidence of a conspiracy. He simply dismisses this by stating, “NASA has tons of specialized software and high end computing resources that could easily do many of [these things like adding detail].” As I’ve stated before, if Michael simply wants to go the “this is a conspiracy and no amount of evidence you give will convince me otherwise” route, then we can be done with this – something I’ll address in another post shortly.

Otherwise, the simplest explanation is that the ziggurat version is a later generation, having suffered several copyings. This is not a known fact; rather, it is an educated opinion based on the available evidence, one that’s not influenced by the conspiracy mindset that Mike and Richard have.

Final Thoughts on These Points

Throughout Part 2 of his five-part rebuttal, Mike accuses me of making straw man arguments (though he doesn’t use that term) while doing that exact thing to me — making straw men of what I said and arguing against them. I never stated that reducing an image makes it better overall; I stated that the noise will decrease and so the noise profile will be better (as in less). Whether interpolation “enhances” detail is a topic for something else and is not directly related to the veracity of this lunar ziggurat, so I’m not addressing it here.

Part 3 to come on dynamic range, shadows, and internal reflections. At the moment, a part 4 is planned to be the last part and it’s going to examine language, tone, mentality, funding, and the overarching conspiracy mindset. It might be my last post on the subject, as well.

 

P.S. Not that this is any evidence for anything whatsoever, but I thought I’d throw out there the fact that even the people on the conspiracy website “Above Top Secret” say this is a hoax by someone. Again, this is evidence of nothing, really, but I thought it a tiny intriguing twist at least worth mentioning. Kinda like the fact that even though almost all UFOlogists think that the Billy Meier story is a hoax, Michael Horn keeps at it.
