Exposing PseudoAstronomy

March 5, 2017

Podcast Episode 158: Getting Beyond the Photograph: Image Tricks with Dr. Tod Lauer


To peer beneath the
Photograph and uncover
What may be hidden!

Sorry for the delay, but I have an interview that’s over an hour this time on image processing. In past episodes, I have talked about how you can’t get any more information out of an image than what is in a single pixel. Dr. Tod Lauer is an astronomer who has worked on all kinds of telescopes and instrument data and has developed numerous image processing techniques over his career. In this episode, we discuss some of those and how to correctly – versus incorrectly – apply them to image data to get to the best representation of the original object, or what the image was trying to capture.

There are no additional segments in this episode, but the interview runs nearly 1hr 15min. This is also the episode for the second half of February. I’m very much hoping/trying to get the first half of March’s episode out before I leave on a trip on March 19. It will either be an interview on what’s a planet, or a normal episode on Apollo Hoax miscellaneous claims I never did an episode about.

R136 Star Cluster by the Hubble Space Telescope

December 31, 2016

Podcast Episode 154: Impact Crater Pseudoscience Mishmash


Impact cratering
Is neat, but crazies like to
Abuse the science.

To end 2016, we have some crater-related pseudoscience. This is an episode where I talked about three different claims related to impact craters and how two of them misuse and abuse impact craters as a way to make their brand of pseudoscience make sense, in their own minds. The third claim falls under the “bad headlines” category and I get to address the Gambler’s Fallacy.

I’m still experimenting with a new microphone setup, and you can hear the audio change tone noticeably part-way through. That’s when I moved my computer from off to the side – where I was talking into the side of the microphone – to more in front of me, where I was talking into the top of the microphone. I also have a new laptop and figured out that the clicking/crackling in some recent episodes happens when I stop recording and start again: for a few seconds, every fraction of a second, the computer records nothing for a much tinier fraction of a second. In this episode, I spent an extra half-hour editing all those out so there’s much less of it.

Artistic Rendering of Asteroid Impacting Earth

December 15, 2016

Podcast Episode 153: What Is Radiation?


“Radiation” is
As common in life as ’tis
In pseudoscience.

This is one of those basic science episodes where I tried to provide solid background to a typically misunderstood concept that is beloved by pseudoscientists: Radiation. I go through what radiation is and is not, different kinds of radiation, what it means to say that something is ionizing vs nonionizing, and the effects of thermal radiation. It’s a longer episode, clocking in at 51 minutes.

There are two additional short segments in this episode, the first being logical fallacies where I discussed the naturalistic fallacy, and the second being feedback where I finally addressed Graham’s feedback about the Catholic Church and a round vs flat planet.

“Caution: Radioactive” Sign

August 16, 2015

#NewHorizons #PlutoFlyby – The Pseudoscience Flows #10 — Crrow777 Thinks It’s ALL Fake


Introduction

I really don’t want to give this one much time. “Crrow777” as he is known on YouTube, or just “Crrow” in interviews, is (from what I can tell) rising somewhat in the conspiracy world for reasons that I don’t understand. Among other things, he thinks the moon (Earth’s moon) is a hologram.

I have listened to some of his material, and I have heard several of the interviews he has given. I think he believes what he is saying. I don’t know beyond that what his mental state may be.

For this and other reasons, not the least of which is that the claims he makes are insane, I don’t want to feed the birds beyond what I need to in order to quickly debunk his foray into Pluto and New Horizons.

I have seen two additional Pluto videos on YouTube of his that go beyond the first one he posted. I’m only going to focus on that first one: “Crow Images vs NASA Images – Pluto is Only at Disneyland.” His videos typically get on the order of 10,000 views. This one has nearly 100,000 because it was picked up by various news outlets who did want to give him more attention.

The Claim

It really boils down to this: Because he can get from Earth what he thinks are better images of Jupiter and Jupiter’s moons than what NASA was showing of Pluto from New Horizons several days before encounter, New Horizons is fake.

The Explanation: Very Basic, Middle School Math

He’s wrong.

First off, in his first video, he is fully focused on saying that Jupiter in his camera and telescope is better than Pluto from the LORRI instrument on New Horizons. In his second video, he commits the logical fallacy of Moving the Goalpost and claims that what he really was talking about was Jupiter’s moons, not Jupiter.

Let’s do some really basic math. Jupiter was near the opposite side of the sun from Earth in mid-July, meaning it was around 900,000,000 km from us. Pluto was very roughly 5,000,000,000 km from us, or around 5.5x farther.

Jupiter’s radius is about 71,000 km (on average). Pluto’s radius is around 1190 km. So Jupiter is around 60x bigger in size.

Take 60x bigger and 5.5x farther, and Pluto is going to look around 330x smaller than Jupiter from Earth.
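That arithmetic is simple enough to sketch in a few lines of Python, using the same rough mid-2015 figures quoted above (not precise ephemeris values):

```python
# Back-of-the-envelope angular-size comparison, using the rough
# figures from the text (not precise ephemeris values).
jupiter_radius_km = 71_000
pluto_radius_km = 1_190
jupiter_dist_km = 900_000_000     # Earth-Jupiter, mid-July (approx.)
pluto_dist_km = 5_000_000_000     # Earth-Pluto (approx.)

size_ratio = jupiter_radius_km / pluto_radius_km    # ~60x bigger
dist_ratio = pluto_dist_km / jupiter_dist_km        # ~5.5x farther

# Apparent (angular) size scales as physical size divided by distance,
# so the two ratios multiply:
apparent_ratio = size_ratio * dist_ratio
print(f"Jupiter appears ~{apparent_ratio:.0f}x larger than Pluto from Earth")
```

The small-angle approximation is fine here since both objects span tiny fractions of a degree.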

Okay, but what about from New Horizons? The first images that he complains about and said were an “insult to your intelligence” were from late May, when New Horizons was about 50,000,000 km away from Pluto, or about 18x closer than we were to Jupiter. Except, he wasn’t showing you LORRI images. He was showing you MVIC images, which have a much worse pixel scale.

It’s the second animation he shows, about 3:45 into the video, which is from LORRI from April, when New Horizons was about 110,000,000 km from Pluto, or 9x closer than we were to Jupiter.

So, simple math: Jupiter is 60x bigger, New Horizons was 9x closer, so Jupiter would STILL, if the optics were all the same, appear about 6.5x bigger from his back yard than Pluto from New Horizons.

Except, the optics are not the same. I don’t know the field of view of his specific telescope. The build of the telescope changes the field of view, as does the camera size. LORRI has a field of view of 0.3° (about 60% the size of Earth’s full moon). It also has a 1024×1024 pixel detector, or 1 megapixel.

Crrow777 looks like he was using a dSLR camera, which typically has around 20 megapixels. That means that his resolving power – the ability to see a certain number of pixels across a feature – is going to be around 4-5x that of LORRI (take the square-root of the number of pixels, which is area, to get length).

So, not only would Jupiter still be 6.5x bigger if the telescopes were the same, but due to the number of pixels in his camera, Jupiter will span about 30x more pixels across than Pluto as New Horizons sees it.
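Putting the whole comparison together as a sketch, using the text’s numbers (the 20-megapixel figure for his camera is the assumed, typical value mentioned above):

```python
import math

size_ratio = 60    # Jupiter vs. Pluto, physical size (from above)
dist_ratio = 9     # New Horizons-Pluto vs. Earth-Jupiter distance (April)

# Same optics: Jupiter from Earth still appears larger than Pluto from LORRI.
apparent_ratio = size_ratio / dist_ratio            # ~6.5x

# Resolving power scales with the *linear* pixel count, i.e., the square
# root of the megapixel count (megapixels measure area, not length).
lorri_mpx = 1.0    # LORRI's 1024x1024 detector
dslr_mpx = 20.0    # typical dSLR (an assumed, representative figure)
pixel_ratio = math.sqrt(dslr_mpx / lorri_mpx)       # ~4.5x

overall = apparent_ratio * pixel_ratio              # ~30x
print(f"Jupiter spans ~{overall:.0f}x more pixels in his images than "
      f"Pluto in LORRI's")
```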

Other Stuff

He also complains that he has city lights and an atmosphere to deal with. But, he’s using techniques which help get around that, which those LORRI images he was showing were not using.

He also (around 4:30 in the video) just starts to rant about the images being an insult to peoples’ intelligence. I think his basic misunderstandings are an insult to peoples’ intelligence.

He also complains (5 min) that these are “high resolution” from NASA. But as he defines “high resolution” – meaning you can “get down and resolve detail on these things” – then under his definition, which is different from how NASA was using the term, they aren’t.

Except they are. We could resolve features months out that we had never been able to resolve before. And days out, which are the ones he complains about at that time stamp, we were resolving surface features. It’s not “junk” (his term). Just because he doesn’t understand something doesn’t mean the incredibly hard work and dedication by hundreds of people was all fake.

Final Thoughts

Okay, I’ve gotten myself angry at this point. I’ve said my bit, but I’ll say it again:

Just because you don’t know basic math, basic optics, and basic technology doesn’t mean that everything is a conspiracy. Instead of everyone lying, maybe it’s YOU who needs to actually do a little extra work and learn something instead of acting crazy.

Post Script

I took a look at his second video. Nothing really new in it except probably 80% of it is ranting and raving about The Masons and that nobody should trust The Government. One of the very few new things in it was ranting that there were better than 1 Mpx cameras available at the time New Horizons was built. This ignores two things: You have to go to the initial proposal – not when the craft was built and certainly not launched – and you have to look at what is tried and true technology that is capable of surviving the much harsher environment of space (temperature extremes and radiation). You can’t just go to the local camera store, buy a camera off the shelf, and fly it to Pluto. Ranting about should’ve-been-able-to-do-that shows you know absolutely nothing about how space missions work and how the technology on those missions is selected, built, and tested.

I also took a look at his third, rather short video, claiming that the colorized full-frame Pluto image was faked because if you invert the colors and increase the levels, you see a blockiness around the edge of the disk. Again: Just because YOU don’t know anything about what’s going on doesn’t mean it’s a fraud.

This was a lossy JPG B&W image, with MUCH lower resolution color data overlaid on it, and then saved and exported again with lossy JPG compression. If he had BOTHERED TO READ THE CAPTION, he would know this.

July 22, 2015

#NewHorizons #PlutoFlyby – The Pseudoscience Flows #6: Data Download


Introduction

I know I’ve promised other parts to this series, but this one will be quick* and I want to get it out there because it feeds into a lot of varied and various conspiracies related to NASA’s New Horizons mission to the Pluto-Charon system, and I’ve even seen many misconceptions on normal science blogs / websites (not to be named): Where’s the data!?

Deep breath people: It’s coming. Slowly.

*I thought it would be quick, but it turned out to be nearly 2000 words. Oops…

The Slowness of Spacecraft Data Transfer

Every space mission – save for one very recent, experimental one – relays data via radio signal. In other words, light. The amount of power that the spacecraft can muster goes into figuring out the data rate it can sustain. Think of it a bit like this: If you have the Bat Signal, but you were using a flashlight, you’d be lucky if someone could just see the flashlight aimed up at the sky. There’s no way they could see details of a bat cut-out. But if you use a really really bright spotlight, you can see it farther, and you can even stick a detailed bat cutout over its front and you can make out that cutout.

Perhaps a bad analogy, but that’s kinda the idea here: If you have a very strong signal, then you can include a lot of detail really quickly. If you have a weak signal, then the data rate is slower. Oh– better analogy: bad wifi reception. You know you have low signal strength when it gets really slow.

Moving on, the New Horizons REX antenna does not have a huge amount of power. New Horizons launched with less plutonium for power than originally intended, and it needs power for running the spacecraft. It has so little power for the antenna that only the 70 meter dishes in NASA’s Deep Space Network (DSN) are big enough to receive the signal at Earth, which is a paltry 3 × 10⁻¹⁹ watts. (Compare that with a 100 W light bulb.) To me, first off, it’s amazing that we can even receive that faint of a signal.

But once you get over that amazement, the DSN also has to be able to detect changes in that tiny signal. That’s how we get data. Like blinking your flashlight in Morse code, or putting the Bat Signal stencil up. If we have very little signal strength, we can’t change our signal very quickly, or the DSN may not be able to read it. Change more slowly, then they will.

For planning purposes, we were able to send data at 1296 bits per second. I’m old enough (sigh…) to remember dial-up modems in the 1990s. My family’s first modem was the dreaded 14.4 kbps modem which was painfully slow at pulling up AOL’s e-mail. Or Hamster Dance. But even that was over 10 times faster than New Horizons’ data rate. And, let’s convert it to real things, bytes. There are 8 bits to a byte, so 1296 bits per second is only 162 bytes per second. I have a thumb drive attached to my computer that holds 64 GB, or 64 gigabytes. It would take about 4572 days, at the average New Horizons download rate, to fill that fairly modest thumb drive. That’s over 12 years.
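The download-time arithmetic, as a quick sanity-check sketch:

```python
# How long to fill a 64 GB thumb drive at New Horizons' downlink rate.
bits_per_second = 1296                   # average planning downlink rate
bytes_per_second = bits_per_second / 8   # 162 bytes/s

thumb_drive_bytes = 64e9                 # 64 GB
seconds_to_fill = thumb_drive_bytes / bytes_per_second
days_to_fill = seconds_to_fill / 86_400  # seconds per day
years_to_fill = days_to_fill / 365.25
print(f"{bytes_per_second:.0f} B/s -> {days_to_fill:.0f} days "
      f"(~{years_to_fill:.1f} years) to fill 64 GB")
```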

Keep in mind that the spacecraft is still taking data. Keep in mind that there are only three 70-meter DSN dishes at the correct latitudes to see the spacecraft, ever, from Earth. Keep in mind that there are other missions out there that need the DSN to communicate with Earth. Keep in mind that 1296 is an average planning bit rate, and while the Canberra and Goldstone dishes get more like 2000 bps, Madrid tends to get less due to the elevation of the spacecraft above the horizon.

So, from the get-go, just from considering the data rate (power requirements on the spacecraft, distance to the spacecraft, and timetable of receiving stations on Earth), one should be able to see that it will take a painfully long time to get the data from the spacecraft.

While we could keep up with the data rate and did a large download a month before encounter (which is why data weren’t taken in late May), there’s no way we could get all the data during encounter very soon after it, which is why the craft flew with two 8 GB storage drives, and it filled up 60 Gb during encounter (see what I did there, switching between bit and byte?).

There’s Other Data Besides Images!

And that’s any kind of data. There aren’t just images and “pretty pictures” that many of us want. There is one B&W camera on the craft, but there’s also a color camera, two spectrometers, a dust counter, two plasma instruments, the antenna itself took data, and there’s basic spacecraft housekeeping and telemetry that says things like, “Yes, I really did fire my thrusters at this time when you wanted me to!”

Basic Download Plan

I can discuss this because the basics have been made public. It’s just not “sexy” like pretty pictures so it’s not that easily findable.

Leading up to encounter, data were prioritized as though we were going to lose the spacecraft at any time, so the most important, “Tier 1” science data were downloaded first. And, critical optical navigation images.

After encounter, the same thing happened, where compression algorithms were used on the data on-board the spacecraft and that lossy-compressed data were sent back to Earth to fulfill as many Tier 1 science goals as possible. That’s how – and why – in the last week we’ve already revolutionized what we know about Pluto. Those first high-res (0.4 km/px) images of the surface were planned out based on Hubble Space Telescope maps of the surface and the spacecraft timing and trajectory to get images that cover different brightness and color patches. (Which takes care of another, minor conspiracy that I’ve seen that claims we “knew” where to point the cameras because the Secret Space Program had leaked us information about what would be interesting.)

But now that we’re more than a week from closest approach, thoughts are turning to what to do next. Originally, a “browse” data set of all the lossy data (only the imagers and spectrometers store lossy-compressed in addition to lossless) was going to be returned first, along with the lossless data from other instruments. That would at least let us understand the surface at a lossy JPG quality and let the plasma folks do their science.

But now people are discussing scrapping that and bringing down the lossless data instead, albeit many times slower because of the larger file sizes.

Planning, Fairness

But, believe it or not, planning of what’s downloaded when is made no more than a few weeks out (except for the closest approach weeks). Right now, we’re working on the late August / September load of commands and deciding what data to bring down in what order.

Each of the four science theme teams (geology geophysics & imaging (GGI), atmospheres, composition (COMP), and particles & plasma (P&P)) puts together a list of their top priorities based on what we’ve seen so far. The Pluto Encounter Planning (PEP) team then sits down and looks at how much they can bring down in what time and puts things in order. The sequencers then take that and try to make it happen in the test computers. Then we iterate. Then it gets reviewed. Extensively. Only then does it get uploaded to the spacecraft to execute.

But besides that priority list, it’s the Principal Investigator who decides how much data each science team gets. For example, while I’m on PEP (it’s what I was initially hired to do), I’ve been adopted by GGI. Wearing my GGI hat, I want images from the LORRI instrument. All the time, and only LORRI. I don’t care what the plasma instrument PEPSSI recorded. But by the same token, the P&P folks don’t care anything about images, they want to know what their instruments recorded as the craft passed through the Pluto system to see how the solar wind interacted with escaping particles from Pluto – or even if it did. (Which it did, as was released in a press conference last Friday.)

So Alan Stern has to make the decision of how to be “fair” to so many competing interests within the large – and broad – science team. So while COMP may want to have 5 DSN playback tracks in a row to bring back just one of their very large spectra data cubes, Alan has to make sure that GGI gets their images and P&P gets their data, too.

The Plan

The decision was made several months ago that after this initial batch of data – what we saw last week, what we see this week – that all of the “low speed” data will come down in August. That’s housekeeping & telemetry, that’s things like how many dark pixels are in any given LORRI image, it’s the two plasma instruments and data recorded by the antenna and dust counter, and that’s about it. After that, we get back to the imagers and spectrometers, per the balance discussed above.

And since it’s not sequenced, and it’s not public, I can’t tell you any more than that.

So we are, unfortunately, not going to see any new images for practically a month, beyond the two navigation images that should come down tomorrow and Friday.

Conspiracy!

Due to the nature of this blog, obviously this is going to fuel conspiracies: NASA’s hiding the data, NASA’s manipulating the data, NASA’s [whatevering] the data, etc.

It’s just not true.

I have known for years that these conspiracies about NASA somehow intercepting the data and manipulating it before even us naïve scientists can get our hands on it would be very difficult, but being on this mission has made me realize that it’s even more difficult to somehow support that conspiracy than I had thought.

Literally, as the data are received by the DSN – before they’re even completely downloaded – they’re on our processing servers and in the high-cadence processing pipeline. On Monday morning when we were supposed to get four new images, we were literally sitting in the GGI room hitting the refresh button and marveling over each new line of pixels that we were getting back in practically real-time. To use a religious analogy, it was every Christmas morning rolled into a one-hour marathon of hitting the refresh button.

And we were all there watching — over 20 of us. And other science team members kept coming in to look.

The idea of secretly having one or two people intercepting the data, “airbrushing” things in or out of it, and only then giving it from On High to the scientists just shows how out of touch from reality conspiracists are. (By the way, I use the term “airbrushing” here because that’s how many conspiracists still talk. Obviously, no one is physically airbrushing things anymore — and I doubt anyone younger than 30 even knows what a real airbrush is.)

To sustain the conspiracy, I can only see one of two choices: (1) Either all of us scientists are in on it, in which case it becomes ridiculously large and unsustainable and scientists suck at keeping secrets about exciting new things, or (2) somehow there’s super secret advanced tech that intercepts the spacecraft signal and at the speed of light “airbrushes” things out and retransmits it to the DSN to get into our processing pipeline. Because we know when stuff is supposed to appear on Earth. Because we write the sequence that does it.

Final Thoughts

Not that I expect this to convince any conspiracy theorist of their folly. The lack of image data for the next month and the lossy JPG data we have now all contribute to little anomalies that don’t immediately make sense, which the average conspiracist can easily spin into something they’re not.

July 20, 2015

#NewHorizons #PlutoFlyby — The Pseudoscience Flows, Part 4


An eagle-eyed Facebooker on the Facebook page for my podcast (thanks Warwick!) pointed this ‘un out to me on the Before It’s News website. Something like the equivalent of the Daily Mail but with more of a UFO bent.

Apparently there are huge cities on Pluto.

This was one of the pseudosciences that I knew going in was going to be prevalent, though I expected it to be more explicit first from Richard Hoagland and Mike Bara who are more vocal about their pareidolia and reading into image artifacts.

The entire crux of this guy’s arguments is that he sees a blockiness in the released images. He claims that he knows and can prove they are not JPEG artifacts for two reasons:

(1) He’s using a TIFF image and not JPEG, and
(2) the blockiness runs at a diagonal and not parallel to the image edges.

For the second reason, I have two words and two punctuation symbols for you: “Rotate & crop.” To add a few more words: Most released full-disk images have been rotated such that the north pole is “up” in the image. The spacecraft didn’t take them that way. We rotated them to be consistent. Therefore, the original blocky compression artifacts ran parallel to the image edges, but now they run diagonal because the image has been rotated! Pretty simple. Yet it has eluded this conspiracist.

Similarly eluding him is the simple fact that just because your current file format is one type does not mean the original file format was that type. To be more explicit, just because the NASA press release you got this image from happens to be a TIFF on the NASA website does not mean that the original image downlinked from the spacecraft was not lossy-compressed JPEG. Which it was. No image downlinked from the craft since July 12 has been lossless; they have all been lossy, at a 10:1 compression ratio, meaning they are very lossy. Just because I can take a JPEG and use any image software to re-save it as a TIFF does not mean that the TIFF will not contain those original JPEG artifacts.

The JPEG blocks are 8 pixels on a side, and many of the released images have been up-sized (I don’t know why, I argued against that, but I have no influence over NASA’s or APL’s graphics people).

He also assigns the wrong image credit, as “NASA/JPL-Caltech/MSSS.” It’s “NASA/APL/SwRI.” That’s not hard to get right. It’s called reading the caption for the image that you take from NASA.

I’d say that this is one of the sillier conspiracies I’ve heard so far, but it’s really hard to choose. Especially with what’ll be parts 5 and 6.

Perhaps the best line from the video: “No one’s actually accused me of, uh, pixelation, yet, but I’m sure someone will. Uh, some— some spook, probably, some guy from MI6 will come on here … heh! Who knows? Or someone from NASA will try and debunk it. But we’ll see! We’ll see what they have to say.”

Ah…. if only I worked for NASA, or MI6. Maybe I’d drive a nicer car. But it doesn’t take someone from British Intelligence to tell the guy that IT’S JUST BEEN ROTATED!!!

P.S.: Also within the video are other various claims. Like a large hexagonal crater (no, that’s his mind trying to break a circle into line segments), and that NASA purposely brightened the image so that it washes out detail near the pole. No, that’s called the sub-solar point, which is where the sun is directly shining, so you can’t see any topography, only brightness differences of the actual material on the surface. It washes everything out.

June 23, 2015

Podcast Episode 134: Big Bang Denial


The Big Bang theory:
Tot’ly explains the cosmos?
Or, is it a dud?

This episode follows a bit from the Black Hole Denial episode, but this time with another aspect of cosmology: The Big Bang. I was able to use a few old blog posts, too, that I wrote practically 7 years ago.

As mentioned, I’m now on a weird – though backdating – release schedule due to the piling on of work as the New Horizons craft nears Pluto. But I’m still trying to do 2 episodes/month, at least.

June 12, 2015

Are We on the Verge of Discovering an Earth-Like Exoplanet?


I announced a while ago that I was on episode 347 of the Canadian podcast “The Reality Check,” where I talked about exoplanets and some hype — deserved or otherwise — about almost but never quite yet discovering Earth-sized exoplanets.

While they post a lot of links and other things on their website, they don’t post transcripts of what we actually talk about. Since I spent a solid many minutes writing and editing my segment’s text, I thought I’d post it here:

There’s lots of ways to talk about exoplanets, but I’m going to take the traditional approach and start with a very broad but brief overview of how we have found the few-thousand known extra-solar planets, or “exoplanets” for short. There are five main ways.

The most obvious is the most difficult: Direct Imaging. This is where you take your telescope, look at a star, and see the planet around it. This is almost impossible with current technology, and we have fewer than 20 exoplanets found this way. It’s so hard because the star is so bright relative to the planet and because most star systems are so far away. And obviously, if the planet is larger and farther from the star, it’ll be easier to see.

The second main method has also only produced about 20 planets so far: Gravitational Microlensing. Einstein showed that large masses bend light, and we can see this in space when an object that’s far away passes behind a massive object that’s a lot closer. The light from the background object gets distorted and magnified, much like a lens … a lens caused by gravity. If the foreground object happens to be a star, and that star has a planet, then that planet can make a detectable contribution to the lensing, not only in amount, but in the exact shape of the lensing effect.

The earliest actual successful method was a special form of what’s called the Timing Method, specifically in this case, pulsar timing. Pulsars are incredibly dense stars called neutron stars, and we get a blast of radio waves every time one of its poles sweeps in the direction of Earth. These are so regular that any tiny perturbation can be detected and attributed to something weird, like a tiny planet tugging on it and so changing that regular spinning signal.

This is the same concept as the highly successful method that found the most exoplanets until a few years ago: Radial Velocity. The idea is that we normally think of a planet, like Earth, orbiting the sun. But it doesn’t really. It *and* the sun orbit a mutual gravitational point called the “barycenter” that is between the two. For Earth and the sun, that point is VERY close to the sun’s center, but it’s not quite in the center. That means that over the course of a year, as Earth goes around that point, the sun will, too (on the opposite side of that point). So, it will wobble very very slightly as it orbits the barycenter.
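To see just how slight that wobble is, here is a quick two-body sketch of the Sun-Earth barycenter offset (circular orbit, other planets ignored, which is a simplification):

```python
# The barycenter sits a distance r_star = a * m_planet / (m_star + m_planet)
# from the star's center (two bodies, circular orbit).
sun_mass_kg = 1.989e30
earth_mass_kg = 5.972e24
a_km = 1.496e8                  # 1 AU, the Earth-sun separation, in km

r_sun_km = a_km * earth_mass_kg / (sun_mass_kg + earth_mass_kg)
sun_radius_km = 696_000
print(f"The sun circles a point ~{r_sun_km:.0f} km from its center -- "
      f"only ~{100 * r_sun_km / sun_radius_km:.3f}% of a solar radius, "
      f"deep inside the sun itself")
```

Which is why the barycenter for the Earth-sun pair is so very close to the sun’s center: a few hundred kilometers out of a 696,000 km radius.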

We can’t possibly observe this tiny tiny motion of other stars. BUT, we can use the light that star emits to do it by using the Doppler shift. That’s the phenomenon where if something is moving towards you, the waves it emits become compressed, and if it’s moving away from you, the waves get stretched out. The common example is a train whistle going from high to low pitch, but in astronomy, this is where the light is shifted to blue and then to red.

So, if the planet around another star is at its closest point to us, the star emits light and we see it all normal. As the planet starts to move away from us, the star starts to move very slightly toward Earth, and so its light will be very slightly blue-shifted. Then, the planet gets to its farthest point, and starts to move towards Earth, which means the star starts to move away, and we see its light red-shifted. This is an incredibly tiny effect, and the smaller the planet, the smaller the shift in the light. Or the pulsar timing change.

There was a lot of progress throughout the late 1990s and early 2000s in very high-resolution spectroscopy in order to get better and better at observing smaller and smaller planets. The easiest ones to observe are the largest because they make the biggest shift in the star’s light, and ones that are closest to their star are easier because you don’t have to observe as long. To observe a planet that has a 10-day orbit, you just have to observe that star for about a month from Earth to get decent statistics.

That’s why all the exoplanets discovered early on were what are called “Hot Jupiters,” since they were very large and very close to their stars.

The final method is the Transit Method. If a fly passes in front of a bright light, you can see a slight decrease in the light. If a bird passes in front of a light, you’ll see a larger decrease in the light. Same thing here: A planet passes in front of the star and temporarily blocks part of the light from the star that we would see at Earth. The big issue with this method is that you have to have the fortuitous geometry alignment where the planet’s orbit is just right so that it passes in front of its star as seen from Earth. The first one wasn’t detected until 1999, but a decade later, the dedicated spacecraft COROT and then Kepler were launched to look for these, monitoring the same fields of the sky, tens of thousands of stars, moment after moment, looking for those brief transits. In 2014, Kepler released over 800 planets discovered with this method, more than doubling the total number known, and that was on top of its other releases and, to-date, it’s found over 1000.

The transit method, despite the issue of geometry, is probably the best initial method. If you have the planet going in front of its star, then you know its alignment and you can follow up with the radial velocity method and get the mass. Otherwise, the radial velocity method can only give you a minimum mass because you don’t know how the system is oriented; you only know that radial component of velocity, hence its name.

With the transit method, you can see how much light is blocked by the planet. Knowing the star’s type, you can get a pretty good estimate for the star’s size, and knowing how much light is blocked means you can get the cross-sectional area of the planet and hence its diameter. For example, Jupiter would block 1% of the sun’s light, and since area is the square of length, that means Jupiter is about 10% the sun’s diameter. Since the sun is a type G V star, we have a good model for its radius, though of course we know its radius very well because we’re in orbit of it. But that means not only can we get mass, but we can get size and density.
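That size calculation can be sketched directly: the transit depth is the ratio of the two cross-sectional areas, so the radius ratio is its square root.

```python
import math

def radius_ratio_from_depth(depth):
    """Planet/star radius ratio from the fraction of starlight blocked."""
    return math.sqrt(depth)

# Jupiter blocks ~1% of the sun's light -> ~10% of the sun's diameter:
jupiter_ratio = radius_ratio_from_depth(0.01)

# Going the other way for an Earth-sized planet around a sun-like star:
sun_radius_km, earth_radius_km = 696_000, 6_371
earth_depth = (earth_radius_km / sun_radius_km) ** 2
print(f"Jupiter: {jupiter_ratio:.2f} of the sun's diameter")
print(f"Earth-size transit depth: {earth_depth:.2e} (under 0.01%)")
```

The second print shows why Earth-sized transits are so hard: the dip is under a hundredth of a percent, which is why space-based photometry like Kepler’s was needed.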

The transit method also lets us see whether there's a large atmosphere. If the light from the star instantly blinks down to the lower level when the planet passes in front of it, then any atmosphere is really thin or nonexistent. If there's a gradual decrease, then the atmosphere is extended. If it's extended, we can follow up with something like the Hubble Space Telescope and actually figure out what that atmosphere is made of, by looking at what colors of light from the star are absorbed as they pass through the planet's atmosphere.

And as with the radial velocity and timing methods, we know how long it takes to go around its parent star, and along with the star’s mass from what kind of star it is, we can get the distance of the planet from the star.
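That last step, period plus stellar mass giving distance, is just Kepler's third law. A minimal sketch, using standard constants and checking against the one case we know best:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, meters

def semimajor_axis(period_s, star_mass_kg):
    """Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2).
    Given the orbital period and the star's mass (estimated from its
    spectral type), return the planet's orbital distance in meters."""
    return (G * star_mass_kg * period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)

# Sanity check: a 365.25-day period around a Sun-mass star should give ~1 AU
P = 365.25 * 24 * 3600
a = semimajor_axis(P, M_SUN)
print(a / AU)   # approximately 1.0
```

The same one-liner applied to a hot Jupiter with a 3-day period around a Sun-mass star gives about 0.04 AU, which is why those planets are so easy to find.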

Okay, so much for a brief overview. Even at this length, I've left out a lot.

Moving on, it should be somewhat apparent that the bigger the planet, and the closer it is to its star, the easier it is to detect with pretty much ANY of these techniques, except direct imaging and microlensing, where you want a big planet that's far from its star. Big means a big effect. A fast orbit means you don't have to observe for very long to show that it's a regular, repeating signal best explained by a planet.

So, the question is then: can we detect an Earth-sized planet, and can we detect an Earth-like orbit? These are really two different questions, and they depend on the technique you're using. If we focus on the two main methods – radial velocity and transit – then the unsatisfying answer to the second is that we do finally have good enough technology; it's just a matter of finding such a planet. With the 2014 Kepler data release, there were over 100 exoplanets less than 1.25 times Earth's size. With the 2015 release, there are a total of 5 planets smaller than Earth or Venus, but they orbit their 11.2-billion-year-old star in just 3.6 to 9.7 days.

Even with observations spanning a year or two, for something as small as Earth, the signal relative to the noise is still pretty small, and you want a big signal relative to the noise. It's best to build up multiple years' worth of data to average out the noise before you can really say you have an Earth-like planet. For something like Jupiter, which orbits our sun in about 12 years, we'd need to observe at least two transits. That means we're only now approaching the point where some ground-based surveys have a long enough baseline of data, and that also assumes we catch the planet during the few hours or days when it's in front of its star, versus the years and years when it isn't, that we do this repeatedly, and that we don't chalk it up to sunspots.

This is why we really need long-term, dedicated surveys that just stare at the same place in space, constantly, measuring the light output of these stars to see if we can detect any sort of repeated dimming from a likely planet.

But, even if we find an Earth-like planet in terms of mass and diameter and location in its solar system, that’s not enough to say it’s Earth-like in terms of atmosphere and surface gravity and overall long-term habitability. It’s just a first step. A first step we have yet to truly reach, but one that is reasonably within our grasp at this point.

But it's from the existing planets we know of that we get some of the hype that hits the headlines every few months, like "astronomers estimate billions of Earth-like planets exist in our galaxy alone." I'm not going to say that's fantasy, but it's loosely informed speculation, extrapolating from the few thousand examples we now have in a very, VERY young field of astronomy.

Or, we’ll get articles where the first sentence says, “Astronomers have discovered two new alien worlds a bit larger than Earth circling a nearby star.” It’s in the next paragraph that we learn that “a bit larger than Earth” means 6.4 and 7.9 times our mass, and they orbit their star in just a few days.

So as always, this is a case where, when we see headlines, we need to be skeptical, NOT susceptible to the hype, and read deeper. But that said, it is entirely possible that any day now we will find an exoplanet that is at least like Earth in mass, size, and distance from its host star.

May 26, 2015

Podcast Episode 132 – In Search Of Planet X (Live from Denver ComicCon)


In Search: Planet X.
An overview of common
Ideas about it.

This episode is another recording of one of my live presentations, modeled a little after Leonard Nimoy’s “In Search Of” television series. It was presented in front of a live audience at the Denver ComicCon on May 24, 2015, to about 75-100 people. I was bordered on two sides by other sessions that had more people and a lot of laughter, so I played to that a little bit when there were opportune moments. I also suffered a minor A/V issue in the middle but recovered, so you’ll hear some fumbling there.

Unfortunately, there is also some popping that comes in about 10 minutes into the recording. I applied all the filters I know of in my Audacity toolkit, and the pops are less of an issue than they were, but they are definitely still present.

I also need to announce that it is that time of year when work is going to get crazy, so episodes may come out a little less regularly, especially during July. I’m still going to keep to the two per month schedule, but they may not be out on exactly the first and sixteenth of the month.

And with that in mind, I have to head to the airport in 45 minutes for more work, after just being back home for 3.5 days. So …

