The Event Horizon Telescope’s first image of a black hole showed a distinctive ring feature, but a reanalysis of their data has raised concerns over whether that ring of light is real. (New Scientist, May 19, 2022)
Our physics color commentator Rob Sheldon comments:
This analysis gets seriously into the weeds of data analysis and theoretical models. Before I start, let me make an analogy to “mRNA vaccines.” The innovations in the vaccine were numerous, and many had never been used before, so that in many ways it is not a traditional vaccine at all.
However, by naming it that way, most people thought it was something traditional and familiar, which gave them an unfounded trust in the shot. In exactly the same way, by calling this an “image” of the M87 core, taken by a “telescope,” most people associate it with “Astronomy Picture of the Day,” which gives them an unfounded trust in the picture. It is nothing of the sort.
First off, radio telescope arrays do not operate like visible-light telescopes. Rather, they sample the sky in spatial frequencies, something like a slit spectrometer swept over the image, and the image must then be reconstructed or “inverted” from the data. Inversion is not a deterministic process; it can give very different answers depending on the inversion algorithm used and the assumptions made. This has been known for 60 years.
For example, CAT scanners use the “Radon transform” to reconstruct an image from the many line-of-sight intensities through the body (MRI uses a related Fourier inversion). Computers make this look effortless, but a lot of assumptions go into the inversion. The more you know about the brain beforehand, the better you can “steer” the computer in the right direction to get the image back. But if a random sack of items were placed in the scanner, it is doubtful that the algorithms would “converge” on a consistent image.
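The non-uniqueness described above can be sketched in a few lines of NumPy. This is a toy underdetermined system, not real scanner or telescope data; the matrix, image, and numbers are all invented for illustration:

```python
# Toy illustration (not EHT or CT code): an underdetermined imaging problem.
# We "observe" only 4 measurements of an 8-pixel image, so many different
# images fit the data exactly; which one you recover depends on the prior.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pixels = 4, 8
A = rng.normal(size=(n_meas, n_pixels))     # stand-in measurement matrix
true_image = np.array([0, 0, 3.0, 0, 0, 1.0, 0, 0])
y = A @ true_image                          # the data we actually get

# Reconstruction 1: minimum-norm least squares (one possible "prior")
x_min_norm = np.linalg.pinv(A) @ y

# Reconstruction 2: add any null-space component; it fits the data equally well
_, _, Vt = np.linalg.svd(A)
null_vec = Vt[-1]                           # a direction the data cannot see
x_alt = x_min_norm + 5.0 * null_vec

for x in (x_min_norm, x_alt):
    print(np.round(x, 2), "residual:", np.linalg.norm(A @ x - y))
```

Both reconstructions reproduce the measurements essentially perfectly, yet they are very different images. The data alone cannot choose between them; the assumptions do.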
Let me try another analogy. Suppose you took a picture of a person and sent it through a “SnapChat” filter to make it look old, then fed it into another to make it look young, another to make it bald, another a funhouse mirror, and finally a cat filter. Could you tell who it was originally? Now your friend sees that and says, “I know what to do. I’ll send a picture of everybody on your contact list through the same filters, and the right one will match!” But what if the original person is not in the contact list at all? Will the best match be right? It depends on whether the library of pictures was accurate and whether the person closely matched one of them. That’s what the inversion problem is like. The more you know about the person, the better your chances of figuring it out, but there is no deterministic way to invert the filters.
Why is this a problem? Because we don’t know what the core of M87 is supposed to look like.
This is the problem with the “Event Horizon Telescope,” which is actually a consortium of radio telescopes around the world that combine their data to make a “virtual” telescope some 7,000 miles wide. The wider it is, the smaller the details it can resolve, which is why they named it “Event Horizon.” The actual event horizon is much, much smaller than their resolution, but they are gambling on sophisticated computer models to regain the lost detail. They took data on two objects: the core of the M87 galaxy, and the core of the Milky Way galaxy known as Sagittarius A* (or SgA*). The M87 picture was easier to analyze, and they released that “image” several years ago. It is this data analysis that the Japanese team is contesting.
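For a rough sense of scale, the resolving power of an interferometer goes as wavelength over baseline, θ ≈ λ/D. A back-of-envelope calculation, with the ~1.3 mm observing wavelength and an Earth-sized baseline assumed here purely for illustration:

```python
# Back-of-envelope diffraction limit for a ~1.3 mm interferometer with an
# Earth-sized baseline (assumed round numbers, not the EHT pipeline's).
import math

wavelength_m = 1.3e-3                 # millimeter-wave observing band
baseline_m = 1.2e7                    # roughly Earth's diameter
theta_rad = wavelength_m / baseline_m # diffraction limit, theta ~ lambda / D
theta_uas = theta_rad * (180 / math.pi) * 3600 * 1e6  # to microarcseconds
print(f"resolution ~ {theta_uas:.0f} microarcseconds")
```

Tens of microarcseconds is a remarkable resolution, but, as noted above, the horizon itself is smaller still; the fine structure in the published pictures comes from the model-dependent reconstruction, not from raw resolving power.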
Second problem: there is gas and dust between us and the M87 core, and it is circulating, so the image is constantly changing. In the case of M87 the “circulation” time is about a week. In the case of SgA*, the circulation time is a few minutes. That’s why they took three more years before releasing the SgA* image, which looks amazingly like M87’s, so one suspects they used M87 as the assumption of what SgA* should look like. Oh, and SgA* is imaged edge-on through the dust of the whole Milky Way, whereas M87 was seen head-on, perpendicular to the plane. I wouldn’t trust the SgA* image to be even roughly correct.
Third problem: if we assume “the object making the photons fits in this size box,” we discover a “ring” roughly the diameter of the box we chose. But our resolution can only distinguish crates much larger than the box we want to use. And if we make the box the actual size of our resolution (essentially an open box), we don’t get a ring. So now our image depends on how big we think the object should be theoretically. This is circular logic, and even running thousands of computer models and noise models does not fix this circular problem (despite what modellers tell you).
Fourth problem: there are some consistency checks we can perform. Does the small object fit nicely into the large picture we have from lower-resolution telescopes? When we subtract the reconstructed image from the data, does the remainder (the residual) look small and evenly distributed, the way random noise should? The Japanese team argues that, unlike the “ring,” their reconstruction fits nicely into the bigger picture. Likewise, their reconstruction produces smaller and more even residuals than the “ring” image does.
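The residual check described above can be illustrated with synthetic data: generate measurements from a known image plus a little noise, then see how a good and a biased reconstruction differ in the size of what they leave behind. All numbers here are invented for the sketch:

```python
# Sketch of a residual-based consistency check (toy numbers, not either
# team's actual pipeline): a good reconstruction leaves small, noise-like
# residuals; a biased one leaves large ones.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))                    # toy measurement operator
x_true = rng.normal(size=20)
data = A @ x_true + 0.01 * rng.normal(size=40)   # measurements with noise

x_good = np.linalg.lstsq(A, data, rcond=None)[0] # least-squares fit to data
x_bad = x_good + 0.5                             # a uniformly biased image

for name, x in [("good", x_good), ("bad", x_bad)]:
    r = data - A @ x                             # the residual
    print(f"{name}: residual rms = {np.std(r):.3f}")
```

The “good” reconstruction’s residual rms sits near the noise level, while the biased one’s is orders of magnitude larger: exactly the kind of diagnostic the Japanese team says favors their reconstruction over the “ring.”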
In fact, the only thing going for the original “ring” picture was that it matched expectations. Which is another way of saying it was a circular-logic reconstruction. My claim is that this is too often true of numerous “scientific discoveries” where we use computer models to help perform the reconstruction. It is true of “gravitational waves,” of “global climate models,” of “evolutionary trees,” of “tree-ring dendrochronology.” We have been deceived by the ease with which computer models can simulate real data, and as my retired nuclear engineer friend would say, “How do we know when our computer model is done? When we get the answer we want.”