On Wednesday, January 3rd, two security flaws were announced that affect Intel, AMD and ARM microchips, thus potentially affecting PCs, telephones and a great many appliances alike. As a Yahoo News article reports:
“Phones, PCs, everything are going to have some impact, but it’ll vary from product to product,” Intel CEO Brian Krzanich said in an interview with CNBC Wednesday afternoon.
This is of course of direct interest to everyone, and of more direct interest still to many readers of UD, as many of us work in information technology. As well, it is illustrative of features of information and probability that will be of significant interest to design thinkers (and critics), as the case shows how configurations imply information and how probabilities can also carry information.
Operating system vendors and others have been racing to produce security patches ahead of the public announcement. So far, the patches reportedly can reduce performance by up to a half in the worst case. In the longer term, hardware architectures will likely have to be redesigned.
Ars Technica reports:
Windows, Linux, and macOS have all received security patches that significantly alter how the operating systems handle virtual memory in order to protect against a hitherto undisclosed flaw. This is more than a little notable; it’s been clear that Microsoft and the Linux kernel developers have been informed of some non-public security issue and have been rushing to fix it. But nobody knew quite what the problem was, leading to lots of speculation and experimentation based on pre-releases of the patches.
Now we know what the flaw is. And it’s not great news, because there are in fact two related families of flaws with similar impact, and only one of them has any easy fix.
Both of the flaws are based on speculative execution of instructions, a technique used to speed up effective processing in modern computers. In effect, speculation leaves behind traces — notably in the processor’s cache — that can be read out with clever timing measurements, potentially exposing core device state. This in effect hands over the combination to the processor’s “bank vault.”
Of these, Meltdown is so far — so far . . . ! — specific to Intel chips, as that company’s designs use especially aggressive speculative execution, and as Ars Technica further reports, it “uses speculative execution to leak kernel data to regular user programs.”
Intel chips allow user programs to speculatively use kernel data and the access check (to see if the kernel memory is accessible to a user program) happens some time after the instruction starts executing. The speculative execution is properly blocked, but the impact that speculation has on the processor’s cache can be measured. With careful timing, this can be used to infer the values stored in kernel memory.
Spectre is a more generic attack, and proof-of-concept investigations have shown that it affects all three main processor architectures. It probabilistically infers kernel states by using “speculation around, for example, array bounds checks and branch instructions” and so effects “information leakage due to speculative execution.” The further bad news is that:
Spectre doesn’t offer any straightforward solution. Speculation is essential to high performance processors, and while there may be limited ways to block certain kinds of speculative execution, general techniques that will defend against any information leakage due to speculative execution aren’t known.
Sensitive pieces of code could be amended to include “serializing instructions”—instructions that force the processor to wait for all outstanding memory reads and writes to finish (and hence prevent any speculation based on those reads and writes)—that prevent most kinds of speculation from occurring. ARM has introduced just such an instruction in response to Spectre, and x86 processors from Intel and AMD already have several. But these instructions would have to be very carefully placed, with no easy way of identifying the correct placement.
Of course, the onward issue is: what else is out there in the fog, whether known and being studied, or already being exploited? We would be well advised to be extremely prudent. For example, if there is a backdoor into a system, it may be possible to capture encryption keys, so that encrypting data may not be enough protection. But of course, the effort required to target a system is going to be an issue, and so the question we each need to answer is: are we persons of interest? END