Rebooting Computer Security
The NY Times asked the wrong question about the Obama
administration’s response to Russian hacking of the November US election ("U.S. Reacting at Analog Pace to a Rising Digital Risk, Hacking Report Shows"). The
question is not why it took 16 months to develop a response, but what the US could have done to prevent the attack. The disturbing answer is nothing.
Computers are fundamentally insecure, and this sad situation
is not going to change quickly. As someone who has spent his entire career in computer science research, I am pained to admit that Donald Trump was right when he told reporters, “You know, if you have something really important, write it out and have it delivered by courier, the old-fashioned way. Because I'll tell you what: No computer is safe.”
Computers’ original sin is that they run software that is
written by humans. People make mistakes at a predictable rate – roughly 10-20
defects per thousand lines of code. Testing can find and eliminate many of
these bugs, leaving a residual of 0.5 to 1 defects per thousand lines. That may not seem high, but your computer runs hundreds of millions of lines of software, so it may harbor hundreds of thousands of bugs.
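To make that arithmetic concrete, here is a back-of-envelope sketch in Python. The defect rates are the rough figures quoted above; the 200-million-line code base is an assumed, illustrative size rather than a measurement of any particular machine.

```python
# Back-of-envelope estimate of latent bugs on a typical computer.
# Rates are the rough figures from the text; the code-base size is assumed.
lines_of_code = 200_000_000              # illustrative size of a modern software stack
residual_defects_per_kloc = (0.5, 1.0)   # defects surviving testing, per 1,000 lines

low = lines_of_code / 1000 * residual_defects_per_kloc[0]
high = lines_of_code / 1000 * residual_defects_per_kloc[1]
print(f"roughly {low:,.0f} to {high:,.0f} latent bugs")   # roughly 100,000 to 200,000
```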
In the past, these flaws were primarily an annoyance that
occasionally caused crashes and produced incorrect answers. Online reporting
tools (the popup that asks if you want to report a problem) made it possible to
identify flaws quickly and to fix the most common ones, so that fewer users
encounter bugs these days.
But security is different. An attacker is a malicious adversary who intentionally seeks out little-known flaws that can compromise a
computer. The most valuable of these flaws are “zero-day” bugs, which are
unknown to anyone but the attacker. The US-Israel Stuxnet attack on the Iranian
nuclear program used four zero-day bugs. These are a valuable commodity, sold
for hundreds of thousands of dollars and treasured by intelligence agencies.
The result is a very asymmetric playing field. The attacker
need only find (or buy) a small number of flaws to enable his attack, while the
defender is condemned to fight the last war by patching previously discovered flaws and
deploying software that looks for suspicious patterns of activity that characterized
an earlier attack. Both remedies are valuable in protecting against the large
number of “me-too” hackers, but neither has proven effective against governments or elite hackers for hire.
In the end, security is a software problem. We have built
systems that are too complex for anyone to understand and consistently favored
features over security. Why can an email attachment run malicious code that changes the software on a computer by installing malware? The answer
is that software developers have long built complete programming languages into
their data representations (the PDF and Word documents in the attachments), to
allow for future, unforeseen uses of their products. Once a programming
language becomes sufficiently expressive, it reaches a stage called
Turing-completeness, where it is as powerful as any other programming language
and even simple questions about programs are undecidable. Computer scientists have not formally considered questions like “Should I click on this attachment?” But my answer, in most cases, would certainly be no.
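The hazard is easier to see in miniature. The sketch below is a toy, hypothetical document format written in Python, not PDF or Word: once the format permits a field to contain executable code, viewing a document means running a program supplied by whoever wrote it.

```python
# A toy "document format" whose fields may contain executable expressions.
# A deliberately simplified, hypothetical illustration, not any real file format.

def render(document: dict) -> str:
    parts = []
    for field in document["fields"]:
        if field["type"] == "text":
            parts.append(field["value"])                # inert data
        elif field["type"] == "script":
            parts.append(str(eval(field["value"])))     # data that is also code
    return " ".join(parts)

# A benign-looking attachment...
doc = {"fields": [{"type": "text", "value": "Quarterly report:"},
                  {"type": "script", "value": "2 + 2"}]}
print(render(doc))   # Quarterly report: 4

# ...but nothing in the format stops a "script" field from holding something
# like __import__('os').system(...), which does whatever the viewer's process
# is permitted to do.
```

Real formats are vastly more elaborate, but the structural problem is the same: in general, the viewer cannot decide in advance what an embedded program will do.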
Software need not be this flawed. Carefully written software
controls airplanes, nuclear power plants, and other safety-critical systems.
While not perfect, this software performs limited roles with a high degree of
reliability and certainty. The process of writing this software is radically
different from the casual culture of software development in most of the
industry. Safety-critical software is slow to develop, limited in scope,
exhaustively tested, and rarely changed, quite different from Facebook’s “move
fast and break things” culture, which pervades the industry and is at the root
of the security problem.
The tremendous difficulty of writing correct, bug-free software and the ease of updating it have led to a development culture in
which defects are expected and tolerated. Commercial and competitive pressures cause
companies to release software with many known (and even more unknown) bugs,
with the expectation that these defects can be corrected in subsequent
releases. Few are, as limited development resources are typically used to
develop new features, rather than fix “old” bugs.
For decades, government and commercial research has
investigated new programming languages and techniques for writing and verifying
software. The products of this research have moved slowly into use. For
example, smartphone applications are typically written in modern languages that
preclude some common attacks. And, more impressively, the security processor in Apple’s iPhone runs a formally verified system. But these are the rare
bits of good news from an industry that is resistant to deep change and
continues to produce new bugs at a tremendous rate.
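For a rough sense of what “precluding some common attacks” means, the hypothetical snippet below shows the kind of memory error that safe languages reject by construction. In an unsafe language such as C, the same out-of-bounds write silently corrupts neighboring memory, which is the raw material of many classic exploits.

```python
# Minimal sketch of a memory-safety check: an out-of-bounds write is caught
# and reported rather than silently overwriting adjacent memory.
buf = [0] * 8
try:
    buf[12] = 0x41          # write past the end of the buffer
except IndexError as err:
    print("rejected at runtime:", err)
```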
Equally important, software developers need to be trained as
engineers, with the concomitant attitudes of careful evaluation of possible
failures and professional standards of responsibility.
Neither condition is likely to change on its own. The fast-and-loose culture of the high-tech industry is too deeply ingrained, and too enshrined in the many examples of companies that shipped lousy software and went on to great success (you know who they are...), for an individual or a company to build and market a product on the decidedly less sexy angle of having fewer bugs.
However, changing the underlying incentives might accelerate
improvement. First and foremost is to eliminate what software developers call
the “root cause.” All of the software licenses that you never read contain
clauses that disclaim a product’s suitability for any purpose and state that
its manufacturer offers no warranty and has no liability for failures. An
absence of legal responsibility never made sense, but is even less desirable as
software becomes the common thread connecting people and increasingly the
physical world.
Second is to emulate the aviation industry’s practice of
carefully and openly studying failures, which has steadily made flying far
safer by identifying and publicizing the causes of crashes. A few open-source software disasters, such as the Heartbleed bug, led to open discussion, but most bugs are seen by only a small number of developers whose incentive is
to find a quick fix and move on to the next problem. As with aviation,
government sponsorship is necessary to balance liability against open inquiry.
More reliable, robust, and secure software will not appear
overnight. The change in software development practice will take a long time, and the amount of code that needs to be rewritten is huge. However, as
computers become even more embedded in the fabric of our lives and society, the
alternative to facing up to this problem is increasingly unthinkable.