A Letter to the Economist
The April 8, 2017 edition of the Economist had two articles whose juxtaposition amused me. The first, "Computer security is broken from top to bottom," was an excellent story, and the second, "How hospitals could be rebuilt, better than before," described the virtues of increased computer use in hospitals. I sent the Economist the following letter:
F. Scott Fitzgerald wrote that a first-rate intelligence is the ability to hold two opposing ideas in the mind at the same time and still retain the ability to function. I must commend the Economist for publishing "How hospitals could be rebuilt, better than before" and "Computer security is broken from top to bottom" in the same newspaper. More computers in healthcare; what could go wrong? (see the second article)
More seriously, several crucial segments of the world economy (finance, communication, and transportation) can no longer function without computers. In a few years, other important industries, most prominently automobiles and healthcare, will also move past the point where it is possible to go back to a time when computers were tools under human control, rather than autonomous entities whose software flaws and attacks have society-wide impact.
Perhaps it is time to address the well-known and fundamental flaws in software before we give hostile governments and criminals further leverage?
Your article touches only briefly on the only plausible solution to this problem: a top-to-bottom rewrite of software, carried out in a disciplined manner and verified to the greatest extent possible. This practice is called engineering, and it is sorely lacking in current software development.
They did not publish it (they printed a mealy-mouthed comment on the security article instead), but I did have an interesting exchange with the Economist's science editor, who lamented the limited technical depth of their readers, a remarkable complaint given the sophistication and astuteness of the original article, which he wrote.
The one aspect I disagreed with was the editor's belief, expressed in the article, that making software subject to product liability laws would be sufficient to solve the computer security problem. Part of my exchange with him made this point:
So, I don’t think that the disclaimers of suitability and liability are going to disappear very quickly. Companies will rightly ask: which is worse, no software or bad software? There is clearly an opportunity for more reliable software to supplant the existing ecosystem. Since we have no real way of measuring software reliability, this software is going to need to be fundamentally different, and that difference is verification. There are some good examples: an OS kernel from the University of New South Wales, a C compiler from INRIA, and systems software components from Microsoft Research have all been verified and are usable replacements for parts of today’s software infrastructure.
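What does "verified" mean in practice? seL4 was proved correct in Isabelle/HOL and CompCert in Coq; as a minimal sketch of the idea (my illustration, not part of the letter or those projects), here is a toy Lean 4 example in which the proof assistant refuses to accept the file unless the stated property is proved for every possible input.

```lean
-- Illustrative only: seL4 was verified in Isabelle/HOL and CompCert in Coq;
-- this toy Lean 4 snippet merely shows the flavor of a machine-checked guarantee.

-- A tiny function: clamp a value so it never exceeds a limit.
def clampToLimit (limit x : Nat) : Nat :=
  if x ≤ limit then x else limit

-- The guarantee: for *every* input, the result stays within the limit.
-- Lean will not accept this theorem unless the proof below is complete and correct.
theorem clampToLimit_le (limit x : Nat) : clampToLimit limit x ≤ limit := by
  unfold clampToLimit
  split
  · assumption              -- case x ≤ limit: the result is x itself
  · exact Nat.le_refl limit -- case x > limit: the result is limit
```

Testing exercises a handful of inputs; a proof like this covers all of them, which is the fundamental difference the letter refers to, applied at vastly larger scale in systems like seL4 and CompCert.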