What laws should be created to improve computer security?
David A. Wheeler
2018-10-16 (originally 2016-10-03)
In 2016 the website KrebsonSecurity
was taken down by a large
distributed denial-of-service (DDoS) attack.
Soon afterwards the
source code for the attack (Mirai) was released.
The malware spreads to vulnerable devices by continuously scanning the
Internet for Internet of Things (IoT) systems protected
by factory default or hard-coded usernames and passwords.
The attack was so bad that
Akamai gave up protecting the site.
More recently, on 2016-10-21 many large sites became inaccessible
due to a massive DDoS attack
(see, e.g., "Hackers Used New Weapons to Disrupt Major Websites Across U.S." by Nicole Perlroth, Oct. 21, 2016, NY Times).
The "Internet of Things" is really the
"Internet of painfully insecure things".
Trying to fix one system simply won't cut it.
We need to find broad solutions to the widespread problem of
insecure devices.
Insecure devices are essentially electronic pollution.
And while DDoS is really bad, there are other security-related problems;
we need to also address them, or attackers will simply switch their approach.
I think we could make some targeted laws and/or regulations
to help counter the problem.
These need to be targeted at countering widespread problems, without
interfering with experimentation, without hindering free expression or
the development of open source software, and so on.
This is fundamentally an externalities problem (the buyers and sellers are not
actually bearing the full cost of the exchange), and in these cases
mechanisms like law and regulation are often used.
It's easy to create bad laws and regulations - but
I believe it is possible to create good laws and regulations
that will help.
Here are a few ideas.
First, a few that aren't targeted at device/website makers:
- Network Ingress Filtering.
Require all Internet Service Providers (ISPs) and any
devices they configure to implement
network ingress filtering as defined in IETF BCP38.
This doesn't prevent attacks, but it counters spoofed source addresses, making
it much easier to track down problems so they can be fixed.
This is noted in
Krebs' "The Democratization of Censorship".
This is certainly not new.
I argued for network ingress filtering as
a key first step back in
October 2003 in my paper
"Techniques for Cyber Attack Attribution" (IDA Paper P-3792),
as a relatively-easy way to make it easier to find attackers.
I'm guessing this can be done in the U.S. by regulation
(I would expect the FCC to be able to enforce this).
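To make the idea concrete, here's a minimal sketch (my illustration, not taken from BCP38 itself) of the check ingress filtering performs: an ISP edge router forwards a packet only if its source address falls within the prefixes actually assigned to that customer link. The customer prefix below is a hypothetical documentation-range placeholder.

```python
import ipaddress

# Hypothetical prefix assigned to one customer link (documentation range).
CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def permit_ingress(src_addr: str) -> bool:
    """Forward only packets whose source address lies within the
    customer's assigned prefixes; drop spoofed sources."""
    addr = ipaddress.ip_address(src_addr)
    return any(addr in net for net in CUSTOMER_PREFIXES)
```

Real routers implement this as access lists or unicast reverse-path forwarding checks, but the logic is this simple: a packet claiming a source address outside the assigned prefix never makes it upstream.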
- Secure Software Education.
Universities that take federal funding, and teach courses
on software development, must include education on
secure software design principles, as well as common types of implementation
vulnerabilities and how to counter them.
I have a freely-available
book and some
class materials
if you'd like!
- Fix the CFAA.
In the U.S., the Computer Fraud and Abuse Act (CFAA)
was enacted by Congress in 1986 as an amendment to existing computer fraud law
and has been revised many times since.
One serious problem is that it's extremely vague, leading to
criminal prosecutions that are unwarranted and harm the public.
For example,
it was used to persecute Aaron Swartz, who subsequently committed suicide.
Indeed, it's now being used to
persecute whistleblowers who report vulnerabilities, reducing
our security.
I mention the CFAA here not only because it's an unjust law
(it is), but because it's now interpreted as
criminalizing behavior we need to encourage.
The CFAA's problems are widely known, so I won't go into them here.
You can see many discussions and recommendations elsewhere such as the
EFF
computer crime law and the related
EFF CFAA reform page,
the article
"Fixing the Worst Law in Technology" by Tim Wu (March 18, 2013, New Yorker),
"Congress Has a Chance to Fix Its Bad 'Internet Crime' Law: It’ll probably blow it" by Justin Peters (2015), Slate, and
"It’s Time to Reform the Computer Fraud and Abuse Act" by James Hendler, Scientific American.
Then, require that all devices sold and all
revenue-producing services:
- No default passwords or certificates.
Devices must not have default passwords or default access certificates
that could be used to access them over the wider Internet and
that anyone else could determine.
Rationale:
There are many alternative ways to control access.
A simple alternative is to require that devices, when they first turn on,
can only communicate on a LAN, and then the user can set a password.
If users forget the password, they can force the device to reset - but that
would require physical access, and then again the device couldn't access
anything beyond its local LAN until it had its password reset.
Similarly, devices with a single backdoor password are dangerous - whoever
learns the password controls all of them.
There's a recent California law that goes this way.
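The LAN-only-until-configured approach described above can be sketched in a few lines (a hypothetical illustration of mine, using Python's standard ipaddress module): until a user sets a password, the device accepts connections only from private (local network) addresses, and there is no factory default password at all.

```python
import ipaddress

def allow_connection(src_addr: str, password_configured: bool) -> bool:
    """Before a user-chosen password is set, accept connections only
    from private (LAN) addresses; there is no factory default password."""
    if password_configured:
        return True
    return ipaddress.ip_address(src_addr).is_private
```

A physical reset would simply clear `password_configured`, returning the device to LAN-only operation rather than to a well-known default credential.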
- Minimum password complexity.
By default, enforce minimum password complexity (no more "password" or "1234")
so that password-guessing (even using smart systems) will take at least 50 years.
Minimum length is the main issue here;
NIST says they should be at least 8 characters.
Rationale:
Trivially-guessed passwords have essentially the same characteristics as
known passwords.
"By default" is important - you could disable this for special cases,
but if it requires extra work, most people will just make a better password.
Passwords have their own problems and risks, but they aren't going away soon -
so let's make sure that they're stronger.
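As an illustration (mine, with a tiny toy blocklist standing in for a real one), a default minimum password policy along these lines might look like:

```python
# Tiny illustrative blocklist; a real system would check against a large
# list of known-breached passwords, as NIST SP 800-63B recommends.
COMMON_PASSWORDS = {"password", "1234", "12345678", "qwerty", "admin"}

def acceptable_password(candidate: str, min_length: int = 8) -> bool:
    """Enforce the default minimum policy: adequate length and
    not a trivially-guessed common password."""
    if len(candidate) < min_length:
        return False
    return candidate.lower() not in COMMON_PASSWORDS
```

The key point is that this is the *default*; a special case could disable it, but most users would simply pick a better password.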
- Iterated salted hashes.
Store passwords (if any)
as at least iterated salted cryptographic hashes
if they are only used to verify incoming authentication requests.
Rationale:
This is widely-accepted minimum practice.
This way, even when data is extracted from a system,
determining each actual password will be slow
(because these are intentionally designed to be hard to reverse).
This is focused on incoming requests; if devices have to log into other
devices, they sometimes have to store information somewhere, and sometimes
it's hard to avoid storing passwords "in the clear" in those cases.
But we can at least counter problems where password data gets concentrated.
This is focused on passwords; some systems use asymmetric cryptography,
which doesn't have this problem (and so this rule wouldn't apply to them).
However, there are many reasons password-based systems
persist, so let's reduce their weaknesses.
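Here's a minimal sketch of this widely-accepted practice using PBKDF2 from Python's standard library (the iteration count and parameters are illustrative; a real system should follow current guidance, including memory-hard algorithms like Argon2 where available):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes, int]:
    """Return (salt, digest, iterations) for storage; the plaintext
    password is never stored."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations

def verify_password(password: str, salt: bytes, digest: bytes, iterations: int) -> bool:
    """Recompute the iterated salted hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Because the stored value is salted and deliberately slow to compute, an attacker who extracts the database still faces a long, per-password guessing effort.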
- Mandatory Updates.
For any internet-connectable device sale,
security vulnerabilities must be either fixed at no charge
for at least 3 years or the customer must be offered a refund.
By default these fixes must be automatically provided and installed over
the Internet if there is an Internet connection.
It must also be possible for a customer to delay or disable these updates
if the customer chooses to do so.
Rationale:
The big problem with devices (like Android phones)
isn't the number of known vulnerabilities
at their release, but the irresponsible behavior of the manufacturers
after release.
We require carmakers to do recalls in certain cases, and updating for
vulnerabilities is much cheaper; let's apply the same logic.
We could make a variant "at least 3 years, and at least 7 years
while there are at least 10,000 Internet-connected devices"... that way,
as long as there are many devices, manufacturers have to update them.
Updates create a risk, of course, since malicious organizations
(including malicious governments) could take over the supplier's systems
and send malicious updates.
However, while there is a risk that a malicious organization might
take over a supplier, there is a certainty that unintentional
vulnerabilities are included in components - so for most people, automatic
updates are the safer course.
That risk tradeoff is not true for everyone, but as long as people can
intentionally opt out of the updates, they will not be forced to use them.
- Escrow software.
All such devices' software source code must be provided in escrow to some
independent third party, and it will
be released as open source software (or will simply lose all copyright and
patent protections) if the company goes out of business,
fails to honor returns, or otherwise refuses to support the software
as required.
This helps counter a common loophole: some companies sell products, make money,
and then immediately go out of business - to be replaced by another company
created by the same people and doing the same things.
If an organization won't support the software, then the software should be
provided to others so that they can support it, and the company will suddenly
lose all the exclusive rights to it.
This isn't a perfect countermeasure, but when customers are abandoned they
should be provided some mechanism that enables self-support.
The purpose of copyright and patent law in the US is, per the Constitution, to
"promote the progress of science and useful arts" - if copyright and
patents prevent progress, then they must be voided.
- No unencrypted services.
Any internet-connectable device must not, by default, provide an
unencrypted service (like telnet or unencrypted HTTP) to other systems.
Sometimes companies may need to provide such a service, but at least make
sure they have to enable it - not have it on by default.
- Eliminate unencrypted Internet communication.
All communication over the Internet must be cryptographically
authenticated and encrypted within 10 years after passing this law.
Rationale:
This would counter many subversions and mass surveillance.
This one is extremely challenging, so we need to provide a significant
amount of time so people can implement it.
This may be too harsh, or maybe the time period needs to be longer -
but let's talk about it.
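As a small illustration of the client side of this rule (my sketch, not a complete answer to the mandate), Python's standard ssl module can require authenticated, encrypted connections by default:

```python
import socket
import ssl

def make_secure_context() -> ssl.SSLContext:
    """A TLS context that verifies server certificates and hostnames."""
    return ssl.create_default_context()

def open_encrypted(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open an authenticated, encrypted connection; plaintext is never used."""
    raw = socket.create_connection((host, port))
    return make_secure_context().wrap_socket(raw, server_hostname=host)
```

The hard part of the mandate isn't this client code; it's migrating every legacy protocol and device to authenticated, encrypted channels, which is why a long phase-in period would be needed.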
A service is revenue-producing if it accepts money from any source, including
users, customers, students, and advertisers.
So yes, Facebook is revenue-producing.
I'm sure there are more potential laws.
I focus on passwords because that's one area where the problem is especially
obvious.
Someday we may get rid of passwords, but today doesn't look like that day,
and tomorrow isn't looking good either.
We need to make sure that laws don't inhibit experimentation/innovation.
They need to support the use and modification of open source software, too.
But it seems to me that by carefully focusing on the main problems, we can
avoid that.
Laws can be useless, or create problems worse than they solve.
The solution isn't no laws; the solution is to work together, honestly,
to create laws that are clear and focus specifically on the key problems.
Laws cannot by themselves solve the problem, of course.
"Fixing the
IoT isn't going to be easy" by Matthew Garrett (2016-10-21)
explains why a simple one-sentence law or two will not be enough.
Obviously criminals will typically not obey the law (surprise!).
But while I agree with Garrett that this will not be easy, I
think some of the problems he raises (e.g., companies going out of business
to avoid support costs) can be countered.
In the longer term there will need to be international agreements -
but I think it's better to try to hammer out local laws, to make sure they
work, before we try to make them international laws.
We can use laws, as well as technology, to make attacks less dangerous...
and that seems like an approach worth taking.
Feel free to see my home page at
https://dwheeler.com.
You may also want to look at my paper
Why OSS/FS? Look at
the Numbers! and my book on
how to develop
secure programs.
(C) Copyright 2015 David A. Wheeler.
Released under
Creative Commons
Attribution-ShareAlike version 3.0 or later
(CC-BY-SA-3.0+).
As with everything else on my personal site,
this is not endorsed by my employer, government, or
guinea pig.
I do hope that it'll be a seed for some improvements, though.