A "Secure Programming" interview Today, we have the interview of David A. Wheeler. As you're going to see it in his interview, David will give a secure programming presentation during FOSDEM. You can discuss this interview in the corresponding forum
Raphaël Bauduin - Please introduce yourself :-)
David A. Wheeler -
I'm an American, born in 1965,
and I've been developing software since about 1977.
Professionally, I've always been interested in high-risk or
large software systems, including their security.
I've published two books on paper, one on software inspections
(published by the IEEE) and one on the Ada programming
language (published by Springer-Verlag).
I try to keep up with many different technologies; I know
dozens of computer languages, many operating systems,
security technologies, and so on.
My website at https://dwheeler.com
has more information about me, along with some of my projects and papers.
These include ``Why OSS/FS? Look at the Numbers!'', the
``Secure Programming for Linux and Unix HOWTO'',
flawfinder (a source code scanner that looks for potential security flaws),
``More than a Gigabuck: Estimating GNU/Linux's Size'', and
SLOCCount (a program that counts source lines of code).
RB - You're the author of a book about secure Unix programming. Can you tell
us a little more about it?
(e.g., when you started writing it, who contributed, how many downloads,
how many translations, whether it's available as a printed book...)
DAW - I first released the "Secure Programming for Linux and Unix HOWTO"
in November 1999, and I've been continuously adding
to it ever since. The document is entirely my own writing, though
many people have helped me by pointing out problems in the text
(in essence, the world is my editor).
I should hasten to add, though, that I didn't just think up these
guidelines on my own. This book is primarily a summary of
previously-known issues in writing secure programs.
The reason that I wrote the book was that I wanted software developers
to write software with better security, and I believe that one of the
main reasons developers write insecure software is because
they don't know how.
Although people in the security community
knew about the issues I've written about,
most software developers didn't know about them.
In many cases the information was only available in obscure
sources, or in locations dedicated to showing how to use
various weaknesses to break a program. Most programmers don't
need to know the details on how to exploit a weakness
(e.g., buffer overflows); they just need to know how to
write programs that don't have those weaknesses.
My book aims to be a ``one-stop shop'' for programmers of
Linux or Unix systems; it won't tell you how to exploit
weaknesses, but it will help you avoid creating them.
To be honest, I have no idea how many people have downloaded the book.
Just in October 2001, I had 2,746 hits on the front page at my site
(https://dwheeler.com/secure-programs), but some people
jump to particular chapters directly. Even more importantly,
there are many other ways to get the book that I can't account for,
and I believe most readers get the book that way.
Many people get the book from the Linux Documentation Project (LDP)
website, and the book is included in many GNU/Linux distributions
(including Red Hat's).
I've often considered publishing it on paper (I've done it before),
and may eventually do so.
RB - What was your motivation for putting it online under the GFDL?
DAW - My overall motivation is that I wanted people to write secure
programs for GNU/Linux and Unix, so I wanted to do the following:
- Make it easy for developers to get this information.
Making the document freely available and distributable really supports this.
- Make it easy for developers to learn this information.
This primarily means that I need to write clearly, but by making
it easy for people to submit error reports, I can get rid of errors
or fix unclear areas.
- Make it easy to maintain the information (as new types of
weaknesses or solutions are found).
In particular, I wanted to make it possible for someone else to
maintain this information if I'm hit by a bus;
the GFDL makes it possible for someone else to pick up the work
and run with it.
RB - Do you think Free and Open Source Software is generally more secure than
proprietary software? Why? Can you give examples?
DAW - I think open source software / free software (OSS/FS) has
some advantages for security, but it's not as simple as
``OSS/FS is always more secure.''
My on-line book on writing secure software
includes a discussion about OSS/FS and security.
A program that was originally closed source and is later made OSS/FS
will probably start less secure for its users, because its vulnerabilities
will be even more exposed.
Over time, though, I think OSS/FS programs
have the potential to be much more secure than closed programs.
Just making a program open source doesn't suddenly make it secure,
and keeping an open source program secure is not guaranteed either.
If a program is developed from the outset as OSS/FS, the time it takes to
winnow out the vulnerabilities depends on whether there are
multiple developers looking over the code who know how to write secure
programs and are actively working to fix the problems they find.
Here are some preconditions for when OSS/FS will tend to be more secure:
- First, people have to actually review the code.
All sorts of factors can reduce the amount of review:
being a niche or rarely-used product (where there are few potential
reviewers), having few developers
(the reviewers with the most incentive tend to be people
trying to modify the program),
and the use of a rarely-used computer language.
The more developers there are looking over the code, generally the more
likely they are to see problems - this is often called the
``many eyeballs'' theory.
If the code isn't open source, or uses an unusually asymmetric license
like the NPL, that's also a disincentive -
people are less likely to voluntarily participate
if someone else will have rights to their results
that they don't have (as Bruce Perens says,
``who wants to be someone else's unpaid employee?'').
- Second, the people developing and reviewing the code
must know how to write secure programs.
Hopefully my book will help.
Clearly, it doesn't matter if
there are ``many eyeballs'' if none of the eyeballs know what to look for.
- Third, once found, these problems need to be fixed quickly
and their fixes distributed.
Open source systems tend to fix the problems quickly,
but the distribution is not always smooth.
If developers work to develop a secure program, using well-known
techniques and guidelines, then they'll produce a program that's
more secure than one developed without them - whether it's OSS/FS or not.
You have to know and apply the knowledge of how to develop
secure programs.
Unfortunately, in many markets there's little incentive for
proprietary developers to develop truly secure programs, and proprietary
developers don't have to undergo public scrutiny of their code as
OSS/FS developers do.
Ultimately, the proof is in the pudding.
I've collected
quantitative evidence showing that OSS/FS is often more secure in my
``Look at the Numbers'' paper.
For example,
Windows web sites are defaced disproportionately often - more than
their market share would explain - and an insurance company charges 5-15% more
for ``hacker insurance'' on Windows systems.
There's a lot of quantitative data from CERT, Bugtraq, and others
that suggests that OSS/FS tends to have better security.
RB - You wrote a document about Java Security. What do you think about the
security in Java? Is it easier to write a secure
program in Java than in C?
DAW - It's hard to claim that a particular language is in all ways
better than another one; it's easier to discuss their strengths
and weaknesses.
In one sense it's easier to write a secure program in just about
any language OTHER than the C family (C, C++, and Objective-C), because
the C family doesn't protect against buffer overflows.
Practically all other non-assembly computer languages protect
against buffer overflows.
The one nice thing about C is that people can easily understand
exactly what is happening; some languages make it difficult to
understand what the computer is doing, and sometimes those
lower-level details can be exploited.
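To make the buffer overflow point concrete, here is a minimal C sketch (my own
illustration, not from the interview or the HOWTO; the function names are
hypothetical): an unbounded copy into a fixed-size buffer, next to a bounded
alternative.

  #include <stdio.h>
  #include <string.h>

  /* Vulnerable: strcpy() performs no bounds check, so any input longer
   * than 63 bytes writes past the end of buf -- a buffer overflow. */
  static void greet_unsafe(const char *name) {
      char buf[64];
      strcpy(buf, name);
      printf("Hello, %s\n", buf);
  }

  /* Safer: pass the destination size; snprintf() truncates instead of
   * overflowing, and always NUL-terminates the result. */
  static void greet_safe(const char *name) {
      char buf[64];
      snprintf(buf, sizeof buf, "Hello, %s", name);
      puts(buf);
  }

  int main(int argc, char **argv) {
      const char *input = (argc > 1) ? argv[1] : "world";
      greet_safe(input);      /* safe for input of any length */
      /* greet_unsafe(input); -- would corrupt memory for a long argv[1] */
      return 0;
  }

Languages with automatic bounds checking simply refuse the first behaviour,
which is what ``protect against buffer overflows'' means above.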
Java has a number of interesting mechanisms so that users can run
Java programs they don't fully trust; in that sense Java is unusual
and has some nice security properties.
However, nowadays many Java programs are written to run on a server
(responding to clients) - in that environment, Java's unusual abilities
are basically irrelevant, and Java is no better (from a security
point-of-view) than most other languages.
The real problem with Java is that Sun has reneged on earlier promises
to help make Java a nonproprietary standard, even after people invested
billions of dollars based on Sun's now-broken promises.
Java's design is still completely controlled by Sun.
I have a dim view of proprietary languages; the industry finally
got away from most of those long ago and we really don't need
those problems again. Granted, Visual Basic's situation is even
worse, but it's a matter of degree - nearly all other widely-used
languages are formally standardized and/or have an open source
implementation as their definition.
PHP is another interesting case: it currently has a horrifically bad
security default - by default it ships with the configuration setting
``register_globals'' on.
This means that every variable's
initial value can be set by an attacker, and developers have to explicitly
reset variable values if they want to retain control over them.
This doesn't mean that you can't write secure programs
in PHP - it just means as PHP programs get large, it becomes increasingly
hard for mere mortals to make secure PHP programs.
This is unfortunate - PHP itself is fine in general, it's just this default
that's the problem. I'm glad to say that as of PHP version 4.1.0,
the PHP developers have made it much easier to write PHP programs
when register_globals is off. More importantly, they have announced that
a future version of PHP will set ``register_globals'' to off by default.
If you set register_globals off, PHP is a perfectly reasonable
programming language for secure programs.
Some languages make it easier to write secure programs
than others, so where it makes sense you should prefer tools that
make such errors less likely.
However, the language isn't the primary issue;
you can write insecure code in any language, and you can write
secure code in just about any language (though in some cases it
may be harder).
To discuss the content of this interview, you can go to this forum.