David A. Wheeler's Blog

Sat, 25 May 2019

GitHub Maintainer Security Advisories

GitHub just made a change that I think will make a big improvement to the security of open source software (OSS). It’s now possible to privately report vulnerabilities to OSS projects on GitHub via maintainer security advisories! This wasn’t possible before, and you can blame me (in part), because I’m the one who got this ball rolling. I also want to give a big congrats to the GitHub team, who actually made it happen.

Here are some details, in case you’re curious.

As you probably know, there are more OSS projects on GitHub than on any other hosting service. However, until now there was no way to privately report security vulnerabilities to the OSS projects hosted there. It’s hard to fault GitHub too much (they’re providing a service for free!), yet because so much software is maintained on GitHub this led to widespread problems in reporting and handling vulnerabilities. It could be worked around, but it was a long-standing systemic problem with GitHub.

Why is this a problem? In a word: attackers. Ideally software would have no defects, including vulnerabilities. Since vulnerabilities can harm users, developers should certainly be using a variety of techniques to limit the number and impact of vulnerabilities in the software they develop. If you’re developing OSS, a great way to see if you’re doing that (and to show others) is to get a CII Best Practices badge from the Linux Foundation’s Core Infrastructure Initiative (I lead this effort). But mistakes sometimes happen, no matter what you do, so you need to be prepared for them. It’s hard to respond to vulnerability reports if it’s hard to receive those reports or discuss them within a project. Of course, a project still needs to rapidly fix a vulnerability once it is reported, but we need to make that first step easy.

In September 2018 I went to a meeting at Harvard to discuss OSS security (in support of the Linux Foundation). There I met Devon Zuegel, who was helping Microsoft with their recently-announced acquisition of GitHub. I explained the problem to her, and she agreed that it needed to be fixed. She shared it with Nat Friedman (who was expected to become the GitHub CEO), who also agreed that it made sense. They couldn’t do anything until the acquisition was complete, but they planned to make the change once it was. The acquisition did complete, so the obvious question is: did they make the change? Well…

I am very happy to report that GitHub has just announced the beta release of maintainer security advisories, which allow people to privately report vulnerabilities without immediately alerting every attacker out there. My sincere thanks to Devon Zuegel, Nat Friedman, and the entire team of developers at GitHub for making this happen.

This seems to be part of a larger effort by GitHub to support security (including for OSS). GitHub’s security alerts make it easy for GitHub-hosted projects to learn about vulnerable dependencies (that is, a version of a software component that you depend on but is vulnerable).

It’s easy to get discouraged about software security, because the vulnerabilities keep happening. Part of the problem is that most software developers know very little about developing secure software. After all, almost no one is teaching them how to do it (I teach a graduate class at George Mason University to try to counter that problem). I hope that over time more developers will learn how to do it. I also hope that more and more developers will use tools that help them create secure software, such as my flawfinder and Railroader tools. Tools can’t replace knowledge, but they are a necessary piece of the puzzle; putting tools into a CI/CD pipeline (and an auditing process if you can afford one) can eliminate a vast number of problems.

These changes show that it is possible to make systemic changes to improve security. Let’s keep at it!

path: /oss | Current Weblog | permanent link to this entry

Fri, 10 May 2019

The year of Linux on the desktop

For those who know their computer history, wild things are going on regarding Linux this year.

Linux is already in widespread use. For years the vast majority of smartphones have run Android, and Android runs on Linux, so most smartphones run on Linux. As of November 2018, 100% of the top 500 supercomputers worldwide run Linux. Best estimates for servers running Linux are around 66.7%, and Linux is widely used in the cloud and in embedded devices.

But something different is going on in 2019. All Chromebooks are also going to be Linux laptops going forward. Later this year Microsoft will include the Linux kernel as a component in Windows. In a sense, 2019 is the year of the Linux desktop. It did not arrive the way it was envisioned in the past, but perhaps that’s what makes it most interesting. No, it does not mean that everyone is interacting directly with Linux as their main laptop OS, so you can certainly argue that this doesn’t count. But increasingly that measurement matters less; people today access computers via browsers, not the underlying OS, and those systems are often running and/or developed using Linux.

path: /oss | Current Weblog | permanent link to this entry

Wed, 10 Apr 2019

Subversion of bootstrap-sass

A malicious backdoor has been found in the popular open source software library bootstrap-sass. Its impact was limited - but the next attack might not be. Thankfully, there are things we can learn and do to reduce those risks… but that requires people to think them through.

See my essay Subversion of bootstrap-sass for more about that!

path: /oss | Current Weblog | permanent link to this entry

Tue, 26 Mar 2019

Assurance cases

No one thing creates secure software, so you need to do a set of things to make adequately secure software. But no one has infinite resources; how can you have confidence that you are doing the right set? Many experts (including me) have recommended creating an assurance case to connect the various approaches into an efficient, cohesive whole. It can be hard to start an assurance case, though, because there are few public examples.

So I am pleased to report that you can now freely get my paper A Sample Security Assurance Case Pattern by David A. Wheeler, December 2018. This paper discusses how to create secure software by applying an assurance case, and uses the Badge Application’s assurance case as an example. If you are trying to create a secure application, I hope you will find it useful.

path: /security | Current Weblog | permanent link to this entry

Sat, 02 Mar 2019

Don’t Use ISO/IEC 14977 Extended Backus-Naur Form (EBNF)

Sometimes people want to do something, find a standard, and do not realize the downsides of using that standard. I have an essay in that genre titled Don’t Use ISO/IEC 14977 Extended Backus-Naur Form (EBNF). The problem is that although there is an ISO/IEC 14977:1996 specification, in most cases you should not use it. If you have to write a specification for a programming language or complex data structure, please take a look at why I think that!

path: /misc | Current Weblog | permanent link to this entry

Sat, 09 Feb 2019

Railroader: Security static analysis tool for Ruby on Rails (Brakeman fork)

I’ve kicked off the Railroader project to maintain a security static analysis tool for Ruby on Rails that is open source software. If you are developing with Ruby on Rails, please consider using Railroader. We would also really love contributions, so please contribute!

A security static analysis tool (analyzer) examines software to help you identify vulnerabilities (without running the possibly-vulnerable program). This helps you find and fix vulnerabilities before you field your web application. Ruby on Rails is a popular framework for developing web applications; sites that use Rails include GitHub, Airbnb, Bloomberg, Soundcloud, Groupon, Indiegogo, Kickstarter, Scribd, MyFitnessPal, Shopify, Urban Dictionary, Twitch.tv, GitLab, and the Core Infrastructure Initiative (CII) Best Practices Badge.
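Railroader itself performs far more sophisticated analysis of Rails code, but the core idea of static analysis can be shown with a toy sketch. Here is a hypothetical Python example (my own illustration, not how Railroader works) that flags calls to the dangerous `eval` function without ever running the program:

```python
import ast

def find_eval_calls(source):
    """Return the line numbers of calls to eval() in Python source text.

    The source is parsed into a syntax tree and inspected; the
    possibly-vulnerable code is never executed.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

code = "x = input()\nresult = eval(x)\n"  # eval of user input: dangerous
print(find_eval_calls(code))  # → [2]
```

A real analyzer adds data-flow tracking (does untrusted input actually reach the dangerous call?) to cut down false positives, but the analyze-without-executing principle is the same.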

In the past the obvious tool for this purpose was Brakeman. However, Brakeman has switched to the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 Public License (CC-BY-NC-SA-4.0). This is not an open source software license, since it forbids commercial use (an OSS license cannot discriminate against a field of endeavor). Similarly, it is not a free software license (since you cannot run the program as you wish / for any purpose). You can verify this by looking at the Brakeman 4.4.0 release announcement, the SPDX license list, Debian’s “The Debian Free Software Guidelines (DFSG) and Software Licenses”, Various Licenses and Comments about Them (Free Software Foundation), and Fedora’s Licensing:Main (Bad Licenses list). Railroader continues using the original licenses: MIT for code and CC-BY-3.0 for the website. MIT, of course, is a very well-known and widely-used open source software license.

If you are currently using Brakeman, do not update to Brakeman version 4.4.0 or later until you first talk with your lawyer. At the very least, if you plan to use newer versions of Brakeman, check their new license carefully to make sure that there is no possibility of a legal issue. This license change was part of a purchase of Brakeman by Synopsys. Synopsys is a big company, and they definitely have the resources to sue people who don’t obey their legal terms. Even if they didn’t, it is not okay to use software when you don’t have the right to do so. Either make sure that you have no legal issues… or just switch to Railroader, where nothing has changed.

Unfortunately, it is really easy to “just upgrade to the latest release” of Brakeman without realizing that this is a major license change. I suspect a lot of people will just automatically download and run the latest version, and have no idea that this is happening. I only noticed because I routinely use software license checkers (license_finder in my case) so that I immediately notice license changes in a newer version. I strongly recommend adding static source code analyzers and license checkers as part of your continuous integration (CI).
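license_finder is a Ruby tool; purely as a rough illustration of the idea in Python, you can inventory the licenses your installed packages declare and fail the build on anything outside an allowlist (the allowlist below is hypothetical, and a real policy needs legal review):

```python
from importlib.metadata import distributions

def license_inventory():
    """Map each installed Python distribution to its declared license."""
    inventory = {}
    for dist in distributions():
        name = dist.metadata.get("Name") or "unknown"
        # Packages that omit the License metadata field get "UNKNOWN".
        inventory[name] = dist.metadata.get("License") or "UNKNOWN"
    return inventory

# Hypothetical allowlist. Real checkers such as license_finder also
# inspect LICENSE files rather than trusting declared metadata alone.
ALLOWED = {"MIT", "BSD", "Apache-2.0", "UNKNOWN"}
flagged = sorted(name for name, lic in license_inventory().items()
                 if lic not in ALLOWED)
# In CI you would exit non-zero when `flagged` is non-empty, so a
# surprise license change breaks the build instead of slipping through.
```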

We assume that “Brakeman” is now a trademark of Synopsys, Inc., so we’ve tried to rename everything so that the projects are clearly distinct. If we’ve missed something, please let us know and we’ll fix it. The term “Railroader” is a play on the word Rails, but it is obviously a completely different word. Railroader historically shares a common code base with Brakeman, and that’s important to explain, but they are not the same project, and we are expressly trying not to infringe on any Brakeman trademark. It’s obviously legal to copy and modify materials licensed under the MIT and CC-BY-3.0 licenses (that’s the purpose of these licenses), so we believe there is no legal problem.

I think I have a reasonable background for starting this project. I created, and have maintained since 2001, flawfinder, a security static analysis tool for C/C++. I literally wrote the book on developing secure software; see my book Secure Programming HOWTO. I even teach a graduate class at George Mason University (GMU) on how to develop secure software. For an example of how I approach securing software in an affordable way, see my video How to Develop Secure Applications: The BadgeApp Example (2017-09-18) or the related document BadgeApp Security: Its Assurance Case. I have also long analyzed software licenses, e.g., see The Free-Libre / Open Source Software (FLOSS) License Slide, Free-Libre / Open Source Software (FLOSS) is Commercial Software, and Publicly Releasing Open Source Software Developed for the U.S. Government.

While Railroader is a project fork, we hope that this is not a hostile fork. We will not accept software licensed only under CC-BY-NC-SA-4.0, since that is not an OSS license. But we’ll gladly accept good contributions from anyone if they are released under the original OSS licenses (MIT for software, CC-BY-3.0 for website content). If the Brakeman project wants to cooperate in some way, we’d love to talk! We are all united in our desire to squash vulnerabilities before they are deployed. In addition, we’re grateful for all the work that the Brakeman community has done.

So, again: If you are developing with Ruby on Rails, please consider using Railroader. We would also really love contributions, so please contribute!

path: /oss | Current Weblog | permanent link to this entry

Mon, 19 Nov 2018

Get your CII best practices badge!

Are you developing open source software (OSS)? Selecting some? If you’re developing OSS, earn a best practices badge from the Linux Foundation Core Infrastructure Initiative (CII). If you’re selecting OSS, prefer OSS that has earned a badge. The badge shows that the project is applying the best practices for today’s projects. Check out this short YouTube video summary or the CII Best Practices badge website.

path: /oss | Current Weblog | permanent link to this entry

Mon, 29 Oct 2018

The allsome quantifier

For over 100 years formal logic has routinely represented ideas like “all Xs are Ys” using something called the “for all quantifier”, abbreviated as ∀. This lets people mathematically represent statements like “All Martians are green”. This is really important today, because these mathematical statements can be used to determine if systems work correctly; in some cases this can be used to save lives.

However, there is a problem. It is easy to mistranslate informal statements into formal logic, and these errors can cause serious problems. For example, in formal logic, the normal way to represent the statements “All Martians are green” and “All Martians are not green” can be simultaneously true (namely, when there are no Martians).

Informal statements with this format often embed the assumption that the situation occurs (that is, that there is at least one Martian). This mismatch between formal logic and informal statements could lead to problems, and conceivably to deaths.

I propose a small solution - a new formal logic quantifier called “allsome” (aka “all some”) that is designed to make some of these mistranslations less likely. The allsome quantifier, abbreviated ∀!, simultaneously expresses for all (∀) and there exists (∃) in a way that models many informal statements. I hope that this new quantifier will reduce the risk of mistranslations of informal statements into formal expressions, and that others will eventually agree that allsome is awesome.
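The vacuous-truth problem, and the proposed fix, can be demonstrated directly. Python’s all() over an empty sequence is true, mirroring classical ∀; the helper names below are mine, purely for illustration:

```python
def forall(domain, pred):
    """Classical ∀: true for an empty domain (vacuous truth)."""
    return all(pred(x) for x in domain)

def allsome(domain, pred):
    """∀! ("allsome"): every element satisfies pred AND the domain
    is non-empty, combining ∀ and ∃ in one quantifier."""
    items = list(domain)
    return len(items) > 0 and all(pred(x) for x in items)

martians = []  # there are no Martians
is_green = lambda m: m == "green"
is_not_green = lambda m: m != "green"

print(forall(martians, is_green))      # True  (vacuously)
print(forall(martians, is_not_green))  # True  (also vacuously!)
print(allsome(martians, is_green))     # False (no Martian exists)
```

With ∀ alone, the two contradictory-sounding statements are simultaneously true over the empty domain; with ∀! the hidden existence assumption is made explicit.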

For more information, see The allsome quantifier.

path: /misc | Current Weblog | permanent link to this entry

Tue, 23 Oct 2018

Do not install or develop mobile apps unless you have to

If you are thinking about installing another mobile app on your smartphone, or thinking about developing a mobile app, I have a simple recommendation: Don’t do it unless you must do it to get the capability you want. In many cases, using or developing just a web application is the better choice.

First, if you install a mobile application, that application often has far more access to your information than you want it to have. What’s more, many of the organizations developing mobile apps have a business model that involves subtly collecting as much personal data about you as possible and selling it to others. Smartphone operating systems have security mechanisms that try to reduce the impact of these problems by providing some control over what an application can access, and that’s a good thing. However, it’s hard to stem the tide, because smartphone operating systems have direct access to a lot of data about you: your location, fixed personal identifiers like your cell phone number, and often information such as your list of contacts. There are cases where this data genuinely needs to be shared, so in the name of “making things easy” these mobile applications often end up with far more privileges than they should have.

In contrast, web browsers have long had to counter web applications that try to extract data from you. They certainly do not prevent all problems, but they are designed so that they do not give away your location, cell phone number, or contact list so easily. Many services are perfectly workable through web browsers instead of mobile apps, at least for typical uses. This includes sites such as YouTube, Facebook, and Twitter. In addition, once you stop installing those apps, you will have room for the applications that really do need to be mobile apps… and thus will not need to upgrade your smartphone as often.

And if you’re thinking about developing a mobile app: Don’t do it, at least without seeing if there’s a viable alternative. For many situations, creating mobile applications is a huge waste of money. The United Kingdom essentially banned the development of mobile applications, noting that mobile applications are “very expensive to produce, and they’re very very expensive to maintain because you have to keep updating them when there are software changes.” A related problem is that if you develop mobile apps you typically have to write at least three versions of the software: an iOS mobile app, an Android mobile app, and a web application so people can use large screens. If you’re a commercial organization, a mobile application can not only be costly to develop (in part because you will have to develop the application multiple times), but in practice you’ll have to pay a large cut of your revenue to Google and Apple (typically 15% of your revenue). Are you really sure you want to spend or lose that much money?

What’s more, web applications can increasingly be used instead of mobile/native applications, even in cases where it was impossible at one time. I should first note that I’m using an expansive definition of “web application” here - I just mean anything you can access using a web browser. Historically, the main reason you needed to create a mobile or native application was because you needed the application to work offline. However, service workers now make it practical to create many offline applications and are widely available. Internet Explorer (IE) is the only major browser that does not support service workers, but people who have IE can easily use or install a modern web browser instead (such as Firefox, Chrome, or Edge). There are other ways to develop offline web applications too. Another reason to create mobile or native applications was speed, but recent JavaScript optimization work has made web applications much faster. In addition, WebAssembly makes it possible to create some applications that run far faster and is supported by Chrome, Firefox, Safari, and Edge. When developing web applications to handle all devices you need to use responsive web design, but that is already good practice and widely supported. Since April 2015 Google’s search engine has penalized web sites that are not mobile-friendly; this really is not anything new. Users can easily bookmark your web application to get there later, too.

There are many advantages of just developing a web application. There are a huge number of tools already developed to help you develop web applications, for one thing. Many of those tools are open source software and no-cost. A large number of people already know how to develop web applications, too. Perhaps most importantly, web applications are standards-based; you are not locked into any one organization’s ecosystem.

Most obviously: If you are already going to develop a web application, maybe you don’t need to write two more versions of the same software and try to maintain them too. There are tools that let you try to write software that ports between iOS and Android, but software you do not need to write at all is the easiest to maintain.

Do you need to create a vanity mobile app for your conference so that attendees can see what talks are where? Often the answer is no. Do you need a mobile app so that people can fill in a simple form (such as for government services)? Again, the answer is often no.

Of course, there are perfectly valid reasons to create a mobile application (or other kind of native application) for end-users, and that means there are good reasons to install a mobile app. Some applications require access to specialized device services that are not accessible from a web browser, or have speed requirements beyond what a web browser can currently provide. In addition, some apps are older and are not likely to be rewritten. But today I find that many people are not thinking about the alternatives, and ignoring alternatives is a mistake. Before installing or creating (multiple) mobile applications, ask yourself: do I need to? If you don’t, and can use or create a web application instead, you might be able to save yourself a lot of trouble.

path: /misc | Current Weblog | permanent link to this entry

Wed, 26 Sep 2018

Removing www prefix on this website due to Chrome nonsense

The Chrome web browser is starting to lie to users about the URL they are viewing, for example, by removing a leading “www.”. I do not like this, because I think it is dangerous to tell people incorrect information. I’ve had a website for a very long time, and when I started it, the recommended practice was to use a “www.” prefix for websites… so that is what I have been doing. Since I cannot prevent Chrome from lying, and I want people to accurately see the name of the website they are looking at, I’m going to switch this site’s standard name to “dwheeler.com” instead of “www.dwheeler.com”. That will not fix the problem for other sites, but it will fix it for this one. I will set up a redirection and slowly switch some links to the new name.
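The redirection itself is a one-time server configuration change. The actual configuration of dwheeler.com is not public, so purely as an illustration, a permanent redirect from the old “www.” name to the bare domain could look like this in nginx:

```nginx
# Hypothetical nginx sketch: permanently redirect www.dwheeler.com to
# dwheeler.com, preserving the requested path and query string.
server {
    listen 80;
    listen 443 ssl;
    server_name www.dwheeler.com;
    # 301 tells browsers and search engines that the move is permanent,
    # so old bookmarks and links keep working.
    return 301 $scheme://dwheeler.com$request_uri;
}
```

(A real deployment would also need `ssl_certificate` directives for the 443 listener; they are omitted here.)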

path: /website | Current Weblog | permanent link to this entry

Thu, 16 Aug 2018

Verified voting still necessary, paperless voting still untrustworthy

In 2006 I wrote “Direct Recording Electronic (DRE) Voting: Why Your Vote Doesn’t Matter”. Over a decade later, voting systems are still being used that are fundamentally insecure - though things are better in some places.

First, the basics. If a voting system uses anything other than voter-verified paper to record votes, then that voting system is not secure. Paper does not automatically make a voting system secure, but a system that does not use voter-verified paper cannot be secure. Verified voting using paper ballots is a minimum requirement for a trustworthy voting system. Direct recording electronic (DRE) and mobile phone voting systems cannot be adequately secured for elections to government positions. These insecure systems are simply invitations for vote tampering.

The article “Why US elections remain ‘dangerously vulnerable’ to cyber-attacks” discusses some of the reasons why many of the US voting systems are fundamentally untrustworthy. One quote: “Georgia’s election officials continue to defend the state’s electronic voting system that is demonstrably unreliable and insecure, and have repeatedly refused to take administrative, regulatory or legislative action to address the election security failures.” Another quote: “there is little mystery about the safest available voting technology - optically scanned paper ballots, now used by about 80% of US voters. Some of the states that don’t have this technology, like Louisiana, would like it but don’t have the funds to switch. Others, like Georgia and South Carolina, simply aren’t interested in ditching their all-electronic systems despite the compelling reasons to do so.”

“West Virginia to introduce mobile phone voting for midterm elections” by Donie O’Sullivan discusses West Virginia’s introduction of mobile phone voting. Does this require a paper ballot? No. Therefore, West Virginia’s proposed voting system is horrifically insecure, and if implemented its results will be completely untrustworthy.

XKCD’s “Voting Software” is a funny summary. In short: experts on computer security agree that computers must not be directly used for voting when there are important stakes (such as a vote for a political office). When experts say “you cannot adequately trust the systems we build” you should believe the experts.

As I noted earlier, “I used to do magic tricks, and all magic tricks work the same way - misdirect the viewer, so that what they think they see is not the same as reality. Many magic tricks depend on rigged props, where what you see is NOT the whole story. DREs are the ultimate illusion - the naive think they know what’s happening, but in fact they have no way to know what’s really going on.”

I am sure that some election officials will bristle when told that we cannot trust the legitimacy of their results. Too bad. If your election system uses technology that is widely known to be easily subverted, such as voting machines that do not use voter-verified paper ballots, then your results should be viewed with deep suspicion. Without voter-verified paper ballots there is no way to independently verify vote counts, so there is no reason to trust the results. This is old information; those who have not replaced insecure systems are those who have failed to act. Some states certify or approve the use of voting machines without voter-verified paper ballots, but that just shows that their certification or approval processes fail to provide even a minimum level of security.

There is more to protecting the legitimacy of votes, of course. For example, it is critical to ensure that only eligible voters can vote, that voters can vote at most once, and that paper votes cannot be added or removed. But currently many districts are not doing the minimum necessary to have trustworthy election results, and we need to get systems up to minimal standards.

There is an old phrase: “It’s not the people who vote that count. It’s the people who count the votes.” Stalin did not say that exactly, but he did say something like it. The point is that if we do not adequately protect the process of counting votes, then the vote counts are vulnerable to manipulation.

The Voting system principles from Verified Voting provide a useful starting list of requirements; there are other guides too. Voting systems that fail to meet those principles are untrustworthy toys that should not be used for real elections. It is fine to use direct recording electronic equipment, mobile phone voting, or other insecure systems when you are voting for homecoming queen or deciding where to go to lunch. But it is time to stop using fundamentally flawed voting systems like these for elections that matter.

path: /security | Current Weblog | permanent link to this entry

Tue, 24 Jul 2018

Email encryption is here! Use STARTTLS everywhere!

Historically most email has been unencrypted, and that has a serious flaw: unencrypted email can be read and modified by anyone between the sender and final receiver. Tools to do “end-to-end” encryption of email (to prevent reading and/or modifying it) have been available for decades, but they are often hard to use by “normal” users.

Thankfully, there’s been work to significantly improve email security. In particular, STARTTLS email encryption is now widely supported, and the Electronic Frontier Foundation “STARTTLS Everywhere” initiative is working to get everyone to support STARTTLS in their email systems. Therefore, check whether your organization’s email system properly supports STARTTLS on incoming email, and if not, complain to get it fixed.

STARTTLS is not perfect, as I’ll discuss below. My point is that it’s way more secure than most email without it, because it improves security without requiring end-users to do anything. Below is additional information that I think you’ll find interesting.

First, here’s how STARTTLS works. Email is transmitted by a series of “hops”; if the hop recipient supports STARTTLS, email is automatically encrypted on that hop as it goes through the infrastructure, without requiring email users to do anything special. That ease-of-use is a big deal - users normally do whatever is the default, so if the default is secure, then users will normally do the secure thing.
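To make the hop-by-hop behavior concrete, here is a minimal Python sketch of one sending hop using the standard library’s smtplib (the host names and addresses are placeholders, and real mail servers handle many more details, such as retries and authentication):

```python
import smtplib
import ssl

def send_one_hop(host, sender, recipient, message, port=587):
    """Deliver mail to the next hop, upgrading to TLS when offered."""
    context = ssl.create_default_context()  # verifies server certificates
    with smtplib.SMTP(host, port) as server:
        server.ehlo()
        if server.has_extn("starttls"):       # does this hop offer STARTTLS?
            server.starttls(context=context)  # upgrade the plaintext channel
            server.ehlo()                     # re-greet over the encrypted link
        # Without STARTTLS, everything from here on travels in the clear.
        server.sendmail(sender, recipient, message)
```

Note that the decision is made per hop: the sending side encrypts when the receiving side advertises support, and users never see any of it, which is exactly why adoption has been so broad.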

Lots of organizations now support STARTTLS. Google reports that by 2018-07-24 90% of its incoming email, and 90% of outgoing email, was encrypted using STARTTLS (“Email encryption in transit”). Many email services support STARTTLS, including Gmail, Yahoo.com, Outlook.com, and runbox.com. (This includes the top email services.) Many other organizations support STARTTLS, including Google, Microsoft, Bank of America, The American Red Cross, The Salvation Army, The Software Engineering Institute (SEI), Carnegie Mellon University (CMU), and University of California, Berkeley. I give this list to show that there are many different kinds of organizations that support STARTTLS. The STARTTLS Policy List has an incomplete list of organizations known to be supporting STARTTLS.

The Electronic Frontier Foundation “STARTTLS Everywhere” initiative is an effort to get lagging organizations to support STARTTLS. As I noted earlier, you should use their tools to see if your organization properly supports STARTTLS on its incoming emails, and if not, complain to get that fixed.

There are some historical problems with STARTTLS deployment, and the STARTTLS Everywhere project is working to fix them.

STARTTLS is not an end-to-end encryption system. STARTTLS only encrypts while the email is being sent between systems (“hops”). That’s not all bad. For example, it means that receiving organizations can continue to examine the emails to check for viruses/malware, counter spam, and so on. But of course, there are downsides.

STARTTLS is, in general, not as strong as an end-to-end encryption system (from the point-of-view of providing confidentiality and integrity). For example, receiving organizations (and anyone who subverts their email system) can see and modify the email. Users who do not trust their email service providers should not depend on STARTTLS; they must use end-to-end encryption. In general, end-to-end encryption is stronger, so we should still work to make end-to-end email encryption easier to use and deploy. But for various reasons it’s hard to deploy end-to-end email encryption, and we’ve spent decades trying. Also, STARTTLS works just fine with end-to-end encryption.

Please indulge me: I think a small rant is appropriate here. There are some security specialists who think that only the perfect is acceptable. Nonsense! Requiring perfection is crazy. I think it is important, when creating and maintaining systems, to have an engineering mindset. In particular, you must always remember that choices have trade-offs. It is not possible to have no risk; an asteroid might land on your head tomorrow. It is not reasonable to demand that systems be used regardless of their difficulty or expense; we all have limited time and money. Security issues are real, and we do need to address them, but time, money, and ease-of-use also matter greatly.

Unlike most other systems, STARTTLS is completely automatic (end-users don’t have to do anything) once it is set up, it is not hard to set up, and it counters a large class of attacks. For almost all users, email encryption with STARTTLS is a major improvement over what they had before. Let’s keep working to deploy even better systems, but let’s take partial victories where we can get them.

path: /security | Current Weblog | permanent link to this entry

Thu, 23 Nov 2017

FCC Votes against the People and Net Neutrality: Freedom is Slavery

To the surprise of no one, the US FCC led by Ajit Pai has finally issued the order to kill net neutrality.

In short, the FCC is voting to directly harm the US people and instead aid the monopolist Internet Service Providers (ISPs). More information is on Tech Dirt.

This is inexcusable. Competition is often the best way to get good results, but for various reasons customers often cannot practically choose ISPs; in many cases they are essentially monopolies or duopolies. Where competition does not effectively exist, there must be regulation to prevent the monopolists from exploiting their customers, and the FCC has decided to expressly reject their duty to the people of the United States.

Orwell would be proud of the order’s name, “Restoring Internet Freedom”. Remember, Freedom is Slavery!

I’m sure we have not heard the end of this. This entire process was filled with fraud, with sock puppets proposing to end net neutrality while real people were ignored. All Americans need to make it clear to their representatives that Internet access is important, and that ISPs must be required to be neutral carriers, instead of giving preferences to some sites or charging extra for some sites. I recommend voting against any representatives who fail to protect Internet access, as the FCC has failed to do.

path: /misc | Current Weblog | permanent link to this entry

Sat, 23 Sep 2017

Who decides when you need to update vulnerable software? (Equifax)

I have a trick question: Who decides when you need to update vulnerable software (presuming that if it’s unpatched it might lead to bad consequences)? In a company, is that the information technology (IT) department? The chief information officer (CIO)? A policy? The user of the computer? At home, is it the user of the computer? Perhaps the family’s “tech support” person?

Remember, it’s a trick question. What’s the answer? The answer is…

The attacker decides.

The attacker is the person who decides when you get attacked, and how. Not the computer user. Not a document. Not support. Not an executive. The attacker decides. And that means the attacker decides when you need to update your vulnerable software. If that statement makes you uncomfortable, then you need to change your thinking. This is reality.

So let’s look at Equifax, and see what we can learn from it.

Let’s start with the first revelation in 2017: A security vulnerability in Apache Struts (a widely-used software component) was fixed in March 2017, but Equifax failed to update it for two whole months, leading to the loss of sensitive information on about 143 million US consumers. The update was available for free, for two months, and it was well-known that attackers were exploiting this vulnerability in other organizations. Can we excuse Equifax? Is it “too hard” to update vulnerable software (aka “patch”) in a timely way? Is it acceptable that organizations fail to update vulnerable components when those vulnerabilities allow unauthorized access to lots of sensitive high-value data?

Nonsense. Equifax may choose to fail to update known vulnerable components. Clearly it did so! But Equifax needed to update rapidly, because the need to update was decided by the attackers, not by Equifax. In fact, two months is an absurdly long time, because again, the timeframe is determined by the attacker.

Now it’s true that if you don’t plan to rapidly update, it’s hard to update. Too bad. Nobody cares. Vulnerabilities are routinely found in software components, and have been for decades. Since it is 100% predictable that there will be vulnerabilities found in the software you use (including third-party software components you reuse), you need to plan ahead. I don’t know when it will rain, but I know it will, so I plan ahead by paying for a roof and buying umbrellas. When something is certain to happen, you need to plan for it. For example, make sure you rapidly learn about vulnerabilities in third party software you depend on, and that you have a process in place (with tools and automated testing) so that you can update and ship in minutes, not months. Days, not decades.
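The "rapidly learn about vulnerabilities" step above can be sketched in a few lines: compare the components you ship against a feed of known-vulnerable versions. The advisory data below is invented for illustration (real projects would pull from a source such as the OSV database or ruby-advisory-db, and version comparison is subtler than this in practice):

```python
def parse(v: str) -> tuple:
    """Turn a dotted version string like '2.3.31' into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

def vulnerable_components(installed: dict, advisories: dict) -> list:
    """Return (name, version, fixed_in) for every installed component whose
    version is older than the first fixed release named in an advisory."""
    findings = []
    for name, version in installed.items():
        fixed_in = advisories.get(name)
        if fixed_in and parse(version) < parse(fixed_in):
            findings.append((name, version, fixed_in))
    return findings

# Illustrative data only; do not treat these entries as authoritative.
installed = {"struts": "2.3.31", "nokogiri": "1.8.1"}
advisories = {"struts": "2.3.32"}
print(vulnerable_components(installed, advisories))
# → [('struts', '2.3.31', '2.3.32')]
```

Running a check like this automatically (on every build, or on a schedule) is what turns "we found out two months later" into "we found out within the hour".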

The Apache Struts Statement on Equifax Security Breach has some great points about how to properly handle reused software components (no matter where it’s from). The Apache Struts team notes that you should (1) understand the software you use, (2) establish a rapid update process, (3) remember that all complex software has flaws, (4) establish security layers, and (5) establish monitoring. Their statement has more details, in particular for #2 they say, “establish a process to quickly roll out a security fix release… [when reused software] needs to be updated for security reasons. Best is to think in terms of hours or a few days, not weeks or months.”

Many militaries refer to the “OODA loop”, which is the decision cycle of observe, orient, decide, and act. The idea was developed by military strategist and United States Air Force Colonel John Boyd. Boyd noted that, “In order to win, we should operate at a faster tempo or rhythm than our adversaries…”. Of course, if you want to lose, then you simply need to operate more slowly than your adversary. You need to get comfortable with this adversarial terminology, because if you’re running a computer system today, you are in an adversarial situation, and the attackers are your adversaries.

In short, once a vulnerability is found in your software, you must update before attackers can exploit it (if it can be exploited). If you can’t do that, then you need to change how you manage your software so you can do that. Again, the attacker decides how fast you need to react.

We’re only beginning to learn about the Equifax disaster of 2017, but it’s clear that Equifax “security” is just one failure after another. The more we learn, the worse it gets. Here is some of the information we have so far. Equifax used the ridiculous username/password pair “admin”/“admin” for a database with personal employee information. Security Now! #628 showed that Equifax recommended using Netscape Navigator in their website discussion on security, a ridiculously obsolete suggestion (Netscape shut down in 2003, 14 years ago). Equifax provided customers with PINs that were simply the date and time, making the PINs predictable and thus insecure. Equifax set up a “checker” site which makes false statements: “In what is an unconscionable move by the credit report company, the checker site, hosted by Equifax product TrustID, seems to be telling people at random they may have been affected by the data breach… It’s clear Equifax’s goal isn’t to protect the consumer or bring them vital information. It’s to get you to sign up for its revenue-generating product TrustID… [and] TrustID’s Terms of Service [say] that anyone signing up for the product is barred from suing the company after.” Equifax’s credit report monitoring site was found to be vulnerable to hacking (specifically, an XSS vulnerability that was quickly found by others). Equifax failed to use its own domain name for all its sites (as is standard), making it easy for others to spoof them. Indeed, NPR reported that “After Massive Data Breach, Equifax Directed Customers To Fake Site”. There are now suggestions that there were break-ins even earlier which Equifax never detected. In short: the more we learn, the worse it gets.
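The PIN failure deserves a closer look, because the fix is so cheap. A PIN derived from the request timestamp can be enumerated by anyone who knows roughly when the account was created; a cryptographically random PIN cannot. A sketch of the contrast, using Python’s secrets module (the timestamp format here is my guess at what a date-and-time PIN looks like, not Equifax’s actual scheme):

```python
import secrets
from datetime import datetime

def predictable_pin(now: datetime) -> str:
    """Roughly what a timestamp-derived PIN looks like: guessable by anyone
    who can narrow down when the PIN was issued."""
    return now.strftime("%m%d%y%H%M")

def random_pin(digits: int = 10) -> str:
    """Uniformly random over 10**digits values; derivable from nothing public."""
    return "".join(secrets.choice("0123456789") for _ in range(digits))
```

Both produce a 10-digit PIN, but an attacker who knows the breach window can try every plausible timestamp in seconds, while the random PIN forces a search of all ten billion values.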

Most obviously, Equifax failed to responsibly update a known vulnerable component in a timely way. Failing to update matters less when no valuable information is at stake, but in this case extremely sensitive personal data was involved. This was especially sensitive data, Equifax was using a component version with a publicly-known vulnerability, and it was known that attackers were exploiting that vulnerability. It was completely foreseeable that attackers would use this vulnerable component to extract sensitive data. In short, Equifax had a duty of care that they failed to perform. Sometimes attackers perform an unprecedented kind of sneaky attack, and get around a host of prudent defenses; that would be different. But there is no excuse for failing to promptly respond when you know that a component is vulnerable. That is negligence.

But how can you quickly update software components? Does this require magic? Not at all, it just requires accepting that this will happen and so you must be ready. This is not an unpredictable event; I may not know exactly when it will happen, but I can be certain that it will happen. Once you accept that it will happen, you can easily get ready for it. There are tools that can help you monitor when your components publicly report a vulnerability or security update, so that you quickly find out when you have a problem. Package managers let you rapidly download, review, and update a component. You need to have an automated checking system that uses a variety of static tools, automated test suites, and other dynamic tools so that you can be confident that the system (with updated component) works correctly. You need to be confident that you can ship to production immediately with acceptable risk after you’ve updated your component and run your automated checking system. If you’re not confident, then your checking system is unacceptable and needs to be fixed. You also need to quickly ship that to production (and this must be automated), because again, you have to address vulnerabilities faster than the attacker.
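The last step in the paragraph above, shipping only after the automated checks pass, reduces to a very small gate. This is a skeletal sketch; the check and deploy commands are hypothetical stand-ins for whatever your project actually runs (a test suite, static analyzers, a deployment script):

```python
import subprocess

def ship_if_green(check_cmd: list, deploy_cmd: list) -> bool:
    """Run the automated checking system; deploy only if every check passes.

    check_cmd and deploy_cmd are placeholder command lines, e.g.
    ["rake", "test"] and ["cap", "deploy"] in a Rails shop.
    Returns True if the release shipped.
    """
    if subprocess.run(check_cmd).returncode == 0:
        subprocess.run(deploy_cmd, check=True)  # fail loudly if deploy breaks
        return True
    return False
```

The point is not the five lines of code; it is that the whole path from "security update available" to "running in production" must be automated and trusted, so that a human can push the button within minutes of an advisory.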

Of course, your risks go down much further if you think about security the whole time you’re developing software. For example, you can design your system so that a defect is (1) less likely to lead to a system vulnerability or (2) has less of an impact. When you do that, then a component vulnerability will often not lead to a system vulnerability anyway. A single vulnerability in a front-end component should not have allowed such a disastrous outcome in the first place, since this was especially sensitive data, so the Equifax design also appears to have been negligent. They also failed to detect the problem for a long time; you should be monitoring high-value systems, to help reduce the impact of a vulnerability. The failure to notice this is also hard to justify. Developing secure software is quite possible, and you don’t need to break the bank to do it. It’s impossible in the real world to be perfect, but it’s very possible to be adequately secure.

Sadly, very few software developers know how to develop secure software. So I’ve created a video that’s on YouTube that should help: “How to Develop Secure Applications: The BadgeApp Example” (by David A. Wheeler). This walks through a real-world program (BadgeApp) as an example, to show approaches for developing far more secure software. If you’re involved in software development in any way, I encourage you to take a look at that video. Your software will almost certainly look different, but if you think about security throughout development, the results will almost certainly be much better. Perfection is impossible, but you can manage your risks, that is, reduce the probability and impact of attacks. There are a wide variety of countermeasures that can often prevent attacks, and they work well when combined with monitoring and response mechanisms for the relatively few attacks that get through.

The contrast between Equifax and BadgeApp is stark. Full disclosure: I am the technical lead of the BadgeApp project… but it is clear we did a better job than Equifax. Earlier this week a vulnerability was announced in one of the components (nokogiri) that is used by the BadgeApp. This vulnerability was announced on ruby-advisory-db, a database of vulnerable Ruby gems (software library components) used to report to users about component vulnerabilities. Within two hours of that announcement the BadgeApp project had downloaded the security update, run the BadgeApp application through a variety of tools and its automated test suite (with 100% statement coverage) to make sure everything was okay, and pushed the fixed version to the production site. The BadgeApp application is a simpler program, sure, but it also manages much less sensitive data than Equifax’s systems. We should expect Equifax to do at least as well, because they handle much more sensitive data. Instead, Equifax failed to update reused components with known vulnerabilities in a timely fashion.

Remember, the attacker decides.

The attacker decides how fast you need to react, what you need to defend against, and what you need to counter. More generally, the attacker decides how much you need to do to counter attacks. You do not get to decide what the attacker will choose to do. But you can plan ahead to make your software secure.

path: /security | Current Weblog | permanent link to this entry

Tue, 25 Oct 2016

Creating Laws for Computer Security

In 2016 the website KrebsonSecurity was taken down by a large distributed denial-of-service (DDoS) attack. More recently, many large sites became inaccessible due to a massive DDoS attack (see, e.g., “Hackers Used New Weapons to Disrupt Major Websites Across U.S.” by Nicole Perlroth, Oct. 21, 2016, NY Times).

Sadly, the “Internet of Things” is really the “Internet of painfully insecure things”. This is fundamentally an externalities problem (the buyers and sellers are not actually bearing the full cost of the exchange), and in these cases mechanisms like law and regulation are often used.

So, what laws or regulations should be created to improve computer security? Are there any? Obviously there are risks to creating laws and regulations. These need to be targeted at countering widespread problems, without interfering with experimentation, without hindering free expression or the development of open source software, and so on. It’s easy to create bad laws and regulations - but I believe it is possible to create good laws and regulations that will help.

My article Creating Laws for Computer Security lists some potential items that could be turned into laws that I think could help computer security. No doubt some could be improved, and there are probably things I’ve missed. But I think it’s important that people start discussing how to create narrowly-tailored laws that counter the more serious problems without causing too many negative side-effects. Enjoy!

path: /security | Current Weblog | permanent link to this entry

Wed, 04 May 2016

Get your CII best practices badge!

If you’re involved in a free / libre / open source software (FLOSS) project, go to bestpractices.coreinfrastructure.org and get your best practices badge!

The Linux Foundation’s Core Infrastructure Initiative (CII) has just announced its CII best practices badging program for FLOSS projects. It’s a free program that lets developers explain how they follow best practices, and if they do, they can get a badge that they can show on their GitHub page or anywhere else. Early badge earners include the Linux kernel, Curl, GitLab, OpenBlox, OpenSSL, Node.js and Zephyr.

The idea is straightforward. The Heartbleed vulnerability in OpenSSL made it obvious that there are widely-accepted best practices that not everyone is doing - and that even includes important projects. This isn’t just speculation; if you compare OpenSSL before Heartbleed with current OpenSSL the difference is striking. I think it’s clear that if more projects would apply generally-accepted best practices, we’d have more secure software. This badging process helps projects identify those best practices, determine if they meet them, and show everyone else that they’re meeting them.

The web application and criteria are being maintained as an open source software project, so we’d love to have you! I say “we” because I’m leading this project… but it’s not just me, and we would love to have you involved.

More detail is in the Linux Foundation press release about the best practices badging project.

path: /oss | Current Weblog | permanent link to this entry

Thu, 10 Mar 2016

US government - Reusable and Open Source Software

The US White House has announced (in its blog) Leveraging American Ingenuity through Reusable and Open Source Software. They state that, “Today, we’re releasing for public comment a draft policy to support improved access to custom software code developed for the Federal Government.” They are accepting comments on this draft policy via GitHub pull requests, GitHub issues, or email. I definitely plan to take a look, and I’m sure they would like feedback from many people.

Note that I also posted this information on Twitter.

path: /oss | Current Weblog | permanent link to this entry

Mon, 01 Feb 2016

Using open source software to help technology transition of research

If you’re doing software research and development (especially on how to improve computer security), and are thinking about using an open source software (OSS) approach but don’t know a lot about it, here’s something that may help: Using an Open Source Software Approach for Cybersecurity Technology Transition (IDA paper P-5279, aka the “PI guide”). If you’re an old hand at developing Free/libre/open source software (FLOSS or OSS), you probably know most of this information. However, I’ve found that a lot of people could use a hand. Here’s that helping hand.

path: /oss | Current Weblog | permanent link to this entry

Address Sanitizer on an entire Linux distribution!

Big news in computer security: Hanno Boeck has recently managed to get Address Sanitizer running on an entire Linux distribution (Gentoo) as an experimental edition. For those who don’t know, Address Sanitizer is an amazing compile-time option that detects a huge range of memory errors in memory-unsafe languages (in particular C and C++). These kinds of errors often lead to disastrous security vulnerabilities, such as Heartbleed.

This kind of distribution option is absolutely not for everyone. Address Sanitizer on average increases processing time by about 73%, and memory usage by 340%. What’s more, this work is currently very experimental, and you have to disable some other security mechanisms to make it work. That said, this effort has already borne a lot of valuable fruit. Turning on these mechanisms across an entire Linux distribution has revealed a large number of memory errors that are getting fixed. I can easily imagine this being directly useful in the future, too. Computers are very fast and have lots of memory, even when compared to computers of just a few years earlier. There are definitely situations where it’s okay to effectively halve performance and reduce useful memory, and in exchange, significantly increase the system’s resistance to novel attack. My congrats!!

path: /security | Current Weblog | permanent link to this entry

Mon, 23 Nov 2015

Ransomware coming to medical devices?

Forrester Research has an interesting cybersecurity prediction for 2016: We’ll see ransomware for a medical device or wearable.

This is, unfortunately, plausible. I don’t know if it will happen in 2016, but it’s pretty reasonable. Indeed, I can see extortion threats working even if we can’t be sure that the ransomware is actually installed.

After all, Dick Cheney had his pacemaker’s Wi-Fi disabled because of this concern (see also here). People have already noted that terrorists might use this, since medical devices are often poorly secured. The additional observation is that this may be a better way to (criminally) make money. We already have ransomware, including organizations that are getting better at extorting with it. Traditional ransomware is foiled by good backups; in this case backups won’t help, and victims will (understandably) be willing to pay much, much more. And I think that medical devices are actually a softer target.

With luck, this won’t come true in 2016. The question is, is that because it doesn’t show up until 2017 or 2018… or because the first ones were in 2015? DHS is funding work in this area, and that’s good… but while research can help, the real problem is that we have too many software developers who do not have a clue how to develop secure software… and too many people (software developers or not) who think that’s acceptable.

In short, we still have way too many people building safety-critical devices who don’t understand that security is necessary for safety. I hope that this changes - and quickly.

path: /security | Current Weblog | permanent link to this entry