This is the main web site for flawfinder, a simple program that examines C/C++ source code and reports possible security weaknesses (“flaws”) sorted by risk level. It’s very useful for quickly finding and removing at least some potential security problems before a program is widely released to the public. It is free for anyone to use and is available as open source software (OSS). See “how does Flawfinder work?”, below, for more information on how it works. Others have had success with flawfinder; see testimonials and reviews/papers for more. You can skip ahead to documentation if you want more detail.
Flawfinder is specifically designed to be easy to install and use. If you have Python installed, you can install flawfinder with pip as follows:
pip install flawfinder

After installing it, at a command line just type:

flawfinder directory_with_source_code
You can also use a pre-packaged version of flawfinder (it's widely available), or download and install it yourself. Flawfinder works on Unix-like systems (it’s been tested on GNU/Linux), and on Windows by using Cygwin. It requires Python 2.7 or Python 3 to run.
Please take a look at other static analysis tools for security, too. One reason I wrote flawfinder was to encourage using static analysis tools to find security vulnerabilities. No one tool solves all problems, but tools can be a very useful aid in developing secure software. It's generally better to use many tools, and the worst situation is to use no tools.
Below you can see badges, sample output, license, testimonials, documentation, using a pre-packaged version of flawfinder, downloading and installing, joining the flawfinder community, speed, how does Flawfinder Work?, reviewing patches, a fool with a tool is still a fool, hit density (hits/KSLOC), flawfinder and RATS, reviews/papers, and other static analysis tools for security.
Flawfinder is officially Common Weakness Enumeration (CWE)-compatible and has earned the CII Best Practices "passing" badge.
If you’re curious what the results look like, here are some sample outputs:
Flawfinder is released under the GNU General Public License (GPL) version 2 or later, and thus is open source software (as defined by the Open Source Definition) and Free Software (as defined by the Free Software Foundation’s GNU project). Its SPDX license expression is "GPL-2.0+". Feel free to see Open Source Software / Free Software (OSS/FS) References or Why OSS/FS? Look at the Numbers! for more information about OSS/FS. You can use it to analyze any software; it does not need to be free / libre / open source software (FLOSS).
Others have found it useful. Here are a few testimonials that it’s received over time:
Flawfinder comes with a simple manual describing how to use it. If you’re not sure you want to take the plunge to install the program, you can just look at the documentation first, which discusses how to use it and how it supports the CWE. The documentation is available in the following formats:
Many Unix-like systems have a package already available to them, including Fedora, Debian, and Ubuntu. Debian and Ubuntu users can install flawfinder using apt-get install flawfinder (as usual); Fedora users can use yum install flawfinder. Cygwin also includes flawfinder. It’s also available in many other distributions. Flawfinder is available via FreeBSD’s Ports system (see this FreeBSD ports query for flawfinder and flawfinder info for security-related ports). OpenBSD includes flawfinder in its “ports”. NetBSD users can simply use NetBSD’s pkgsrc to install flawfinder (my thanks to Thomas Klausner for doing the flawfinder NetBSD packaging). The Fink project, which packages FLOSS for Darwin and Mac OS X, has a Fink flawfinder package, so users of those systems may find that an easy way to get flawfinder.
If there’s no package available to you, or it’s old, you can download and install flawfinder directly. The current version of flawfinder is 2.0.15. If you want to see how it’s changed, view its ChangeLog. You can even go look at the flawfinder source code directly. We assume you have a Unix-like system such as a Linux-based system. If you use Windows, the recommended way is to install Cygwin first, and install it on top of Cygwin. However, it has been reported to work on Windows directly.
First, download it. You can get the current released version of flawfinder in .tar.gz (tarball) format here. You can also get flawfinder by visiting the SourceForge flawfinder project page, in particular its Files section. You definitely need to go to the SourceForge project site if you want to get on the mailing list, submit a bug report or feature request, or see/get the latest drafts.
On Unix-like systems, you install it in the usual manner. First, uncompress the file and change into the resulting directory:
tar xvzf flawfinder-*.tar.gz
cd flawfinder-*

Then install. Typically you would do this (omit "sudo" if you are root):
sudo make prefix=/usr install

You can override these defaults using standard GNU conventions. If you omit "prefix=/usr", it will install under the default directory /usr/local. You can set bindir and mandir to set their specific locations. Cygwin systems (for Microsoft Windows) need to set “PYTHONEXT=.py” in the make command, like this:
sudo make PYTHONEXT=.py install
See the installation instructions for more information.
Roy Ben Yosef reports that the simplest way to run flawfinder under Windows is to use Python directly: install Python 2 (version 2.7) and run the flawfinder script on the command line.
C:\Python27\Python.exe flawfinder -H --savehitlist=ReportFolder\hitReport.hit C:\MySourcesFolder
In the above example you can inspect the results (hit file and HTML report) in the ReportFolder.
Flawfinder is now hosted on SourceForge. You can discuss how to use or improve the tool on its mailing list, and you can see the latest drafts in its version control repository.
If you have a general question or issue, use the mailing list. If you have a specific bug, especially if you have a patch, use git or the issue tracker.
Flawfinder is not a sophisticated tool. It is an intentionally simple tool, but people have found it useful.
Flawfinder works by using a built-in database of C/C++ functions with well-known problems, such as buffer overflow risks (e.g., strcpy(), strcat(), gets(), sprintf(), and the scanf() family), format string problems ([v][f]printf(), [v]snprintf(), and syslog()), race conditions (such as access(), chown(), chgrp(), chmod(), tmpfile(), tmpnam(), tempnam(), and mktemp()), potential shell metacharacter dangers (most of the exec() family, system(), popen()), and poor random number acquisition (such as random()). The good thing is that you don’t have to create this database - it comes with the tool.
Flawfinder then takes the source code text, and matches the source code text against those names, while ignoring text inside comments and strings (except for flawfinder directives). Flawfinder also knows about gettext (a common library for internationalized programs), and will treat constant strings passed through gettext as though they were ordinary constant strings; this reduces the number of false hits in internationalized programs.
Flawfinder produces a list of “hits” (potential security flaws), sorted by risk; by default the riskiest hits are shown first. This risk level depends not only on the function, but on the values of the parameters of the function. For example, constant strings are often less risky than fully variable strings in many contexts. In some cases, flawfinder may be able to determine that the construct isn’t risky at all, reducing false positives.
Flawfinder gives better information - and better prioritization - than simply running “grep” on the source code. After all, it knows to ignore comments and the insides of strings, and it will also examine parameters to estimate risk levels. Nevertheless, flawfinder is fundamentally a naive program; it doesn’t even know about the data types of function parameters, and it certainly doesn’t do control flow or data flow analysis (see the references below to other tools, like SPLINT, which do deeper analysis). I know how to do that, but doing that is far more work; sometimes all you need is a simple tool. Also, because it’s simple, it doesn’t get as confused by macro definitions and other oddities that more sophisticated tools have trouble with. Flawfinder can analyze software that you can't build; in some cases it can analyze files you can't even locally compile.
Not every hit is actually a security vulnerability, and not every security vulnerability is necessarily found.
As noted above, flawfinder doesn’t really understand the semantics of the code at all - it primarily does simple text pattern matching (ignoring comments and strings). Again, it doesn't do data flow or control flow analysis at all. Nevertheless, flawfinder can be a very useful aid in finding and removing security vulnerabilities.
The documentation points out various security issues when using the tool. In general, you should analyze a copy of the source code you’re evaluating. Also, do not load or diff hitlists from untrusted sources. Hitlists are implemented using Python’s pickle facility, which is not intended for untrusted input.
First, create a “unified diff” file comparing the older version to the current version (using git diff, GNU diff with the -u option, or Subversion’s diff). Then run flawfinder on the newer version, and give it the --patch (-P) option pointing to that unified diff.
This works because flawfinder will do its job, but it will only report hits that relate to lines that changed in the unified diff (the patch file). Flawfinder will read the unified diff file, which tells flawfinder what files changed and what lines in those files were changed. More specifically, it uses “Index:” or “+++” lines to determine the files that changed, it uses the line numbers in “@@” regions to get the chunk line number ranges, and it then uses the initial +, -, and space after that to determine which lines really changed.
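For example, a unified diff containing the markers flawfinder parses can be produced like this (the file names and /tmp path are illustrative; the final flawfinder invocation, shown as a comment, assumes flawfinder is installed):

```shell
#!/bin/sh
# Build a unified diff in the shape flawfinder's --patch option parses:
# "+++" lines name the changed files, "@@" hunks give the line ranges.
mkdir -p /tmp/ff-demo && cd /tmp/ff-demo

printf 'int main(void){return 0;}\n' > old.c
printf '#include <string.h>\nint main(void){char b[4];strcpy(b,"x");return 0;}\n' > new.c

# diff exits 1 when the files differ, so don't treat that as an error.
diff -u old.c new.c > changes.patch || true

cat changes.patch
# Then report only hits on the changed lines:
#   flawfinder --patch changes.patch .
```

In this hypothetical example, flawfinder would report the new strcpy() call because it appears on a line the patch added, while ignoring hits elsewhere in the tree.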
One challenge is statements that span lines; a statement might start on one line, yet have a change that adds a vulnerability in a later line, and depending on how the vulnerability is reported it might get chopped off. Currently flawfinder handles this by showing vulnerabilities that are reported one line before or after any changed line - which seems to be a reasonable compromise.
Note that the problem with this approach is that it won’t notice if you remove code that enforces security requirements. Flawfinder doesn’t have that kind of knowledge anyway, so that’s not a big deal in this case.
An example of horrific tool misuse is disabling vulnerability reports without (1) fixing the vulnerability, or (2) ensuring that it is not a vulnerability. It’s publicly known that RealNetworks did this with flawfinder; I suspect others have misused tools this way. I don’t mean to beat on RealNetworks particularly, but it’s important to apply lessons learned from others, and unlike many projects, the details of their vulnerable source code are publicly available (and I applaud them for that!). As noted in iDEFENSE Security Advisory 03.01.05 on RealNetworks RealPlayer (CVE-2005-0455), a security vulnerability was in this pair of lines:
char tmp; /* Flawfinder: ignore */
strcpy(tmp, pScreenSize); /* Flawfinder: ignore */
This means that flawfinder did find this vulnerability, but instead of fixing it, someone added the “ignore” directive to the code so that flawfinder would stop reporting the vulnerability. But an “ignore” directive simply stops flawfinder from reporting the vulnerability - it doesn’t fix it! The intended use of this directive is to add it once a reviewer has determined that the report is definitely a false positive, but in this case the tool was reporting a real vulnerability. The same thing happened again in iDefense Security Advisory 06.23.05, aka CVE-2005-1766, where the vulnerable line was:
sprintf(pTmp, /* Flawfinder: ignore */

And a third vulnerability with the same issue was reported still later in iDefense Security Advisory 06.26.07, RealNetworks RealPlayer/HelixPlayer SMIL wallclock Stack Overflow Vulnerability, aka CVE-2007-3410, where the vulnerable line was:
strncpy(buf, pos, len); /* Flawfinder: ignore */
This is not to say that RealNetworks is a fool or set of fools. Indeed, I believe many organizations, not just RealNetworks, have misused tools this way. My thanks to RealNetworks for publicly admitting their mistake - it allows others to learn from their mistake! My specific point is that you can’t just add comments with “ignore” directives and expect that the software is suddenly more secure. Do not add “ignore” directives until you are certain that the report is a false positive. More generally, developers need to understand what they're trying to do (learn the basics of developing secure software), and then tools can help them achieve their goals.
This kind of problem can easily happen in organizations that say “run scanning tools until there are no more warnings” but don’t later review the changes that were made to eliminate the warnings. If warnings are eliminated because code is changed to eliminate vulnerabilities, that’s great! General-purpose scanning tools like flawfinder will produce false positive reports, though; it’s easy to create a tool without false positives, but only by failing to report many possible vulnerabilities (some of which really will be vulnerabilities). The obvious answer if you want a broader tool is to allow developers to examine the code, and if they can truly justify that it’s a false positive, document why it is a false positive (say in a comment near the report) and then add a “Flawfinder: ignore” directive. But you need to really justify that the report is a false positive; just adding an “ignore” directive doesn’t fix anything! Sometimes it’s easier to fix a problem that may or may not be a vulnerability than to prove that it’s a false positive - the OpenBSD developers have been doing this successfully for years, since if complicated code isn’t an exploitable vulnerability yet, a tiny change can often turn such fragile code into a vulnerability.
If you’re in an organization using a scanning tool like this, make sure you review every change caused by a vulnerability report. Every change should be either (1) truly fixed or (2) correctly and completely justified as a false positive. I think organizations should require any such justification to be in comments next to the “ignore” directive. If the justification isn’t complete, don’t mark it with an “ignore” directive. And before developers even start writing code, get them trained on how to write secure code and what the common mistakes are; this material is not typically covered in university classes or even on the job.
The “ignore” directives are a very useful mechanism - once you have done the analysis, having to re-do the analysis for no reason could use up so much time that it would prevent you from resolving real vulnerabilities. Indeed, many people wouldn’t use source scanning tools at all if they couldn’t insert “ignore” directives when they are done. The result would be code with vulnerabilities that would be found by such tools. But any mechanism can be misused, and clearly this one has been.
Flawfinder does include a weapon against useless “ignore” directives - the --neverignore (-n) option. This option is the “ignore the ignores” option - any “ignore” directives are ignored. But in the end, you still need to fix vulnerabilities or ensure that reported vulnerabilities aren’t really vulnerabilities at all.
Another problem: never fix a bug you don’t understand, even when a tool tells you there’s a problem. For example, the Debian folks ran a tool that found a purported problem in OpenSSL; it wasn’t really a problem, and their “fix” actually created a security problem.
More generally, I am not of the opinion that analysis tools are always “better” than any other method for creating secure software. I don’t really believe in a silver bullet, but if I had to pick one, “developer education” would be my silver bullet, not analysis tools. Again, a “fool with a tool is still a fool”. I believe that when you need secure software, you need to use a set of methods, including education, languages/tools where vulnerabilities are less likely, good designs (e.g., ones with limited privilege), human review, fuzz testing, and so on; a source scanning tool is just a part of it. Gary McGraw similarly notes that simply passing a scanning tool does not mean perfect security, e.g., tools can’t normally find “didn’t ask for authorization when it should have”.
That said, I think tools that search source or binaries for vulnerabilities usually need to be part of the answer if you’re trying to create secure software in today’s world. Customers/users are generally unwilling to reduce the amount of functionality they want to something we can easily prove correct, and formally proving programs correct has not scaled well yet (though I commend the work to overcome this). No programming language can prevent all vulnerabilities from being written in the first place, even though selecting the right programming language can be helpful. Human review is great, but it’s costly in many circumstances and it often misses things that tools can pick up. Execution testing (like fuzz testing) only checks a minuscule part of the input space. So we often end up needing source or binary scanning tools as part of the process, even though current tools have a HUGE list of problems.... because NOT using them is often worse. Other methods may find the vulnerability, but other methods typically don’t scale well.
Oh, a quick aside: “A fool with a tool is still a fool” is not a new phrase. I did a little research on its history; as best as I can determine, it is a quote attributed to Ron Weinstein, an eminent academic pathologist, in The American Review of Respiratory Disease (1989) Vol. 139. p. 1280, "Perestroika, Fashion, and the Universal Glue" by William M. Thurlbeck.
When you think about it, that makes sense. If a program has a high hit density, it suggests that its developers often use very dangerous constructs that are hard to use correctly and often lead to vulnerabilities. Even if the hits themselves aren’t vulnerabilities, developers who repeatedly use dangerous constructs will sooner or later make the final mistake and allow a vulnerability. It’s like a high-wire act -- even talented people will eventually fall if they walk on it long enough.
This appeared to break down on very small programs (less than 10K SLOC); a program much smaller than its competition might have a larger hit density yet still be secure. I speculate that because density is a fraction, when a program is much smaller than its rivals, density is dramatically forced up (because size is in the denominator). Yet programs that are made dramatically smaller are much easier to evaluate directly, so direct review is more likely to counter vulnerabilities in this case.
Until we’ve figured out how to merge these dissimilar projects, I recommend that distributions and software development websites include both programs (and others as well!). Each has advantages that the other doesn’t. For example, at the time of this writing Flawfinder is easier to use - just give flawfinder a directory name, and flawfinder will enter the directory recursively, figure out what needs analyzing, and analyze it. Other advantages of flawfinder are that it can handle internationalized programs (it knows about special calls like gettext(), unlike RATS), flawfinder can report column numbers (as well as line numbers) of hits, and flawfinder can produce HTML-formatted results. The automated recursion and HTML-formatted results make flawfinder especially nice for source code hosting systems. The flawfinder database includes a number of entries not in RATS, so flawfinder will find things RATS won’t. In contrast, RATS can handle other programming languages and runs faster. Both projects are essentially automated advisors, and having two advisors look at your program is likely to be better than using only one (it’s somewhat analogous to having two people review your code for security).
Many have reviewed flawfinder or mentioned flawfinder in articles, as well as related tools. Examples include:
A small article describing the results of examining Windows 2000, Linux and OpenBSD source code using Flawfinder. “OpenBSD is ahead, Flawfinder finds a surprisingly small number of potentially dangerous constructs. The source code audit by the OpenBSD team seems to pay out. Additionally, OpenBSD uses the secure strlcpy/strlcat by Todd C. Miller instead of strcpy etc.”
Practical Code Auditing by Lurene Grenier (December 13, 2002) briefly discusses simple approaches that can be performed for manual auditing (she works on the OpenBSD project). It does note that you can “grep” for certain kinds of problems; flawfinder is essentially a smart grep that already knows what to look for, so it could easily fit into the process at those points. The paper also specifically notes some of the things that are hard to grep for (which are the kinds of things that flawfinder would miss).
Secure Programming with Static Analysis (Addison-Wesley Software Security Series) by Brian Chess and Jacob West discusses static analysis tools in great detail.
Of course, there are many programs that analyze programs, particularly those that work like “lint”. There is a set of papers about the Stanford checker which you may find interesting.
For more information about other static analysis tools for security, see Static analysis tools for security.
You might want to look at my Secure Programming HOWTO web page, or some of my other writings such as Open Standards and Security, Open Source Software and Software Assurance (Security), and High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS).
You can also view my home page.