These are my best efforts to answer the questions sent during the webinar (conference) on February 11, 2008. These are not legal advice, and I am not a lawyer; for legal advice, see your counsel, after making sure that he or she understands open source software legal issues (most do not). But I hope these answers will help you in some way.
In some areas the answers are simple and well-known, yet hard to find. In other areas, the answers are not simple or are unknown. Some are difficult issues where there doesn't seem to be a consistent or generally-accepted Government position or interpretation. I'll try to explain what the issues are, give my best understanding, and (where I can!) suggest how to stay away from the unknowns. Some of these questions raise interesting points that I think need to be officially answered; I hope this will be a start. I've grouped the questions into the areas you'll see below.
Here, "OSS" means "open source software", which is the usual DoD term for Free-Libre / Open Source Software (FLOSS). OSS is software that meets OSI's "Open Source Definition" or the FSF's "Free Software Definition" (and typically meets both definitions). See my slides for more information. If there are errors, please contact me so I can correct them.
Please note that my later article “Publicly Releasing Open Source Software Developed for the U.S. Government” gives a lot more information on when the government and its contractors can release, as OSS, software developed using government funds.
Are these slides available for download?
Yes, at https://dwheeler.com/oss-dod-webinar2008.html.
Can you supply a URL for a good OSS license compatibility chart?
Check out my FLOSS license slide at https://dwheeler.com/essays/floss-license-slide.html. If you want more detail, you might look at the FSF's document "Various Licenses and Comments about Them" and the Fedora Licensing page.
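If it helps to see the idea mechanically, here is a minimal sketch (mine, not from the slide) that models one-directional license compatibility as a tiny directed graph and checks reachability. The edges shown are illustrative simplifications of widely accepted readings, not legal determinations:

    # Toy model of one-directional license compatibility: an edge A -> B
    # means code under license A can generally be incorporated into a work
    # released under license B. Edges are illustrative simplifications
    # (e.g., MIT/BSD code can flow into GPL works, but not the reverse);
    # this is not legal advice.
    COMPATIBLE_INTO = {
        "MIT":      {"BSD-new", "LGPLv2.1", "GPLv2", "GPLv3"},
        "BSD-new":  {"LGPLv2.1", "GPLv2", "GPLv3"},
        "LGPLv2.1": {"GPLv2", "GPLv3"},
        "GPLv2":    set(),   # GPLv2-only is famously NOT compatible with GPLv3
        "GPLv3":    set(),
    }

    def can_combine_into(src, dst):
        """True if src-licensed code can (transitively) end up in a dst-licensed work."""
        seen, stack = set(), [src]
        while stack:
            lic = stack.pop()
            if lic == dst:
                return True
            if lic not in seen:
                seen.add(lic)
                stack.extend(COMPATIBLE_INTO.get(lic, ()))
        return False

    print(can_combine_into("MIT", "GPLv3"))    # True
    print(can_combine_into("GPLv2", "GPLv3"))  # False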
In addition, here are links to some of the other articles I mentioned in my talk:
What does the 'FL' in FLOSS stand for?
Free-libre or Free/Libre. So FLOSS stands for "Free-Libre / Open Source Software" or "Free / Libre / Open Source Software". There's a story behind this terminology, so I may as well tell it.
An older and still widely-used term for open source software is "Free software". Unfortunately, in English the word "free" is very ambiguous. Many languages have (at least) two different words that translate to English "free": one to indicate "no cost" (such as Spanish/Portuguese/Dutch "gratis") and one that means "freedom from control by another" (such as Spanish/French "libre"). The intended meaning of "Free software" is the "libre" sense, as in "free speech" or "free market". But many people thought that "Free software" meant "no cost software" (gratis), which was expressly not intended; after all, people sell Free software! One solution is to append "-libre" to the word "free", to make it clearer which sense is meant; another is to just use a word like libre instead ("libre software").
This makes FLOSS a nice short-hand abbreviation that includes the many different names used for this kind of software: Free Software, free-libre software, libre software, or open source software.
When I give presentations to DoD audiences, I tend to use the term "open source software" (OSS), because that's the primary term used in the DoD. When I give presentations to other audiences, I tend to use FLOSS, because it's a more inclusive term that includes the various terms actually used by various groups.
What does COTS stand for again?
Commercial Off The Shelf. For example, COTS software and COTS hardware. Microsoft Windows and Mozilla Firefox are examples of COTS software (they are proprietary software and open source software respectively). My essay "FLOSS is Commercial Software" explains, in great detail, why essentially all OSS is commercial, and why essentially all extant OSS is COTS.
To use OSS, does it need to be on the Department of Defense Intelligence Information System (DODIIS) approved list?
The DoD has different rules for different kinds of systems and different uses, but in practically every case the rules have nothing to do with whether or not the program is OSS. So the question is really, "to use some program, does it need to be on approved list X?" The answer is "it depends on the circumstance". So find out the rule for installing a proprietary COTS program for your circumstance, and follow the same rules when you wish to install an OSS COTS product. In some cases there's secure installation guidance; see DISA's Security Technical Implementation Guides (STIGs) and NSA's Security Configuration Guides. Many OSS programs are already on these lists. In some cases you may need to add the program to the approved list for your circumstance, so you'll need to follow the process for getting the program on that list. In some cases it's there but not obvious (e.g., the Linux kernel and many other OSS components are covered by the Unix STIG).
Remember that OSS always (by definition) permits use for any purpose, as well as redistribution of the program without additional payment. That means that, by definition, the DoD always has an enterprise-wide license for the use of any OSS program. (Support is a different story - if you want 24x7 phone support, you'll need to pay for it. But I covered that in the talk.)
Can you explain why the DoD and Federal government doesn't use Linux and OpenOffice.org for desktop use?
Actually, they do. I receive many OpenDocument files, and I believe most of them were created using OpenOffice.org. Linux is widely deployed on servers, of course; desktops are less common, but they are certainly out there. There's a fair chance that some of the people you know who are federal employees, or who are contractors that support the federal government, are using Linux and/or OpenOffice.org on their desktop. That's especially likely if they do software development and/or work in computer security. For example, my desktop runs on Linux, and I used OpenOffice.org to create the presentation. You typically can't tell unless you visit their office!
But that said, clearly Linux and OpenOffice.org are used less often than they could be. Desktop use is very difficult to change, and any transition takes a long time, for a variety of reasons. For example, if a user depends on 50 applications, then they must have reasonable replacements for all 50 before that user can switch. In addition, all of this is very recent. Linux on the desktop only became viable in 2002, which was when the basic desktop functionality became useful: Mozilla released version 1.0 of their suite (for web browsing and email), and the first useful OpenOffice.org was released (for office documents). In addition, it took until around 2005 for the Linux desktop to become refined enough to be a really competitive alternative (e.g., Firefox web browser, Thunderbird email, etc.). Torvalds has useful comments on this. Since it's only been a few years that it's been really competitive, and it takes a long time to change, this is hardly surprising.
Most organizations that are interested in OSS on the desktop, but have existing deployments, do replacements from the "outside in". That is, they will do simple switches such as using Mozilla Firefox on Windows, then start using OpenOffice.org on Windows, and only later use Linux for the desktop. Over the years I expect Linux desktop deployments to become much easier to do, especially as applications become web-enabled (so it doesn't matter which operating system the user runs).
If comparative metrics exist for cost or TCO of OSS and proprietary software, what do these metrics indicate (in terms of % cost saved, etc.)?
Unfortunately, total cost of ownership (TCO) and return on investment (ROI) calculations are very sensitive to the specifics of the circumstance, such as functionality required, scale, existing workforce, environment, processes they must support, and so on. So while TCO and ROI are excellent tools for making a particular decision for a particular circumstance, a TCO or ROI calculated in one circumstance may not be the same in a different circumstance.
That said, there are many figures showing that in a vast number of circumstances, OSS has significantly lower TCO and higher ROI. Why Open Source Software / Free Software (OSS/FS, FLOSS, or FOSS)? Look at the Numbers! has a few such figures; here's a sampling. Forrester Research found that the average savings on TCO when using OSS database management systems (DBMSs) is 50%. Cybersource’s 2004 study found TCO savings from 19% to 36% when using Linux-based OSS approaches compared to Microsoft Windows, depending on various factors. A 2001 InfoWorld survey of Chief Technical Officers (CTOs) found that 32% were reporting a savings exceeding $250,000/year from OSS, and 60% reported saving over $50,000. An EU Study, "Economic impact of open source software on innovation and the competitiveness of the Information and Communication Technologies (ICT) sector in the EU" (November 20, 2006) said, “Our findings show that, in almost all the cases, a transition toward open source reports of savings on the long term...". That study noted that "Costs to migrate to an open solution are relevant and an organization needs to consider an extra effort for this. However these costs are temporary and mainly are budgeted in less than one year...".
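To make those percentages concrete, here is a trivial worked example with entirely made-up numbers (a sketch only; real TCO studies account for license fees, hardware, support, training, and transition costs for a specific circumstance):

    # Hypothetical numbers, for illustration only: what a "50% TCO savings"
    # claim (Forrester's average for OSS DBMSs) would mean in dollar terms.
    proprietary_tco = 400_000   # assumed 3-year TCO of a proprietary DBMS
    savings_rate = 0.50         # assumed average OSS TCO savings rate
    oss_tco = proprietary_tco * (1 - savings_rate)
    print(f"OSS TCO: ${oss_tco:,.0f}; savings: ${proprietary_tco - oss_tco:,.0f}")
    # -> OSS TCO: $200,000; savings: $200,000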
Typically the use of the software itself costs little or nothing; what you pay for is installation, transition, training, and ongoing support. That support is not free, but it can be competed, and competition tends to drive prices down.
Again, you need to figure out the TCO / ROI for your specific circumstance.
Here are some questions on contracting - but before I answer them, let me give some background on government contracts for software. In the U.S. federal government, the Federal Acquisition Regulation (FAR) governs the drafting and negotiation of most government contracts and provides default contract clauses. In the U.S. DoD, this is heavily supplemented (overridden) by the Defense Federal Acquisition Regulation Supplement (DFARS). There are a lot of differences between the FAR and DFARS when it comes to software contracting; they even define the term "computer software" differently. I will primarily cover the DFARS, though I will make a few comments about the FAR. A lot can be negotiated, so you have to look at the specifics of a particular contract; the text here discusses the typical "default" case for a DoD contract developed using the DFARS (as of February 15, 2008).
DFARS section 227.72xx gives instructions to contracting officers about computer software, including how to use the various standard clauses in DFARS section 252.227-72xx. In particular DFARS section 227.7203-6 tells officers to (normally) use DFARS contracting clause 252.227-7014 "in solicitations and contracts where the successful offeror(s) will be required to deliver computer software or computer software documentation".
Contracting officers should ensure that the software and its documentation are contract deliverables - be wary if the contract requires development of software using government funds, but doesn't require delivery of that software or documentation to the government! In some research contracts the contracting officer may determine, with counsel, to use a variation called "Alternate I" (which gives fewer rights to the government), but this is rare; there's a lot more money in procurement than in research. Besides, the concessions made by "Alternate I" are unlikely to be necessary, now that OSS approaches are a viable alternative. There are various other special-case clauses, for example, for SBIR contracts. The text below ignores these special cases and describes the "normal default" case: a contract between the DoD and a contractor that includes DFARS contract clause 252.227-7014, without the specially-negotiated licenses that DFARS 227.7203-5(d) allows.
When a DoD contractor is developing new software as a deliverable in a typical DoD contract, can that software include OSS?
Yes, that software can; in nearly all cases pre-existing OSS components are "commercial components", and their use is governed by the rules for including commercial components. Indeed, the use of commercial components is generally encouraged, and when commercial components are used, the government expects that it will normally use whatever license is offered to the public.
Unfortunately, I've found it difficult to be completely confident about whether or not government approval is first required to include "commercial components" in -7014 contract deliverables. Contractors should talk with their counsel to see if they need government approval to include commercial components, and follow those rules for OSS too. I suspect that MIT- and BSD-licensed OSS probably doesn't require government approval no matter which interpretation you use, because those licenses grant essentially the same rights as "unlimited rights".
Contractors can comply with the strictest interpretation by ensuring that their deliverables don't directly include commercial items (ask the government to get them separately), or by listing in their proposals the commercial components (including OSS) that they intend to include in their deliverables and then getting approval when they want to add more commercial components later. Many of the most expert people I've talked to believe this is quite unnecessary, and that contractors don't need government permission to include commercial components in contract deliverables.
Below are more details to explain this.
OSS is Commercial
First, the easy part. Extant OSS components are almost always "commercial components", because nearly all are "used for non-government purposes" and are "licensed to the public". In fact, OSS components are commercial items (as long as they have non-government uses) even if they've only been offered for license to the public, will be released (in time to satisfy delivery requirements), or require minor modification [see DFARS 252.227-7014(a)(1)]. DoD policy is that "Commercial computer software... shall be acquired under the licenses customarily provided to the public unless such licenses are inconsistent with Federal procurement law or do not otherwise satisfy user needs." [DFARS 227.7202-1, -3, and -4], and this policy applies to both OSS and proprietary commercial computer software.
The normal rules for including third party commercial software in a DoD deliverable are described in DFARS 252.227-7014(d). Sadly, they are absurdly confusing. In particular, do contractors have to get government permission to include commercial software? I asked four lawyers I respect; two said yes, and two said no. That's shameful; the DFARS was modified in 1995 specifically to clarify the role of commercial software, and since smart people differ on its interpretation, it obviously failed to do so. Since I can't seem to give a simple answer here, let's look at what the alternatives might mean... not as legal advice, but so that you can understand the issues depending on how DFARS -7014(d) is interpreted.
Permission Required?
Clause DFARS 252.227-7014(d) could be interpreted as "government permission is required before including commercial software", as 50% of the lawyers I polled believed. Indeed, CENDI's "Frequently Asked Questions About Copyright: Issues Affecting the U.S. Government" question 4.4 seems to clearly say that government approval is required.
If so, approval doesn't need to be onerous. It could be gained by listing the planned inclusions and dependencies in the proposal, and then updating it using the same process as -7017 defines for noncommercial software.
If -7014(d) is interpreted this way, MIT- and BSD-licensed software probably doesn't require government permission, because those licenses grant the same rights as "unlimited rights", and that seems exempt. You could even argue that any OSS-licensed software gives so many rights that the permission shouldn't be required, though that's dangerous ground; best to get permission in those cases. In general, if the government would permit a proprietary commercial program to be used for some function, then it should normally permit an OSS commercial program to be used - for the same reasons. Indeed, the government should prefer OSS COTS components, because OSS components grant the government far more rights than proprietary COTS components.
It would arguably be sensible if the contractor had to get approval before including anything in a deliverable with anything other than "unlimited rights". Getting and tracking approval requires effort, but since the government has to live with the result, it would be sensible for the buyer (the government) to require approval of such decisions first. Indeed, Intellectual Property: Navigating through Commercial waters page 2-6 admits that "license rights may significantly impact the acquisition plan". In December 2007, David Emery reported that the DoD has had very positive experiences in a project with an explicit clause requiring approval before including commercial components (I get the impression it also required approval for depending on commercial components). The purpose was not to prevent such inclusion, but to prevent undesirable consequences. The clause has been especially effective at preventing unnecessary costs and inflexibility from supplier lock-in. The government could grant blanket permission for BSD-new and MIT-licensed software, the LGPL licenses, or even all OSS licenses, since compared to typical proprietary software licenses they impose trivial downstream requirements and carry a far lower risk of lock-in.
Since there are so many uncertainties in the standard clause, if the government intends to require such approval, it should state that as a separate clause.
Permission Not Required?
Clause DFARS 252.227-7014(d) could be interpreted as "government permission is not required to include commercial components", as 50% of the lawyers I polled believed. Indeed, the lawyer who seemed most knowledgeable about this specific text said that approval was not required, because this text had been overruled by later changes in the FAR (esp. section 12), which was done as part of Federal Acquisition Streamlining.
There's some external evidence for this interpretation. Intellectual Property: Navigating through Commercial waters notes that "It may be extremely difficult to determine whether the absence of a particular data/software deliverable ... is because it is being offered with unlimited rights, or because it is commercial data/software... To help identify and resolve these issues early, consider requiring a list of commercial data/software restrictions". These comments wouldn't make sense if government approval was necessary.
Under this interpretation, extant OSS COTS components (those licensed to the public with a non-government use) can be included whenever the contractor wishes, without requesting permission, just like other commercial components.
Is the GPL compatible with Government Unlimited Rights contracts, or does the requirement to display the license, etc, violate Government Unlimited?
The GPL and government "unlimited rights" terms have similar goals, but differ in details. This isn't usually an issue because of how typical DoD contract clauses work.
Any software that has a non-government use and is licensed to the public is commercial software - by definition. That includes OSS, including software licensed using the GPL, so typical extant OSS is commercial software. Normally the government only expects to get the usual commercial rights to commercial software, and not "unlimited rights". So if the software displays a license in a way that can't be legally disabled, so be it. The same would be true if you used Microsoft Windows; you can't normally disable its rights-display functions either.
In contrast, the government normally gets "unlimited rights" only when it pays for development of that software, in full or in part. Software developed by government funding would typically be termed "noncommercial software", and thus falls under different rules.
If the deliverable includes COTS OSS, that's a special issue. See the previous question, which addresses most of that case. I frankly don't know the answer if you have a single program, linked together, where part is GPL and part is unlimited rights. There's nothing fundamentally impossible about having a program in two parts: One part developed with government funding, with unlimited rights, and the second part developed entirely with private funding, and thus having different rights.
The government has the right to take software it has unlimited rights to, and link it with GPL software. After all, the government can use unlimited rights software in any way it wishes.
Once the government has unlimited rights, it can release that software to the public in any way it wishes - including using the GPL. This is not a contradiction; it's quite common for different organizations to have different rights to the same software. The program available to the public may improve over time, through contributions not paid for by the U.S. government. In that case, the U.S. government can choose to use the version to which it has unlimited rights, or it can use the publicly-available commercial version available to the government through that version's commercial license (the GPL in this case).
Can the government or its contractors release software developed under a DoD contract as OSS?
The short answer is "both the government and government contractors can release their results as open source software under the default DoD contract terms for software development, under certain conditions". The DoD's usual contract clauses for developing software (e.g., DFARS 252.227-7014) establish those conditions; I walk through them below.
These are the usual defaults; negotiations can change things, so read the contract to see if the contract changes these defaults. Below are some of the details - and there are a lot of details. I'm not a lawyer, so I can only give my best understanding, but I have checked with others who agree with this understanding.
Unlimited rights
When the government pays for software development (in full or in part), the terms "unlimited rights", "government-purpose rights", and "restricted rights" start showing up. When the government has "unlimited rights", it can release the software under OSS conditions. Let's see why that's so, and then we'll examine the conditions that enable the government to get unlimited rights.
DFARS 252.227-7014(a)(15) defines "unlimited rights" as "rights to use, modify, reproduce, release, perform, display, or disclose computer software or computer software documentation in whole or in part, in any manner and for any purpose whatsoever, and to have or authorize others to do so". In short, once the government has unlimited rights, it has essentially the same rights as a copyright holder, and can then use those rights to release that software as it sees fit. This isn't just my interpretation; "Technical Data and Computer Software: A Guide to Rights and Responsibilities Under Federal Contracts, Grants and Cooperative Agreements" by the Council on Governmental Relations (COGR) notes that "This unlimited license enables the government to act on its own behalf and to authorize others to do the same things that it can do, thus giving the government essentially the same rights as the copyright owner." (For the rest of the U.S. federal government, look at FAR 52.227-14(2) and FAR 27.401 for related information.)
Thus, if the government receives unlimited rights, the government can, at any time, release that software to others (including everyone) as public domain, or under the terms of any OSS license - including the GPL. After all, the government not only has rights to use, modify, and so on, but it has the right to authorize others to do so. And if it has those rights, it also has the right to condition the terms of those rights, and that's enough to be able to release software under any OSS license. So it can simply say "I authorize anyone to use this software under license X". I suppose someone could try to argue that transitive authorization isn't explicitly noted here, or that there's an implication that the "others" have to be listed, but I think that's grasping at straws - the very title of the rights is "unlimited", and it says "any manner" too. Once the government says, "anyone can have it or modify it", then there's no one who you can give the software to who isn't authorized to have it or modify it.
Of course, this is all conditioned on complying with other laws. There are other laws that inhibit release, such as classification, export control laws (like ITAR), patent, trademark, and so on. But if these other laws are met, the government can release software when it has unlimited rights to that software.
Can the contractor prevent this release? Once the government has unlimited rights, the answer is no. The contractor can try to convince the government that it'd be better for the government not to exercise the government's own rights, of course. But that is the government's decision to make, not the contractor's. If no government funds were used to develop the software component, then the issue of "unlimited rights" doesn't even occur. But if the contractor took the government's money, the government expects something in return. No one required the contractor to take the government's money, after all.
Now during contract negotiation, you can negotiate all sorts of things, and there are complex rules about adding restrictions when none were marked before. But once the government has unlimited rights, then it has the power to exercise them... else they wouldn't be unlimited!
When does the government get unlimited rights (so it can release as OSS)?
In the DoD, the government normally gets "unlimited rights" when the software was developed exclusively at government expense, per DFARS 252.227-7014(b)(1)(i). If the government partly paid for development of some software, under the DoD rules the government normally gets the more restrictive "government purpose rights" for 5 years, and after that the government has unlimited rights, per DFARS 252.227-7014(b)(2). Where possible, software developed partly with government funds should be broken into a set of smaller components so the "who paid for it" rules can be applied separately to each one.
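As a rough summary of that default rule, here is a simplified sketch. The function and parameter names are my own invention, the five-year window is approximated in days, and negotiated terms or special clauses can change everything; real determinations require the contract and counsel:

    from datetime import date, timedelta

    # Simplified sketch of the default DFARS 252.227-7014(b) rule.
    # Names and the date arithmetic are illustrative assumptions, not an
    # official tool; the five-year period is negotiable in real contracts.
    def default_rights(funding, start, today):
        if funding == "government":   # developed exclusively at government expense
            return "unlimited rights"
        if funding == "mixed":        # developed with mixed funding
            # government purpose rights for ~5 years, then unlimited rights
            if today < start + timedelta(days=5 * 365):
                return "government purpose rights"
            return "unlimited rights"
        if funding == "private":      # developed exclusively at private expense
            return "restricted rights"
        raise ValueError("unknown funding category: %r" % funding)

    print(default_rights("mixed", date(2008, 2, 15), date(2010, 1, 1)))
    # -> government purpose rights (becomes unlimited rights after the window)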
Before award, a contractor may identify the components that will have more restrictive rights, for the government to consider when deciding if it will award the contract to that offeror. Presumably, the government would prefer the proposals that give it more rights, so contractors that try to take government funds yet own all the rights may find that they don't get the contract at all. That list can be modified later, under certain conditions, but there are many rules which try to counter exploitation of the government by the contractor. This can get complicated; see DFARS 252.227-7014(e) through (h) for the gory details about this.
Those items that will have more restrictive rights must be marked. "Technical Data and Computer Software: A Guide to Rights and Responsibilities Under Federal Contracts, Grants and Cooperative Agreements" by the Council on Governmental Relations (COGR) notes that "federal contract and some grant regulations, especially those of DOD and DOE, rigorously require that technical data or computer software be marked with a proper notice identifying all sections where the government has limited rights. If the restricted data or computer software is not appropriately marked in accordance with the contract regulations, the government by default obtains unlimited rights. Simply stated: proprietary data that are not marked properly, are lost." If they're not marked, the government has or will have unlimited rights.
One problem in the government is that the contracting officer may be far removed from the people who must deal with that officer's decisions. Some contractors exploit this, and ask for more restrictive rights that are very harmful to the government, under the theory that the contracting officer won't notice or understand the implications. Contracting officers should make sure that they don't give away government rights before talking to representatives of those who would be affected by that decision. In addition, government employees should (where they can) make the contracting officer aware of their support intentions, so that the contracting officer doesn't give away the store.
Dayn T. Beam explains that if the government accepts less than unlimited rights, its actions could put it at significant risk, because this decision "may create a commercial monopoly. Such a monopoly might be disadvantageous to the [U.S. Government (USG)] if it wishes to establish a second source or to encourage competition. It is certainly contrary to stated policies for technology transfer from the USG to industry. Contracting Officers should proceed cautiously in such matters..." ["A Practical Guide for the Understanding, Acquiring, Using, Transferring, and Disposition of Intellectual Property by DoD Personnel" by Dayn T. Beam]
Of course, the software can only be released to the public as OSS if other laws are also met (such as classification, export control, patent law, and trademark law). For example, the government often doesn't receive a transferable patent license, so if there's a valid relevant patent, that may halt release unless other arrangements are made (or the patent expires). Trademark law is usually no problem; just remove the trademark (typically a name) so that people can differentiate between the release and whatever other product has the trademark.
Switching to the FAR briefly, most FAR contracts are based on FAR 52.227-14. By default, FAR 52.227-14(b) says that the government gets unlimited rights to data first produced in the performance of the contract and all other data delivered under the contract. By FAR definition, "data" includes "software", and "software" includes "source code", so this term "data" includes source code. However, the government loses those rights if it allows the contractor to assert copyright over the data. This loss of rights can happen if, for example, the government gives express written permission to do so (as described in FAR 52.227-14(c)) or if alternate IV has been included (which automatically gives away government rights to software it paid for). It would be possible to claim that FAR 52.227-14(c)(1)(iii) says that the government never has unlimited rights to software, but in context that text seems to apply only when the government allows the contractor to assert copyright. This interpretation (that the government starts with unlimited rights, but loses those rights if it permits the contractor to assert copyright) is supported by Gary G. Borda's 2003 presentation "Government Data Rights Under the FAR".
Government officials should be wary about giving away government rights like this; generally they should do so only if "copyright protection will enhance the appropriate dissemination or use of the data" (FAR 27.404-3(a)(2)). I would say that if the contractor agrees to release the software under a common OSS license, then that is more likely to meet this criterion. Also, as noted in Borda's presentation, the government may technically have rights but be completely unable to exercise them. In particular, the government needs to get the source code before it can do anything with it; all too often the government pays to have software developed but fails to get the source code it paid for. Government officials need to insist that they get the source code for the software they had developed; if they don't, they won't be able to compete future software improvements. A key purpose of the FAR is to "promote competition" (FAR 1.102(b)(1)(iii)), but such competition is thwarted when contractors are allowed to have sole possession of source code that the government paid to develop.
How can the government release "unlimited rights" software as OSS?
I hope lawyers create a "standard template" for such release. That way, it could be carefully vetted for long-term accuracy, and efforts made to ensure that different components released by the government could typically be mixed together. They'll need to be careful. For example, don't say "worldwide" - because we're already sending government software off-planet!
I expect that a lawyer-created release would look something like the following:
This software was originally developed under contract(s) {list}. The government holds unlimited rights to the original software, as defined in DFARS 252.227-7014(b)(1)(i), and releases it to everyone, without exception...
...and then you'd add one of these:
1. ...without any restriction (essentially a public-domain-style release), or
2. ...under the following license: {insert the license text}.
If you choose the second option, you can then fill in your OSS license. There would probably be additional clauses regarding ITAR, etc.
Why would the government release software as OSS?
The reasons vary, but they tend to be for the same reasons commercial companies do - because the government wants to lower sustainment costs (by spreading those costs among a larger group of users), gain innovation (from the larger group of users), encourage use of a standard or technology, and so on.
OSS release is also a great way to implement the government's obligations and commercialization policy. The U.S. government is obligated by its Constitution to work to "promote the general Welfare". To partly meet that obligation, the government has a policy of encouraging public access to government-developed technology through commercialization. OSS is a very efficient way of commercializing government-developed technology; it's hard to imagine a commercialization method that provides greater public access.
In some cases, OSS may be a poor sustainment choice because the technology must not be released to the public. In such cases, other approaches such as "gated source" may be better. I would love to see someone develop a set of criteria to decide when something should be gated source vs. OSS.
But beware: If the government chooses to not release something as OSS that should be OSS, there is the risk that someone else will develop and release that component as OSS. Through commercialization, that OSS is likely to quickly dominate the commercial market. The U.S. government will probably resist using the commercial product until the OSS product completely dominates, and the gated product is too obsolete to either compete or be sustained. Eventually, the government will then have to pay a large sum to convert to the competing OSS product that is controlled by others and does not suit the U.S. government's requirements as well as it could have.
Alternative: Government holds copyright
Normally the contractor holds the copyright, but that's only the normal case. It's perfectly fine for a contractor to develop software, assert its copyright, and assign (give) the copyright to the government. The government can then release the software as OSS, using a traditional licensing scheme, with the government itself as copyright holder.
Some contracts use DFARS contract clause 252.227-7020 ("Rights in Special Works") for some or all deliverables. This clause applies to works "first created, generated, or produced and required to be delivered under this contract". Under this clause the government gets unlimited rights. What's more, DFARS 252.227-7020(c)(2) requires that on delivery "the Contractor shall assign copyright in those works to the Government" unless directed otherwise by the contracting officer. The notice that must be affixed to the work is “© (Year date of delivery) United States Government, as represented by the Secretary of (department). All rights reserved." Similarly, in a contract with FAR special works data rights clause 52.227-17, the Contracting Officer may direct the contractor to assign the copyright to the Government.
The release of WorkForce Connections / EZRO as a GPL'ed program was handled this way. That is, the contractor asserted copyright, and then assigned the copyright to the U.S. government. But that's quite rare; in the DoD, unlimited rights make this unnecessary. In the EZRO case, I suspect that the lawyers involved did not understand open source software (a common problem), and thus decided to use a complicated approach instead of the simple approaches intended by the regulations.
Contractor release as OSS
Can the contractor release software as OSS if it was partly or completely funded by the DoD? Yes, as long as they hold the copyright, and they usually do in DoD contracts.
This is an area where the DFARS text is not as clear as it should be. All business contracts I've ever read clearly state who eventually owns the copyrights, or at least acknowledge who retains them, but this is not as clearly stated in the standard -7014 clause for DoD contractors. Still, it appears clear that DoD contractors normally own the copyright for any software they developed using government funding under the government contract, as well as any work they did (and funded) themselves. Under U.S. law, authors own the copyright to any written work they produce, unless a contract explicitly transfers the copyright. There is no such explicit transfer of copyright in -7014; in fact, DFARS 252.227-7014(b) states that "All rights not granted to the Government are retained by the Contractor". This makes it fairly clear that copyright ownership normally stays with the contractor. In addition, DFARS 252.227-7014(f) clearly states that contractors are permitted to add a notice of copyright to any computer software developed on the contract - and this doesn't require any permission or approval by the government. Such permission doesn't make sense unless the contractor has the copyright. So although it is not stated as clearly as many commercial contracts, contractors really do hold the copyright to works developed through government funding, normally... even if the government paid entirely for its development.
The contract can include other clauses that do transfer copyright to the government (such as -7020), but they need to be explicitly in the contract. Even when -7020 is in the contract, DFARS 227.7205(b) notes that contractors can use and disclose those works. I don't think those are enough rights to release the software as OSS, though, because it doesn't clearly give the right of modification to all. In any case, these kinds of clauses are unusual. So let's look at the normal case for DoD contracts, where the copyright is retained by the contractor.
Now, it's true that the government gets all sorts of rights to the works it pays to develop. But the government gets non-exclusive rights, even when it receives "unlimited rights", so there's no copyright law preventing a contractor from using its copyright to release the software to others under various licenses - including OSS licenses.
Again, the contractor still has to comply with other laws about release (classification, export control, patent, trademark, etc.). If they don't own the copyright themselves (maybe a third party does), then it's the copyright holder that normally has those rights.
For an interesting DoD-specific perspective on this, see Jim Stogdill's "Don't Just Use It: Build It (Building Open Source Software (OSS) in the DoD)". Another interesting article is "The Economic Motivation of Open Source Software: Stakeholder Perspectives" by Dirk Riehle (IEEE Computer, vol. 40, no. 4, April 2007, pp. 25-32).
The rules for the DoD are different from those of other departments. The overarching Federal Acquisition Regulation (FAR) for the U.S. government includes some interesting differences. FAR contract clause 52.227-14(c)(1)(i) says that "The prior, express written permission of the Contracting Officer is required to assert copyright in all other data first produced in the performance of this contract." FAR 27.404-3(a) confirms this, explaining that the ability to assert copyright isn't automatic in general U.S. government contracts: "Generally, the contractor must obtain permission of the contracting officer prior to asserting rights in any copyrighted work containing data first produced in the performance of a contract" (except for technical/scientific articles). In short, "The contractor must make a written request for permission to assert its copyright in works containing data first produced under the contract... Generally, a contracting officer should grant the contractor’s request when copyright protection will enhance the appropriate dissemination or use of the data unless [certain conditions apply]". If the purpose of obtaining permission to assert copyright is to release the software as OSS, that would certainly count as "enhancing dissemination", so that would be a strong rationale for asserting copyright. There are all sorts of conditions and exceptions; examine the clause for details.
This means that under many government contracts, a contracting officer can refuse to grant the contractor any permission to assert copyright! One unsurprising reason for this refusal is that "The Government determines that limitation on distribution of the data is in the national interest". But the government might also refuse so that it can cause release of the software as OSS. It could decide to do so using at least two rationales: option C, "the data are of the type that the agency itself distributes to the public under an agency program", or option E, "The Government determines that the data should be disseminated without restriction". Note the hair-splitting here that drives non-lawyers crazy: a contractor may own the copyright, but the government can refuse to allow the contractor to assert it.
If the government does grant the contractor the right to assert copyright, then the government loses its unlimited rights, and then the government cannot (on its own) release the software as OSS. This loss of rights is noted in FAR 52.227-14(c)(1)(iii). This interpretation (that the government starts with unlimited rights, but loses those rights if it permits the contractor to assert copyright) is supported by Gary G. Borda's 2003 presentation "Government Data Rights Under the FAR". I don't have any evidence that this FAR text applies to typical DoD contracts.
This doesn't begin to cover the rules of other countries' governments. I happen to know that the Dutch government project NOIV determined that "European public administrations that want to use software that is offered for free, such as Open Source software, do not need to organise a call for tender", as part of its "The acquisition of (open-source) software" guidance for buyers in the public and semi-public sectors.
What does the government lose without copyright?
Under the current DoD regime, the contractor usually retains the copyright, not the government. There are exceptions (the government can become the copyright holder), but that's not the usual case. Instead, the government typically ends up with "unlimited rights" which are extremely broad. The government has sufficient rights to release software as OSS, for example, once it has unlimited rights.
Some legal eagles may have noticed a loophole in this approach, but I think it's not really a loophole. Because only the copyright owner can raise a copyright claim in U.S. court, you could theoretically argue that the government cannot enforce its rights. If this were actually true, a malicious developer could take software that was released by the government under an OSS license, and perform actions expressly forbidden by the license. The developer might reason that, "yes, I don't have permission to do this, but the government can't prosecute me". For example, if the government released software using the GPL (to ensure that future public versions were not proprietary), a developer could release a proprietary binary, and argue that their actions couldn't be prevented.
I think this argument is nonsense. If the government has the right to release software under some OSS license (such as the GPL), and it chooses to do so, it has the right to enforce the provisions of that license in court. It can’t sue to enforce the copyright it doesn’t have, or receive statutory copyright damages, but it can sue for breach of the license and, presumably, get injunctive relief to stop the breach and money damages to recover royalties obtained by breaching the license (and perhaps other damages as well).
The doctrine of unclean hands strikes the killing blow to anyone trying this "loophole", anyway. According to law.com, unclean hands is "a legal doctrine which is a defense to a complaint, which states that a party who is asking for a judgment cannot have the help of the court if he/she has done anything unethical in relation to the subject of the lawsuit. Thus, if a defendant can show the plaintiff had 'unclean hands,' the plaintiff's complaint will be dismissed or the plaintiff will be denied judgment." So if the government releases software as OSS, and a malicious developer performs actions in violation of that license, then the government's courts will not enforce any of that malicious developer's intellectual rights to the result. In effect, the malicious developer will lose all rights over the result, even the rights they would normally have had! Since OSS licenses are quite generous, the only license-violating action a developer is likely to try is to release software under a more stringent license... and such releases will have little effect once everyone knows they cannot be enforced in court.
In short, the government can enforce its licenses, even when it doesn't have the copyright.
In closing
The general issue of how the FAR and DFARS clauses interact with OSS licenses is, unfortunately, an area where there isn't much help available. Some things are clear - but only after a lot of work to find out - and others aren't. The folks who manage the DFARS are planning to release some guidance (in particular, to clarify that extant OSS is COTS), as part of case 2007-D012 - and that's great! What's really needed is a set of official guidelines explaining under what conditions the government and contractors can release modified or new OSS, through various contract clauses, and why.
Are you familiar with the DFARS "Navigating IP Waters" guide? If so, is it a workable framework for capturing these OSS "IP 101" concepts?
Yes, I've read a lot of works on these topics, including Intellectual Property: Navigating through Commercial waters (sub-subtitled "Issues and solutions when negotiating intellectual property with commercial companies"). It's a good place to start if you want to understand some of the general framework for intellectual rights issues such as copyright, patents, trade secret, and trademarks. I especially recommend tables 2-2 and 2-3 (pages 2-13 and 2-14), which summarize a lot of complexity into something manageable. A nit is that it uses the term "intellectual property" in the title, a common legal term which I think is terribly misleading (knowledge does not have the same attributes as physical property!). More importantly, it omits and oversimplifies a lot of important issues - but that's probably inevitable given its goal of simplifying a complicated subject.
But the current (2001) version of the "Navigating" document is essentially useless in dealing with the open source software issues discussed in this talk. It never even mentions open source software, never mind the issues in using or developing open source software. There's no discussion about how to release contracts to develop open source software, nor contract language to do so. There's no text discussing and contrasting the most common OSS licenses, either. That's unfortunate, since even in 2001, OSS had already become significant in the IT industry in a large number of important markets (e.g., web servers and server operating systems). At the very least, it should be updated or have a new volume created to address OSS issues.
Here are a few other useful general documents on intellectual rights issues and government contracts. Unfortunately, they don't address OSS either. Also, be aware that the DoD's contracts are in many ways very different from the rest of the U.S. government's:
Is Technical Data that is available under a Creative Commons license considered open-source, even if it is not software, but can be used for software design?
Before I answer, let me explain those terms (for those who don't already know them). The "Creative Commons" movement is an effort to apply OSS concepts to copyrightable works other than software, including books, music, and video. "Creative Commons" is also the name of the primary non-profit organization that works to encourage this. Unsurprisingly, there's a strong relationship between Creative Commons and OSS. For example, the Creative Commons non-profit "recommends and uses free and open source licenses for software", and the Open Source Initiative (which maintains the Open Source Definition) releases its web content using a Creative Commons license.
The answer is "it depends", primarily because there isn't a single "Creative Commons" license. Instead, there's a suite of Creative Commons licenses, based on various options that authors select. Some of those licenses are intended to meet the definition of OSS (both the "Open Source Definition" and the "Free Software Definition") - but others clearly cannot. First, let me identify the easy cases where they clearly do not. The Creative Commons restriction "NoDerivatives" is fundamentally incompatible; by definition, an OSS license must permit modification and redistribution of those modifications, and this restriction forbids changes. The Creative Commons restriction "NonCommercial" is also fundamentally incompatible; by definition, OSS must permit any use, including commercial use. (Indeed, "noncommercial" is difficult to define, a problem Eric Raymond notes.) So any Creative Commons license that includes the NonCommercial or NoDerivatives restrictions is not OSS.
The Creative Commons licenses with no restrictions, and the restrictions Attribution (you must attribute the authors) and ShareAlike, are intended to be completely compatible with the OSS definition. Historically there have been some questions on whether or not these Creative Commons licenses are OSS, and the Creative Commons organization has updated their licenses to respond to most of those concerns. I believe that the set of Creative Commons version 3.0 licenses with (at most) the Attribution and/or ShareAlike restrictions do meet the OSS definition, though there has been some muted debate on this point. For more information on some of these discussions and issues, as well as notes about the efforts the Creative Commons organization has undergone to maximize compatibility when it developed version 3, see the Creative Commons Version 3.0 Licenses — A Brief Explanation. That article mentions the "anti-TPM" argument of Debian-legal; while they are smart people, I think this particular argument doesn't hold water, especially since Debian has accepted other licenses with essentially the same clauses.
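To restate the rule compactly, here is a small sketch of my own that checks whether a Creative Commons license code carries at most the Attribution and/or ShareAlike elements (the function name and string format are illustrative assumptions, not an official tool):

    # Encodes the rule described above: a Creative Commons license can only
    # qualify as OSS-style if it has at most the Attribution (BY) and/or
    # ShareAlike (SA) elements; NonCommercial (NC) or NoDerivatives (ND)
    # disqualify it by definition. Illustrative only, not legal advice.
    OSS_COMPATIBLE_ELEMENTS = {"BY", "SA"}

    def cc_may_be_oss(license_code):
        """license_code is a string like 'CC-BY-SA-3.0' or 'CC-BY-NC-2.5'."""
        parts = license_code.upper().split("-")
        elements = {p for p in parts if p.isalpha() and p != "CC"}
        return elements <= OSS_COMPATIBLE_ELEMENTS

    print(cc_may_be_oss("CC-BY-SA-3.0"))  # True  (intended to meet the OSS definition)
    print(cc_may_be_oss("CC-BY-NC-3.0"))  # False (NonCommercial restriction)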
However, don't just start applying Creative Commons licenses to software. The Creative Commons FAQ explains, "Creative Commons licenses are not intended to apply to software. They should not be used for software. We strongly encourage you to use one of the very good software licenses available today. The licenses made available by the Free Software Foundation or listed at the Open Source Initiative should be considered by you if you are licensing software or software documentation. Unlike our licenses -- which do not make mention of source or object code -- these existing licenses were designed specifically for use with software. Creative Commons has 'wrapped' some free software/open source licenses with its Commons Deed and metadata if you wish to use these licenses and still take advantage of the Creative Commons human-readable code and Creative Commons customized search engine technology."
The Creative Commons organization does (in another place in its FAQ) recommend using Creative Commons licenses for software documentation. There is a potential issue with doing this: Some kinds of software documentation can contain code, or code may contain a significant amount of documentation. In some development approaches, people actually merge design and code (e.g., using embedded code annotations to record the design documentation). Having incompatible licenses can potentially interfere with this. It's often wise to release any software documentation that closely relates to an OSS program using the same license as the source code -- at least as one of the license options (e.g., you could dual-license the documentation under a Creative Commons and a traditional OSS license).
What are the key differences between GPL version 2 and GPL version 3?
For a quick summary, I suggest looking at "A Quick Guide to GPLv3" by Brett Smith, published by the Free Software Foundation (FSF). After all, the FSF maintains the GNU General Public License (GPL).
To me, one of the key advantages of GPLv3 is that it is compatible with more licenses than GPLv2 was. In particular, there's a significant amount of software released under the Apache 2.0 license; both the Apache Software Foundation and the Free Software Foundation have modified their licenses (over time) to result in compatibility between them. It's also compatible with the Affero GPL.
GPLv3 also counters a trick Microsoft was starting to use to prevent competition from some OSS projects. Microsoft was offering discriminatory patent deals, which could have had very serious long-term consequences for OSS. Eben Moglen's explanation of "the be very afraid tour" explains this issue better than I can. The GPLv3 authors added language to GPLv3 that stopped this in its tracks. What's more, they did it in a clever way that in the long term may create some really serious legal problems for Microsoft. Microsoft claims that the changes in GPLv3 can't affect them, but to my knowledge they still have not given a legal justification for this claim. See the GPLv3 fourth draft rationale and this article for more about this.
GPL version 3 also clarifies a number of terms and edge cases. While that makes it a little longer, that also reduces uncertainty, and that's worth it. In particular, section 2 has new text that clarifies a common case in government contracting - it makes it 100% clear that hiring a contractor to do something for you (aka "work for hire") is considered identical to doing the work yourself under typical conditions. (For details, see the GPLv3 text beginning "You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works...".)
GPL version 3 may be more important outside the U.S.; GPL version 2 used some language based in U.S. law that made it harder to apply in legal systems that are significantly different from the U.S. legal system. For example, GPL version 2 used the word "distribute", but some jurisdictions use the word "distribute" in their own copyright laws with different meanings. To solve this, the GPLv3 authors created a new term ("convey") not used in any copyright law, and then defined it in a way that is independent of the particular jurisdiction. Thus, GPLv3's word "convey" is essentially the same as GPLv2's "distribute", but the word change makes it easier to apply globally.
One of the more controversial areas is the FSF's express intent to prevent "Tivoization" in GPLv3, that is, using hardware mechanisms to prevent users from running modified versions of the software on that hardware... even though the manufacturer can make such changes. Some people, such as Linus Torvalds, believe that as long as people have access to the software their hardware runs, and can modify it to run on some other hardware, that is not a problem. Many others, such as Richard Stallman and Alan Cox, disagree; it's worth noting that Richard Stallman began his work after finding that he could not modify the software in a printer he was using.
I believe this last point reflects an underlying difference in goals. Torvalds is primarily interested in the freedom of a developer (in this case, to take modifications of the work and incorporate them into other works). Stallman is more interested in the freedom of the end-user to control the hardware that they depend on, which means that if the manufacturer can change the software, the end-user should be able to as well. It's not helped that there were many misunderstandings about the issues when the GPLv3 was being developed (e.g., the current text does not impede the use of cryptography to authenticate software, and you can still create voting machine equipment). Those misunderstandings created the appearance of issues even in cases where there was no issue. Still, it's clear that different developers have different goals, and those differences became clearer during GPLv3 development.
How do you see the landscape changing with the release of GNU GPL v3?
Not much, really - GNU GPL version 3 primarily maintains the status quo. There will be more cooperation between projects under the Apache license and GPL'ed projects, because GPLv3 (unlike GPLv2) is compatible with the Apache License 2.0, making such combinations easier to do. The GPL version 3 closes some loopholes, and clarifies some things, which is all very nice - but for most applications there is little significant difference. The most controversial aspect of GPLv3 (anti-Tivoization) is irrelevant to most software. Many of the changes prevent problems (such as discriminatory patent licensing) that were appearing on the horizon, to keep things going the way they were. I think there was a need for a GPLv3, but it was needed primarily so that things could stay on course given the radical changes in the surrounding environment.
If Public Domain (PD) software is developed by a government person, do we need any type of license to use it?
No, no license is required. If software is "public domain" (as defined by copyright law), then anyone is allowed to do anything they want with the software. You can use it, modify it, and redistribute unmodified and modified versions without restriction. You can even create proprietary works with software in the public domain.
Software exclusively developed by U.S. federal government employee(s) as part of their official duties is considered a "government work", and if it is made available to the public it must be available as public domain software in the U.S. Normally, this means it's public domain universally. There is one oddity: in theory, the U.S. government can apply for copyright protection of "government works" in other countries (see CENDI's "Frequently Asked Questions About Copyright: Issues Affecting the U.S. Government", question "Does the Government have copyright protection in U.S. Government works in other countries?"). However, when the U.S. government does this to U.S. works, it risks encouraging other countries to do the same, and that could be a problem for everyone. Holding copyright only in foreign countries would only work if the U.S. government were willing to try to enforce its copyright in foreign courts while having nothing to enforce inside the U.S. - a peculiar arrangement. I've never heard of a copyright application for government works that are software, so this is unlikely to affect you, but it is a potential issue.
If a contractor gets paid by the government just like the salary of the government employee, is it Government produced software and thus public domain? | Slide 34 point 2 "does government employee" also include paid contractors working on government projects? | I believe the term government employee covers contractors getting paid to develop software for the government under contract [is that true?]
No, the law is very strict on this point. A contractor may be in the same room, and drink the same coffee, as a government employee. But if a contractor develops software for the government, it's a completely different set of rules than if a government employee develops software as part of their official duties.
U.S. law, specifically 17 USC 105, states that "Copyright protection is not available for any work of the United States Government". That term is further defined in 17 USC 101 as "a work prepared by an officer or employee of the United States Government as part of that person's official duties". Thus, if a work is solely authored by officers and employees of the U.S. federal government, it is not protected by copyright inside the U.S.
But if it was authored or co-authored by anyone else, then this law does not apply anyway. Instead, you switch immediately to a very complicated maze of rules, which even experts have trouble "navigating". Sorry, I wish it could be simpler.
What's worse, the government tends to be really bad at marking which works it releases are copyrighted - and which are not - leading to confusion even when it should be simple. The paper "Don’t Keep the Public Guessing: Best Practices in Notice of Copyright and Terms & Conditions of Use for Government Web Site Content" (CENDI 2004/4) has a discussion about the problem, and suggestions on what to do about it. It's not specific to software, but it's good to know about.
My government employees write code as part of their job. We want to contribute it to an OSS project that has, let's say, an LGPL license. But our source has to be public domain (PD). How do you resolve this? Since government employees cannot copyright their work, how does a government organization (1) start and then (2) contribute to an OSS project?
Actually, this is not a problem. Government employees can contribute to an existing OSS project, even though whatever they write (as part of their official duties) is public domain. Public domain (PD) software can be combined with software written under any other license (see my FLOSS license slide). So you can take an OSS project's source code, make changes, and submit those changes back to the OSS project. The changes are, in isolation, public domain. Whenever you combine works into a single piece of software, you have to meet the conditions of all the licenses. In this case, the combined code is governed by "public domain for the parts the government employee added, plus the original OSS license for the unchanged code". But wait! Since public domain imposes absolutely no conditions, the combined license of the combined work is exactly the same as the OSS project's original license... so the result is just the OSS project license, unchanged, for the combined result.
A simple analogy would be adding numbers together. Zero added to any other number "x" is the same as "x". Similarly, public domain (PD) imposes no requirements - it's like 0 when you add numbers. So when public domain code is combined with code under some other license x, the resulting whole program has license PD + x = x.
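To make that arithmetic concrete, here's a toy sketch in Python (purely my own illustration - the license names and the conditions attached to them are simplified placeholders, not legal analysis):

    # Toy model: a license is the set of conditions it imposes on a
    # combined work; combining components unions their conditions.
    CONDITIONS = {
        "Public Domain": set(),                       # imposes nothing
        "MIT":  {"preserve copyright notice"},
        "LGPL": {"preserve copyright notice",
                 "provide library source to binary recipients"},
    }

    def combined_conditions(component_licenses):
        """Union of all conditions imposed by the combined components."""
        result = set()
        for lic in component_licenses:
            result |= CONDITIONS[lic]
        return result

    # Public domain contributes the empty set, so PD + LGPL == LGPL:
    assert combined_conditions(["Public Domain", "LGPL"]) == CONDITIONS["LGPL"]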
If anyone wants to, they could take just those public domain additions, and use them in different ways. For example, imagine that a government employee (as part of their job) added a major new function to some existing LGPL library, and released it back to the library project. The additions the government employee made are public domain, so anyone else (OSS or not) could take just those changes and integrate them into their project. That is perfectly legal - but it's often impractical. Changes to a program are usually very specific to that program, and typically aren't useful in isolation.
Oh, and a warning to those who notice that a government employee has submitted a change to an OSS project: do not assume the changes are public domain just because a government employee submitted them. Check the change to see if there's a declaration that it's public domain - and ask if there isn't one! It's quite possible that the changes were not completely authored by government employees; and if any part of the change was developed by non-government-employees, different laws come into effect.
I know of one weird case where someone was a government employee and worked with an OSS project as part of his job, left the government but worked as a government contractor and made further contributions, and eventually became a government employee again and made contributions as part of his job. If you found out later that he was a government employee, and then presumed that all of his contributions to that OSS project were in the public domain, you'd be dead wrong. Believe it or not, the OSS contributions have different legal statuses, even though they were all the same individual on the same project - precisely because it matters whether he was a contractor or a government employee at the time.
Now, if by "start" you mean "how can government employees start a whole new OSS project as part of their official duties", we have a different question. The short answer is, yes, you can create an OSS project. If U.S. government employees (only) start an OSS project, then as long as only U.S. government employees contribute to it as part of their official duties, the results will be public domain. In practice, what will eventually happen is that someone else will contribute to it. At that point, if the goal is to make it as similar to "public domain" as possible, I suggest having their contributions licensed under the MIT license, making the combined work under the MIT license. In theory, they could also release their changes as public domain, but the MIT license includes some mild protections against lawsuits that are valuable for those who are not government employees (who are protected in other ways). Those other people could use any other license as well, since public domain is compatible with any license.
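If you do start such a mixed project, it helps to mark the status of each file explicitly. Here's a hypothetical file header showing one way to do it (the wording is my own illustration, not officially sanctioned text; check with counsel before adopting anything like it):

    # Portions written by U.S. Government employees as part of their
    # official duties are not subject to copyright in the United States
    # (17 USC 105).
    # Other portions are Copyright (C) 2008 Example Contributor
    # and are released under the MIT license; see the LICENSE file.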
I suppose I should note another oddness here. The OSI lawyers have historically been picky and said that "public domain is not a license, it's the absence of a license, and thus public domain is strictly speaking not an open source software license". They may be technically correct, but that's the sort of technicality that has no effect in the real world. "Public domain" works, from a legal point of view, like the world's most permissive license; it meets all the criteria for Free Software (the Free Software Definition) and for Open Source Software (the Open Source Definition). So let's just treat "public domain" as a license, since it operates exactly like one in the real world.
Does this mean that government employees developing software must make their software available to the public?
No, absolutely not, at least under most circumstances. Here's an obvious example: A cleared government employee may develop classified software; in such cases, without reclassification it would be illegal to release that software to the public.
But if software that was exclusively developed by government employee(s) as part of their official duties is made available to the public, it must be available as public domain software in the U.S. (e.g., the government cannot restrict its further use / modification in the U.S.).
An interesting case is software released through a Freedom of Information Act (FOIA) request. If someone requests software through FOIA, and the government determines that it should be released to them, then presumably it must be made available to the requestor. I personally believe that if software released through a FOIA request was developed exclusively by government employees, the released software would then also be in the public domain. And it appears that the requestor, on receipt, could release that software to the world as public domain software - there doesn't seem to be any legal mechanism to do otherwise! If that's true, then such software, once released through a FOIA request, could be used as OSS. But I don't know of anyone who has made a clear ruling on this.
In any case, these situations are relatively rare. Most government-developed software is developed by contractors (at least in part), not by U.S. government employees, so these issues are relatively unlikely to apply.
Are there export / International Traffic in Arms Regulations (ITAR) issues with OSS? | Any guidance on how to reconcile OSS, DoD Products, and ITAR compliance? The requirements of some licenses (release of the source code) seem to be at odds with the regulations. | In some cases we really can't comply with ITAR and the GPL. Sometimes we can get an export license to deliver binaries to a foreign customer but not source code. If the binary must be distributed under the terms of the GPL, then we really can't meet the terms of the export license and the terms of the GPL simultaneously. Again, architecture and structuring can help mitigate this but may not always solve it.
There are answers to this. The simplest approach for complying with ITAR while working with OSS projects is as follows: (1) make your system modular, separating the few ITAR-controlled components from the general-purpose infrastructure; (2) keep modifications among U.S. persons until they have passed a public release review; and (3) once changes are approved for public release, submit them back to the public OSS project.
To explain this further, I'll need to give some background on U.S. export controls, particularly on ITAR. Here are the basics, as I understand them.
Export Control Basics
Under U.S. laws, there are two primary types of export controls. Controls on the export of commercial and "dual use" items (items that are intended for commercial use but can also be applied to military uses) are administered under the Commerce Department's "Export Administration Regulations" (EAR). Controls on "defense articles and defense services and related technical data" (including software) are administered under the State Department's International Traffic in Arms Regulations (ITAR). In both cases, "export" basically means "release to a non-U.S. person"; it's possible to violate these rules in your own U.S. office by giving information to a non-U.S. person. These rules are separate from the classification rules... people are required by law to follow ITAR and EAR as well as the classification rules. So even if the non-U.S. person has a clearance, that does not excuse you from the EAR and ITAR. For the U.S. DoD, ITAR tends to be more at issue than EAR, so I'll focus on ITAR (the issues are similar though).
ITAR is a set of United States government regulations that control the export and import of defense-related articles and services on the United States Munitions List. They implement the provisions of the Arms Export Control Act, and are described in Title 22 (Foreign Relations), Chapter I (Department of State), Subchapter M of the Code of Federal Regulations. ITAR requires that information and material pertaining to defense and military related technologies may only be shared with US Persons unless approval from the Department of State is received or a special exemption is used. The Department of State interprets and enforces ITAR.
The issue here is that unless the information is in the "public domain", you can only export certain kinds of information if you have export approval. "Public domain" here has a different meaning than with copyright law - it means information that is "published and which is generally accessible to the public" via various means, such as subscriptions, unlimited distribution at a meeting generally accessible to the public in the U.S., or public release (unlimited distribution) in any form after approval by the cognizant U.S. department (22 CFR section 120.11). Some kinds of software functions - in particular, cryptography - are treated especially carefully.
In many ways, dealing with ITAR rules and OSS is similar to dealing with classification rules, which I discussed in my talk. For unmodified COTS OSS, follow whatever rules are already in place. COTS OSS under a permissive license can be modified and still trivially comply with the ITAR rules, simply by not releasing the modifications back to the original OSS project. But while it's easy to not release your changes back to the public OSS project, it's often a bad idea - remember, you are using OSS to do cost-sharing, and not releasing the modifications back is likely to cause your costs to rise dramatically over time. That's because as the OSS project improves the product, it won't take your changes into account, so you'll either have to maintain your modified OSS program yourself (expensive), try to "track" the OSS project (expensive), or give up maintaining it (resulting in obsolescence - eventually you won't be able to use the software as hardware and other software change). Protective licenses (like the LGPL and GPL) require you to deliver the source code to anyone who receives the binary, and forbid imposing additional restrictions on those recipients - which at least on its surface means that you can't distribute modified versions while withholding their source or restricting their redistribution. (For the moment, we'll ignore walking at the edges of what is clearly legal - see below for more in the discussion about classification.) Perhaps that's just as well; you didn't want to withhold your changes in the first place.
Simple Approach: Public Release Review (with Opticks as example)
The simple solution, if you want to modify OSS and re-release those changes, is to make your system modular, just as I discussed in my presentation regarding classified components. Identify those few custom components that you need to specially protect, and separate them from the underlying infrastructure and "glue logic" components that are necessary to make it work but do not require special confidentiality. If you use strongly protective libraries/components (like the GPL), make sure that you do not link in material that you will not be able to publicly release (representing such material as data tables is often a good way to keep it separate). In most cases you will end up with a large set of general-purpose infrastructure components that do what you need, which can be OSS, and a few specialized components that are custom and classified and/or must not be exported.
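As a sketch of what this modularity can look like in code, here's a toy plug-in loader in Python. The directory layout and the process() entry point are my own inventions, but the idea is that the publicly-releasable framework never contains the controlled plug-ins - they live in a separate tree that is simply never distributed:

    # Toy plug-in loader: the open framework publicly releases only the
    # code in plugins/public/; export-controlled plug-ins live in a
    # separate tree that is never distributed.
    import importlib.util, os

    def load_plugins(plugin_dir):
        """Load every *.py file in plugin_dir as a plug-in module."""
        plugins = []
        for name in sorted(os.listdir(plugin_dir)):
            if not name.endswith(".py"):
                continue
            path = os.path.join(plugin_dir, name)
            spec = importlib.util.spec_from_file_location(name[:-3], path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            plugins.append(module)
        return plugins

    # The framework behaves identically either way; only the contents
    # of the controlled directory differ between sites.
    plugins = load_plugins("plugins/public")
    if os.path.isdir("plugins/controlled"):       # present only on-site
        plugins += load_plugins("plugins/controlled")
    for plugin in plugins:
        plugin.process()                          # assumed entry point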
Then, after you modify your OSS components internally, submit those changes to a public release review (through your government sponsor). If approved for public release, the changes can then be made public. If the COTS OSS project is on a public site (such as SourceForge, Savannah, and so on), then that's the time you can send your patches on to the main OSS project. If the COTS OSS project incorporates your changes, then other improvements made by the project will then be coordinated with your changes... meaning that you now have a reasonable sustainment approach.
For a specific example, you might look at how Ball Aerospace develops and releases Opticks. Some information is available through the Opticks news release and the BallForge site. Opticks is an open source remote sensing application and development framework. As an application, it "supports imagery, motion imagery, SAR, multi-spectral, hyper-spectral, and other types of remote sensing data". At its core it is a "remote sensing development framework" (licensed under LGPL). Some of the modules that are plugged into the framework are classified and/or controlled by ITAR, but the framework itself and some basic plug-ins are not. Indeed, there's great value to getting as many different programs as possible to use the same framework. Rather than set up their project on a public site like SourceForge, they decided to set up their own site, so that they could set up an ITAR-friendly process. When change submissions come in, only the submitter and Ball Aerospace (U.S. persons) can see those changes at first. Since those proposals aren't made to non-U.S. persons, there's no ITAR violation when submitters send their proposed changes (of the OSS components) to the Opticks project. Periodically Ball Aerospace submits the proposed changes to public release review; approved changes can then be merged into the public version and released.
Does public release review slow development down? You bet it does. In particular, you really want to have small incremental changes for efficient development - but the public release review processes encourage you to do the wrong thing, batching changes into big chunks for review instead. If you can, try to get approvals ahead-of-time for certain kinds of changes that you can anticipate. Also, work with the people who handle public release reviews to find ways to handle larger numbers of smaller patches quickly. Generally, it will be easier if each patch comes with a brief description of what it does and why it is okay to release to the public. But in the end, laws are laws - ITAR can sometimes really impede development, but that's just the way it is.
A claimed (but debatable) alternative
I've been told about another approach to ITAR as well - but I'm not sure this is valid, so I would definitely talk in depth with counsel before doing it. This approach involves some legal subtleties with the phrase "further restrictions" that, to my knowledge, have not been officially ruled on. So, let's see what this possible approach is all about. Often the concern is about a specific situation: the idea of making changes to software under the GPL or LGPL, where the resulting software would be ITAR-controlled if the change is made. Perhaps the whole program is GPL'ed, or perhaps it links in a vital GPL library, making the entire linked program GPL. Libraries using the GPL are relatively rare, but the majority of OSS applications are licensed under the GPL, so this is not an improbable case. Can you do this, and then give it to other U.S. persons? Is that legal?
The key to this question is that the GPL and LGPL do not permit you to impose new requirements on those you give software to. For example, the current version of the GPL (GPL version 3) section 10 says, "Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License... You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it."
Here's the argument in this approach, and it's subtle: The GPL forbids adding "any further restrictions" as noted above, but "you must obey the law" is obviously not a new restriction - that's always true! Since ITAR is the law of the land (and not merely a contractual agreement between parties), it's not a new restriction on anyone in the U.S. So the argument claims that you could take an LGPL or GPL'ed program, modify it, and give it to a U.S. person, without breaking export controls like ITAR. In practice, you would ask the U.S. person to sign a paper agreeing to follow U.S. law before they receive the binary. You could even require that they learn the basics of export control rules like ITAR, make it clear that the software is covered by those rules, and even confirm that they understand all this, before giving them the binary. Fundamentally, this is no different than paying for the binary of a GPL'ed program, which is explicitly permitted; it's okay to create a condition for receiving an initial binary. Those who receive the binary of a GPL'ed program have the right to ask for the source, but that's fine - give them the source.
The question is - have you created a further restriction, which is forbidden by the LGPL and GPL? The argument is that the answer is "no": you didn't add a new requirement - you just required someone to obey existing law. Now, what can the recipient do? Well, they can choose to give the binary to another U.S. person in the same way - but that is fine and doesn't violate ITAR. If they decided to export the binary and/or source of the software to a non-U.S. person, that may be a violation of the ITAR rules... but that is the fault of the exporter (who knew the rules), not of the person who created the unexportable modification in the first place. A non-U.S. person may ask for the source code, but if they didn't receive the binary, they have no right to the source, and the GPL does not obligate you to give the binary to anyone. Note, however, that you cannot put this binary or source in a place where non-U.S. persons can have access to it (until it's cleared for public release), so even if this is legally fine, you still have the problem that you have a limited pool of people who can maintain it. That will raise sustainment costs substantially compared to public release.
I should add that although this looks a little like the patent issues, they are not the same. The GPL and LGPL are rightfully concerned about the effect of patents on software. (Indeed, I'm personally of the opinion that allowing software patents was a horrific mistake, one that many other countries have wisely avoided. We already have a law protecting software - it's called copyright. There's excellent economic evidence that adding patents for software has harmed, not helped, U.S. competitiveness. But I digress.)
Is this argument valid? I'm not sure, so if you decide to use it, get a counsel ruling first. I think you're better off dividing your system into modules so that you can separate COTS components (both proprietary and OSS) from custom components, and then identify a small set of ITAR-controlled components.
Fedora's example of EAR compliance
I should note that OSS projects don't seem to have trouble complying with export controls while still remaining OSS projects. For example, http://fedoraproject.org/wiki/Legal/Export describes the export control rules governing the Fedora Linux distribution (primarily the EAR in their case), yet the distribution is also freely downloadable. This is actually a variant of the debatable alternative I noted above - they release software to those they can legally release it to, and then tell them not to break the law by releasing it further.
You indicate it's possible to include [unchanged OSS programs in classified systems] however this tends to be a misleading statement (or perhaps I am confused). Linking in a library that is covered under the GPL (vs. LGPL) results in a derivative work even without making changes. This does not counter any of your arguments about structuring the code to segregate the classified and unclassified. It does seem to reinforce the misunderstanding about the license requirements of libraries distributed under the terms of the GPL (vs. LGPL). It certainly seems as if distribution of a resulting binary that is both classified and which must be distributed under the terms of the GPL is potentially problematic.
By "programs" I meant "application programs", not "libraries". Sorry if that was confusing to you! Again, it's not a problem to take an unchanged application program that is licensed under the GPL or LGPL and place it inside a classified system. People do that all the time with widely-used programs such as the Linux kernel.
Libraries can be the more complicated case, but often they aren't complicated either. If you have a library under the LGPL, and leave the library unmodified, the result is very similar. If a classified program links to an unmodified LGPL library, again, no issue. Ada programs compiled using GNAT are a good example; GNAT's runtime library uses a weakly protective license like the LGPL.
But even if libraries are modified, it's not automatically impossible. Both the LGPL and GPL merely require that anyone who receives the binary must have the right to receive the source code, and to redistribute the binary and/or source under the same license.
If the LGPL library is modified, then the recipient of the modified LGPL library's binary must be able to get its source, and redistribute the source of the library. But often this is a non-problem; if the application using the library is classified, but the LGPL library is unclassified (e.g., it's a math library), there's no classification issue. The key here is that if you use an LGPL library, or a modified version of one, you need to be prepared to hand that library's source code to those you give the binaries to.
Now we finally reach the case you were concerned with: a library covered by the GPL (not the LGPL). These are actually relatively uncommon; while I don't have statistics handy, most libraries I see are covered by weakly protective or permissive licenses, not strongly protective licenses like the GPL. But if you do encounter one, then yes, you normally should not link the GPL-covered library into a larger classified program.
But note that this is only a limitation on directly linking the library into the program that is itself classified, and only when the GPL is the sole license. As I noted in my talk, the correct answer is "don't do that". For maintainability you should be dividing your application into modules anyway. In many cases, you can easily divide up the system into components so that the rare GPL'ed library is not directly linked into a classified work. (Through layers, representing it as data, separating components so they don't create a single linked executable at run-time, and so on.) Failing that, many companies offer dual licenses: the GPL, and a separate for-pay license that grants the ability to embed the library into a proprietary (or classified) work.
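To illustrate the "separate components at run-time" option, here's a minimal Python sketch in which the GPL'ed code runs as its own program and the rest of the system talks to it over a pipe, rather than linking it in. The gpl_tool command and its flag are hypothetical, and whether a given separation is legally sufficient is exactly the kind of question to take to counsel:

    # Toy sketch: instead of linking a GPL'ed library into the classified
    # executable, run the GPL'ed code as a separate program and talk to
    # it over stdin/stdout. "gpl_tool" and "--stdin" are hypothetical.
    import subprocess

    def transform(data: bytes) -> bytes:
        """Send data to the separate GPL'ed process; return its output."""
        result = subprocess.run(
            ["gpl_tool", "--stdin"],
            input=data,
            stdout=subprocess.PIPE,
            check=True,
        )
        return result.stdout

    print(transform(b"unclassified request data"))

The same arm's-length approach works with sockets or intermediate files; the point is that the pieces remain separate programs rather than one linked executable.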
There are actually some interesting legal disputes here, which (to my knowledge) have no official government resolution.
First, there's an argument that you can make classified programs "in house" to the government. The authors of the GPL license have always actively worked to ensure that anyone can make "private changes", and not be forced into releasing source code to someone unless they've released the corresponding binary. As a result, the GPL limitations do not trigger on use; they only trigger when the software is distributed (the terminology of GPL version 2) or conveyed (the terminology of GPL version 3). The GPLv3 specifically enshrines an interpretation of work-for-hire that was not as clear in GPLv2, and this interpretation covers many typical government contracts. Under this interpretation, if a contractor makes modifications exclusively for the U.S. government, and returns those changes to the U.S. government, no distribution/conveying has occurred. Furthermore, it's quite reasonable to think that if the U.S. government develops a system, and then delivers it to other U.S. government personnel, it still has not actually distributed/conveyed anything... because it's still in-house. Indeed, for a classified system, you're quite limited in who you're allowed to give the system to! So under this interpretation, the U.S. government could have a contractor modify an existing GPL program to add classified material, get it back, have another contractor make further changes (and get it back), deliver the final system to another government employee (including military personnel)... and never have distributed/conveyed in a way that triggers these other GPL conditions. That's because under this interpretation, these were all "in house" private changes.
I think that this interpretation is valid - but it has a major catch. The problem is that the U.S. government could never make a sale, lease, or transfer to anyone else (say an ally) - because then it would count as distribution/conveying, and they could not impose classification limits on that other party. And it's impossible to predict such things in advance. In the first Gulf War, Saddam Hussein threatened Israeli civilians, and the U.S. suddenly needed to provide PATRIOT anti-missile systems to Israel. So it's best to not use this approach often.
There's a second legal argument which is even more complex and uncertain: It may always be okay to link classified material into GPL programs, creating a "classified GPL program", because of a legal quirk. Again, the GPL does not require that you release source to someone unless you've released the binary to them. The GPL also forbids adding new requirements to its license, but the GPL cannot (of course) contravene the laws of the land. So it's perfectly okay to remind recipients that they must obey all laws regarding the software; that would not be an additional requirement from the GPL's point of view. The argument is then as follows: If you take a GPL'ed program, add changes that make it classified, and give its binary to others with the necessary clearances, you've broken no law... as long as the recipients have the right to get the source from you. They, in turn, can give that binary to others with the appropriate clearances, who can in turn demand the source. But because the laws of the land forbid handing classified material to uncleared personnel, none of these people can release the modified classified program to the public... so the program would stay both legal and classified. This is a very different circumstance from the usual rules involving commercial software, because classification rules are created by governments for their own uses. Is this argument valid? I don't know. It seems like it shouldn't be valid, since it appears on the surface to go against the whole point of the GPL. Yet to my knowledge, no one with the authority to give official rulings about this has issued an interpretation. It's not even clear that this interpretation is good public policy, even if it turns out to be valid. I recommend staying far away from using this argument until a legal authority makes a public and definitive ruling on this. This is a case where many discussions with counsel would be absolutely required.
This is a good place to note a general important issue about OSS and the law, now that I've noted these knotty questions at the edge of current legal understanding. There are a vast number of legal circumstances where the answer regarding OSS is clear and simple, but many of the lawyers who are supposed to understand the issues do not know the answers. Not knowing something isn't too bad, all things considered - what's worse is that some of them get the easy cases completely and obviously wrong. We need to get the intellectual rights lawyers educated in a hurry! But there are some questions - such as the ones above - where the law appears to be murky. We desperately need wise lawyers in authoritative positions to (1) deeply study and analyze the issues, and (2) make formal rulings so the rest of us will know what we are (and are not) allowed to do. I hope that there will be a future effort to identify the important situations where the law is not yet clear regarding OSS, and then work to make them clear through rulings and changes in law. Until then, it's not hard to use or develop OSS while avoiding legal complexities.
How are divergent threads of development resolved into a common base for future development?
The normal "divergent threads" (parallel development processes) caused by multiple different people simultaneously making changes are primarily handled by a software configuration management (SCM) process that uses SCM programs and key developers/integrators.
Decades ago, it was common for developers to "lock" the files they were about to change, so others could not change them. They would then modify those locked files over the course of days or weeks, and then store the new versions when they were done (unlocking those files). As systems got larger, and thus had more developers, this "locking" approach meant that development would often be endlessly delayed because developers were (1) waiting for their turn to get the lock, or (2) endlessly coordinating with the lock-holder to manually make their changes as well. If time and money were no object, this would not be a problem, but time and money always matter greatly.
Modern SCM systems (such as Subversion, git, Mercurial, Monotone, and even the ancient CVS) allow developers to make changes simultaneously, and then merge those changes later. Most of the time, the SCM systems can automatically merge the changes from different development threads. In a few cases there will be conflicts that cannot be automatically resolved; at that point a human must manually resolve the conflicts. There are ways to reduce the impact of this; for example, if parallel lines of development will be long-lived, humans may choose to set up explicit "branches" with the SCM to ease later integration. The strategy of allowing parallel development turns out to be much more effective overall, because it means that development time is (on the whole) substantially faster and takes less effort. The resulting "OODA loop" is so much faster that lock-based approaches have been abandoned by most software developers. One exception is in cases where developers are required to use tools that cannot support lockless approaches; such projects are at serious risk, because they cannot compete with the development tempo of any competing project using a more modern development process.
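Here's a toy sketch in Python of the basic idea behind automatic merging: apply both sets of changes when they touch disjoint regions of the common base, and flag a conflict otherwise. Real SCM systems handle many more cases (identical changes from both sides, insertions at the same point, renames, and so on); this is only meant to show why most merges need no human at all:

    import difflib

    def edits_against(base, version):
        """Return (start, end, replacement) edits that turn base into version."""
        matcher = difflib.SequenceMatcher(None, base, version)
        return [(i1, i2, version[j1:j2])
                for tag, i1, i2, j1, j2 in matcher.get_opcodes()
                if tag != "equal"]

    def merge3(base, ours, theirs):
        """Apply both edit sets if they touch disjoint regions of base;
        raise on overlap (a conflict a human must resolve)."""
        edits = sorted(edits_against(base, ours) + edits_against(base, theirs))
        merged, pos = [], 0
        for start, end, replacement in edits:
            if start < pos:                  # the two sides collide
                raise ValueError(f"conflict near base line {start}")
            merged += base[pos:start] + replacement
            pos = end
        return merged + base[pos:]

    base   = ["alpha", "bravo", "charlie", "delta"]
    ours   = ["alpha", "BRAVO", "charlie", "delta"]   # changed line 2
    theirs = ["alpha", "bravo", "charlie", "DELTA"]   # changed line 4
    print(merge3(base, ours, theirs))
    # -> ['alpha', 'BRAVO', 'charlie', 'DELTA']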
Would you recommend Subversion software configuration management (SCM) tool to support a new software development program?
Absolutely! I'm currently using Subversion in several software development projects I lead, and it works marvelously well for many projects.
I should note that there are two competing approaches to SCM: centralized SCM and distributed SCM. Centralized SCM is the traditional approach, and Subversion is designed for (and excels in) centralized SCM. Distributed SCM is a worthwhile alternative, but it's very different, and there are many competing tools that support it (including Mercurial, git, Monotone, bazaar-ng, and others; Subversion does not support this style well). Among the distributed OSS SCM systems, "git" is the most featureful and is often a performance winner (it's used by the Linux kernel developers), but it has a complicated user interface with a steep learning curve. Mercurial (hg) is a useful alternative with a much simpler user interface; bazaar-ng and Monotone have their backers too.
Can we review the software dependencies as well? I believe that your answer glosses over the complexity of OSS licensing, and the new responsibilities imposed due to license incompatibility, and related chained licensing obligations?
In theory this is very complex, because there are a vast number of different OSS licenses, and many of them are incompatible with each other. In practice this is typically very easy, because only a few OSS licenses are actually in use, and they tend to be compatible with each other.
Software developers need to make sure that all the components they assemble into a linked executable are compatible. My "FLOSS license slide" can help you determine that. For the components that are pre-linked and sent to users, this is entirely a software developer's problem, and since developers know what libraries they are using, this is usually easy to resolve. Even in cases where dependency is transitive (I develop application A, A depends on library B, and B depends on library C), developers still have to install those components before they can run (and test) them. So developers have plenty of insight into the licenses of their components, to make sure that they stay within the law.
The challenges can come when there are dynamically-linked libraries. The GPL's steward (the FSF) interprets a dynamically-linked library as being combined into the whole application program, and thus applications that use GPL'ed libraries must be GPL-compatible.
As long as the libraries do not change their licenses, the licenses should have already been checked out by the original developers... and thus should be fine. So the rule is simple: Don't automatically accept a library upgrade if the license changes. Red Hat's Fedora project explains it this way in their Licensing page: "A license change in a package is a very serious event - it has as many, if not more, implications for related packages as ABI changes do. Therefore, if your package changes license, even if it just changes the license version, it is required that you announce it... Note that any license change... may affect the legality of portions of Fedora as a whole; ergo, FESCo reserves the right to block upgrades of packages to versions with new licenses to ensure [legal distribution]."
If all the libraries stick to the most common OSS licenses, even license changes are often a non-issue. So encourage developers to use those common licenses, because doing so eliminates a lot of problems! As long as the applications and their libraries stick to a small set of widely-used, mutually compatible licenses, you're fine.
But yes, there are a few cases where the use of different licenses can be very complicated. Thankfully, they're rare in practice. The major distributors of OSS operating systems have worked hard for years to identify and deal with these issues, and their packaging systems already include license information that enable automated analysis. Over time I expect automated analysis to increasingly help in complicated cases. But the better approach is to use a small list of widely-used compatible OSS licenses. "Keep It Simple, Stupid" is still good advice.
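To give a feel for what such automated analysis looks like, here's a toy Python license checker that walks a transitive dependency graph and flags components whose license isn't in a compatibility table. The table and the dependency data are illustrative placeholders, not legal guidance:

    # Toy license checker: flag any (transitive) dependency whose license
    # isn't on a known-compatible list. The list and graph are examples.
    COMPATIBLE_WITH_GPL = {"Public Domain", "MIT", "BSD-new", "LGPL", "GPL"}

    deps = {
        "myapp":   {"license": "GPL",  "uses": ["libmath", "libnet"]},
        "libmath": {"license": "MIT",  "uses": []},
        "libnet":  {"license": "LGPL", "uses": ["libutil"]},
        "libutil": {"license": "OpenSSL", "uses": []},   # GPL-incompatible
    }

    def check(name, seen=None):
        """Walk the dependency graph, warning on unlisted licenses."""
        seen = set() if seen is None else seen
        if name in seen:
            return
        seen.add(name)
        lic = deps[name]["license"]
        if lic not in COMPATIBLE_WITH_GPL:
            print(f"WARNING: {name} is under {lic}, not on the compatible list")
        for dep in deps[name]["uses"]:
            check(dep, seen)

    check("myapp")   # -> warns about libutil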
What risks [does] OSS bring for internal auditing?
I'm not sure what you mean by this question, but I'll try. By itself, using OSS is not automatically bad - indeed, the whole point of developing software is to solve a problem, and if OSS helps you solve it, great! In fact, I think that OSS's existence has been (on the whole) more helpful than harmful. This genie cannot be put back in the bottle, anyway; OSS exists, and will exist. Since OSS is part of the global environment, you now need to figure out how to use it as an opportunity and reduce risks where appropriate.
The existence of OSS that is useful, and easily downloadable into projects, does present the opportunity for undesirable events. Indeed, any new technology creates new opportunities for misuse. Here are two potential issues to audit for, and ways to address them: (1) developers may copy OSS code into a system without recording that they did so, creating license obligations that no one is tracking - so require that all incorporated OSS (and its license) be recorded as part of normal configuration management; (2) OSS components copied in without being tracked may quietly become outdated, leaving unmaintained vulnerabilities in fielded systems - so keep an inventory of the components you use and monitor them for security updates.
For both issues, there are ways to recover if you haven't managed your software development properly. You can use tools like comparator to see if a particular OSS program's code has been included in your software. There are also companies (such as Black Duck) that sell code-scanning tools/services to look for OSS code, so that you can make sure that you comply with the licenses and/or patch old code.
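For the curious, here's a toy Python version of the "shred" technique that tools like comparator use: hash overlapping windows of normalized source lines from a known OSS tree, then report any windows your own tree shares with it. The paths and window size are arbitrary illustrations:

    # Toy "shred" scan: hash overlapping windows of stripped source
    # lines, then look for windows two trees have in common.
    import hashlib, os

    SOURCE_EXTS = (".c", ".h", ".cpp", ".py", ".java")

    def shreds(path, window=5):
        """Yield (hash, line_number) for each window of non-blank lines."""
        with open(path, encoding="utf-8", errors="ignore") as f:
            lines = [ln.strip() for ln in f if ln.strip()]
        for i in range(len(lines) - window + 1):
            text = "\n".join(lines[i:i + window]).encode("utf-8")
            yield hashlib.sha1(text).hexdigest(), i + 1

    def walk_sources(root):
        for dirpath, _, files in os.walk(root):
            for name in files:
                if name.endswith(SOURCE_EXTS):
                    yield os.path.join(dirpath, name)

    # Index the OSS tree, then report any shared windows in our tree.
    oss_index = {}
    for path in walk_sources("known-oss-project/"):    # hypothetical path
        for digest, line in shreds(path):
            oss_index.setdefault(digest, (path, line))
    for path in walk_sources("our-system/"):           # hypothetical path
        for digest, line in shreds(path):
            if digest in oss_index:
                src, src_line = oss_index[digest]
                print(f"{path}:{line} matches {src}:{src_line}")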
You claimed that OSS provides ease of managing licensing inventory issues on desktops and servers. FOSS versus [proprietary] commercial does not reduce your obligation to manage use and implementation. In fact, in many cases, [proprietary] commercial enterprise licenses remove the obligation to track unique use. The difference with FOSS use is that you need to track use regardless of license, due to the fact that the user (aka, govt) accepts the responsibility to manage that as a valuable asset. Without tracking this inventory that is acquired without purchase, it is likely to have software running in the government with unmaintained vulnerabilities, among numerous other issues.
I certainly didn't mean that OSS (aka FLOSS or FOSS) completely eliminates the need to manage your systems. But the kind of management you need to do is different, many of the things you have to do for proprietary software become unnecessary, and for the rest I think they typically become easier.
As I noted earlier, OSS licenses do not impose any conditions on use; by definition, you can "use them for any purpose". What's more, you can freely copy them to as many machines as you'd like to. So the typical "management" tools that prevent copying of software or track use to comply with licenses, aren't legally required.
Now you do want to get the components upgraded when they are patched by the supplier. But the basics of how to do this are relatively simple. All major modern operating systems, proprietary and OSS, come with tools to manage installation and upgrades. Use them, and encourage your suppliers to use them. You may not have to purchase an OSS component, but you can still judge suppliers on whether or not they will provide patches, and so on. When you get a larger system (which will of course be composed of many smaller components), make sure that your supplier will provide you with patches as those smaller components' vulnerabilities are found and fixed. From a technical point of view, it is better to have the system manage the components at a fine grain, so that it's easier to patch (the patch is much smaller, since you're fixing a small subcomponent) and so that the patched version can be immediately used by all the larger components. But this is not strictly necessary, as long as the large components are fixed by your suppliers in a sufficiently timely manner.
Of course, any patch management process is in some sense a problem. What we want is software that never has a flaw. But such software is in short supply, so we must prepare for the alternative.
Would you comment on legal concerns that the OSS stems from (hidden from understanding) formerly proprietary code, and thus is not OSS?
Experience from the many years OSS has been out suggests that this is extremely unlikely in widely-used OSS projects, and it's extremely unlikely to affect OSS users. Anything in life has some risk, but this is not the risk that proprietary vendors want it to be.
In part, that's because many OSS licenses and projects have mechanisms to try to counter this. The GPL and LGPL specifically state that "You should also get your employer (if you work as a programmer) or school, if any, to sign a 'copyright disclaimer' for the program, if necessary.", and point to additional information. Many projects, particularly the large number of projects managed by the Free Software Foundation (FSF), ask for an employer's disclaimer from the contributor's employer in a number of circumstances. The Linux kernel project requires that a person proposing a change add a "Signed-off-by" tag, attesting that the "patch, to the best of his or her knowledge, can legally be merged into the mainline and distributed under the terms of [the license]. See the Developer's Certificate of Origin, found in Documentation/SubmittingPatches, for the precise meaning of Signed-off-by..." [Linux kernel Documentation/patch-tags]. In addition, a number of projects have engaged in special additional code reviews, specifically to ensure that there is no questionable code. There are even OSS programs, such as comparator, for comparing source code (to help you look for such things).
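For illustration, a kernel-style patch submission with its sign-off might look like this (the contributor, subject, and file are hypothetical):

    From: Pat Developer <pat@example.com>
    Subject: [PATCH] foo: fix off-by-one in buffer length check

    The length check in foo_read() rejected the last valid byte.

    Signed-off-by: Pat Developer <pat@example.com>
    ---
     drivers/foo/foo.c | 2 +-
     1 file changed, 1 insertion(+), 1 deletion(-)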
The primary mechanism which explains why this concern practically never occurs, however, is the publicity of the code - including who made those changes. Any company can review OSS to look for proprietary code that should not be there; it's not even hard, since both the source code and OSS tools to do this are available. A company that found any of its proprietary software in an OSS project could then determine who submitted that code to the OSS project. As a result, anyone who unlawfully copies proprietary software into an OSS project will almost certainly be caught and convicted - a powerful deterrent.
It's an unfortunate fact that in our litigious society, courts are sometimes happy to convict and punish innocent third parties who were simply using materials that they were told were legal, instead of punishing the wrongdoer. But many juries are unwilling to punish innocent parties, even when the law (unjustly) says they can, and in any case, the rarity of this kind of copying makes it a very unlikely event. After all, the person who does the copying without permission is taking a much greater risk - they face the near-certainty of getting caught. And for what? It's unlikely that whatever they earn by copying the software into OSS without permission will be enough to compensate for it, especially given the near-certainty of being caught.
Extensive analysis of the Linux kernel and Linux-based operating systems show that they are remarkably clean. The experience of SCO's in-depth evaluation of the Linux operating system is actually quite instructive, because SCO had a strong financial incentive to find improperly copied code. SCO originally claimed that the Linux kernel copied massive amounts of proprietary code; this played well in the media, but the courts required that SCO actually produce such evidence. Yet SCO's in-house analysis had already determined that there was no such copying. Michael Davidson's email of 13 August 2002 reported that after extensive 4-6 month analysis of the Linux kernel and "a large number of libraries and utilities", "we found absolutely *nothing* [...] no evidence of copyright infringement whatsoever. There is, indeed, a lot of code that is common between UNIX and Linux... but invariably it turned out that the common code was [legitimate]." SCO kept searching, and kept failing to find improperly copied code. "SCO's evidence of copying between Linux and UnixWare" by Greg Lehey discusses a 2003 revelation, where he notes that "SCO's presentation was supposed to prove that Linux is abusing SCO intellectual property. It seems not only to completely and utterly fail in this purpose, but also to show a number of problems within SCO". Since then, SCO's claims have been quickly whittled away. A large company (SCO) bet its life on finding improper copying in OSS projects, and could not do so. That is strong evidence that OSS is remarkably clean of improper material; it's not clear how many large proprietary programs could have withstood such scrutiny.
For another example, the Software Freedom Law Center did an extensive analysis of the Linux Wireless Team's ath5k Driver and gave it a clean bill of health; all was there legitimately.
The risk is primarily in the other direction, i.e., there's a risk that your proprietary software includes OSS in a way that violates the OSS license. OSS by itself is not a risk, per se - using well-tested OSS can reduce development risks considerably! But you have to follow the OSS license, or risk large penalties. Sadly, some developers of proprietary software copy OSS into proprietary software and violate the OSS license. Perhaps some do so from a lack of knowledge. But I believe many developers think that since only the binary (and not the source code) is released, or because it's part of a larger system, they won't get caught. They are grossly mistaken. Projects such as http://gpl-violations.org/ and Zlib-fingerprint are showing that it's quite easy to determine if proprietary software includes OSS. Companies like Black Duck even make money selling services to look for OSS in proprietary products (to make sure the licenses are followed).
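To show how easy such detection can be, here's a toy Python fingerprint scan. It relies on the fact that zlib's compiled code embeds version/copyright strings (e.g., "deflate 1.2.3 Copyright 1995-2005 Jean-loup Gailly"), so a naive substring search over a binary can reveal an embedded copy; the signature list here is illustrative and far from complete:

    # Toy fingerprint scan: search a binary for strings that well-known
    # OSS components embed in their compiled code.
    import sys

    SIGNATURES = {
        b"deflate 1.": "zlib compression code",
        b"inflate 1.": "zlib decompression code",
    }

    with open(sys.argv[1], "rb") as f:
        blob = f.read()
    for signature, what in SIGNATURES.items():
        if signature in blob:
            print(f"binary appears to embed {what}")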
In short, the risk of proprietary software being embedded in OSS is small, because the OSS is available for the world to review. In contrast, the risk is larger that OSS will be invalidly embedded in proprietary software, precisely because the proprietary software's source code is not (as easily) available for worldwide review.
What about indemnification or other ways to address legal risks?
Some people seem to think that OSS is too risky without actually looking at the legal situation. If you're worried about legal risks, first see if the risk is really large enough to matter to you; if it is, go to a company that will sell you something to reduce your risks. For example, many major Linux distributions include some sort of program to reduce risks; it's often one of the things you buy when you pay for a Linux distribution. Some companies and consortia also have separate programs to reduce any legal risks from using OSS (several major vendors offer indemnification programs to their customers, for example). This is in addition to the steps individual OSS projects themselves take.
Indemnification programs primarily became popular when SCO first launched its legal attacks against Linux-based systems. As it became clear that even SCO's determined efforts to find legal problems in Linux-based systems were failing, the number of people who believe such indemnification programs are needed appears to have waned as well. IBM argues that indemnification - at least against SCO - is completely unnecessary. David Berlind has an old but detailed discussion on OSS indemnification. Dana Blankenhorn (ZDNet) openly wonders if open source indemnification matters any more; as he notes, he still can't find any case "where an enterprise had to pay a copyright or patent holder because some open source program they were running contained patented or copyrighted material."
What are the risks of not using OSS?
For the U.S. government, the risks of not using OSS commercial software are often the same risks as not using proprietary commercial software. If the U.S. government insists on paying to develop and maintain every line of code itself, it will soon find itself unable to compete with those who do use commercial software, because it simply cannot afford to keep up. Increasingly, commercial software is developed as OSS, so not using OSS where appropriate means that the U.S. government will be unable to do other things it could have done with that money, and will be unable to keep up with the developments of that software.
Sometimes a proprietary COTS package, rather than OSS, is the best choice; perhaps the proprietary package has many more useful capabilities than the OSS alternatives. But this risks being locked into the vendor, with the long-term risks of much higher costs, lower functionality, and lower quality. Also, the government has no effective opportunity to make quick changes when necessary, on its own timetable; if the software has a serious security breach in the middle of a war, the vendor may be unable or unwilling to fix it in time. (The supplier may even be the government of the opposing country.) In short, such decisions need to be made on a case-by-case basis; deciding ahead-of-time to not use OSS components would be foolish, because it would rule out much of the best commercial software.
For a contractor, the choice is starker: they risk failing to obtain government contracts - perhaps all of them (including re-competes) over time. That's because a contractor unwilling to use OSS components is likely to be repeatedly beaten by contractors who are willing to use all available commercial components and approaches to win their bid. The OSS-using competitor is likely to be cheaper, can deliver more value to the customer (because they can customize more extensively to meet the customer's needs), and can deliver more rights to the customer (including the ability to make emergency changes). A contractor unwilling to use OSS components risks losing all of its government business, as it gets out-competed by smarter competitors.
Reuse of solutions for government can be important. How can we be sure that software is reusable? IE: How can governments be sure to acquire reusable software?
What does David Wheeler think about 'gated' DoD Open Source communities? Are they viable?
"Gated source" is the application of OSS approaches to a limited community (instead of the public). Common limitations are "government use only" (some call this GLOSS) or "DoD use only". Inside the government, it boils down to a slightly different way to develop GOTS applications. There have also been various commercial attempts to implement gated source (e.g., where you pay to get in, and then everyone can freely share with other members).
I think that gated source is sometimes a viable sustainment approach, but past efforts show that in most cases, gated source fails. In fact, gated source is a high-risk strategy unless there are countervailing forces. The basic problems are that (1) there are usually not enough participants inside a gated community to make the approach work, and (2) those inside the community have competing reasons not to use or improve the gated source. The reasons are legion. Contractors typically earn more money by redeveloping rather than reusing, and many contractors are loath to work on software they can't use for non-government work. Many individual developers won't want to work on it (because in-depth knowledge cannot be transferred to other settings), and more importantly, there will be relatively few developers with such knowledge (because it's only possible inside the gate) - making it hard to get people when they're needed. The biggest reason is also the simplest: the vast majority of developers don't work for government contractors, and thus, you've eliminated the vast majority of sources for cost-sharing and innovation. This problem accelerates itself: a contractor won't want to use a gated source program because very few people are working to improve it, making it a higher risk that it will not meet their needs - but since everyone else inside the gate is likely to make the same decision for the same reasons, the project is likely to never gain many developers. OSS has this problem too, but with globalization it's much easier to get enough momentum to become self-sustaining. In contrast, government gated source often never has a chance to reach self-sustainment. While "gated source" processes are often somewhat different than traditional GOTS development processes, they have fundamentally the same limitations - and we already know that traditional GOTS development processes are typically costly to sustain.
So why do people keep talking about a sustainment practice with such a demonstrably high risk of failure? The problem in the U.S. government is that, although the gated source approach has a high risk of failure, contract law and business processes make it a very low risk for individuals to propose and plan. Current FAR and DFARS clauses essentially presume that this is a common approach, and even help you get started on this road. It's fine to make it easy to use this approach; the problem is that other approaches (such as OSS) are not as easy to apply in the government, even in cases where the OSS approaches would be less risky. Because under the current regime it's easiest to use gated government approaches, DoD PMs are often misled into ignoring more viable alternatives. We need to fix government processes so that OSS approaches, as well as GOTS and proprietary approaches, can be easily used.
That said, I think government gated source ("GLOSS") approaches are a good idea in some circumstances, and should certainly be used when they make sense. If software absolutely cannot be released to the public (e.g., it's classified), has essentially no use or value in the commercial market, and is widely used/useful inside the government, then a government gated community may be a very good approach. If something has value in the commercial market, it's generally a losing battle to try to keep it gated inside the government - even if it's classified. Often a GOTS / gated community approach is more likely to work if the system is broken into a large number of subcomponents, most of which are maintained in other ways (e.g., proprietary or OSS COTS) - so that only the specific integration, plus a few small yet critical components, is gated inside the government. We need to work out ways to more easily find and establish gated communities, for the cases where they are sensible.
The Navy's SHARE program is an example of a project working to make things better for gated source. I wish them well, and hope they succeed. But it would be madness to expect gated source to replace OSS; the problems with government gated source (such as the tiny potential communities, and use limitations that discourage contractor participation) mean that government gated source projects will often fail where an OSS project would have succeeded.
Have you seen good examples of "Gated Source" / "Trusted Source" projects, i.e. open only to an authorized community? Which license suits this?
I'm having trouble thinking of any unqualified successes, though I presume there are some. That in itself makes my point: the "gated community" strategy is very risky.
AT&T attempted to set up a gated community among universities for Unix. In the end, the strictures of the limited community compelled the universities to rewrite all of Unix to create their own system (this is the origin of the BSD Unixes, and it indirectly produced Linux-based systems as well). AT&T also licensed copies to various companies which let them modify changes and keep them in-house; this would have permitted the creation of a gated community, but instead every vendor created their own incompatible proprietary versions, resulting in no combined project and a market loss for Unix as a whole.
Gated source isn't OSS, so you can't directly use an OSS license to support a gated community. OSS that is permissively licensed could be used as the basis for a gated community (because permissive licenses permit making the software proprietary). If there's an active OSS project, however, building a gated fork from it is probably self-defeating; the OSS project will likely speed past the gated project, making all investments in the gated project a waste of money.
How does OSS actually provide for security in military applications (since OSS form and structure are available for pre-analysis, and vulnerabilities can be found)?
I answered this question in my presentation - please see that for more. The short answer is that more people find and report vulnerabilities, allowing them to be fixed.
Have you reviewed the 1/10/08 DHS report whereby it found flaws in 180 open source software projects and questioned whether open source software is actually more secure than proprietary software?
I haven't seen a formal report released by DHS in January 2008 on open source software. I presume you're referring to the press releases by Coverity about their scanning program, which is sponsored by DHS. A few have tried to spin Coverity's work as disparaging open source software approaches, but that wasn't the point. Indeed, this shows the advantage of OSS approaches: here is a project that is helping to find vulnerabilities in OSS quickly, thanks to the openness of OSS, resulting in more secure programs we can actually use.
As you can tell from articles in InfoWorld, PCWorld, and C|Net, what's happening is that Coverity (a company that sells security scanning software) has a DHS-sponsored program that scans OSS programs for vulnerabilities and reports that information to the projects. Many OSS programs use that information to find and fix problems.
One article tried to spin this negatively, but it's incredibly biased and misleading. That article tries to make it appear that there was doubt that OSS programs could have vulnerabilities, which is utter nonsense. Of course OSS programs have vulnerabilities. The question is, will they be found and fixed fast enough that you won't be exploited by them (because they're found before release, or at least before attackers find and exploit them)? The article also tries to make it appear that SELinux is an odd Linux hardening measure used only by governments - yet Red Hat Enterprise Linux and Fedora both use SELinux to harden their systems, and SELinux has demonstrably rendered many previously-unknown vulnerabilities impotent. I'm not the only one who thought this article was awful. Emily Ratliffe labeled it "irresponsible journalism" and the "worst piece of yellow journalism that I have seen in quite some time" because it was so biased and misleading.
What's funny is that exactly the same information was interpreted in the opposite way by the widely-respected ZDNet. They reported in "PHP, Perl and Python pass Homeland Security test" that "Coverity, which creates automated source-code analysis tools, announced late Monday its first list of open-source projects that have been certified as free of security defects."
In short, this work by Coverity is a great example of what my presentation discussed. Most proprietary programs are not analyzed by any tool to find vulnerabilities, and if they are, it's usually one small set of people, using at most one or two tools. In contrast, there are many different organizations who examine OSS for vulnerabilities, using a variety of methods and tools, and report back so that the OSS can have those problems removed. What's more, they can do that before the software is released as "ready for normal users". The result is software that's more secure because it's been publicly examined in a variety of ways, including the time before its official release to users.
Most consistent reason given why PMs say OSS is not an attractive option: "with COTS, we have a vendor to blame & hold accountable for Cat I defects, security issues, etc...not so w/ OSS." How do you respond?
That's a non-problem. Many OSS projects do have a supplier or set of suppliers that you can hold accountable. And for those which do not, where it's important, hire someone that you can hold accountable. Problem solved.
Many of these PMs have identified the risks exactly backwards - they don't seem to be using the word "accountable" the way the dictionary defines it. Typically, with a proprietary product, PMs cannot find out what their security exposure is, nor can they take effective measures if they are exposed - in other words, PMs often cannot hold proprietary vendors accountable. How would these PMs even know when a vulnerability is found? After all, many initial vulnerability reports are sent directly to the vendor (if they're not sent to attackers), and most vendors expressly do not share vulnerability information with customers. And if a proprietary vendor fails to respond immediately to a security vulnerability, exactly what do these users plan to do? Stop using the product and get a refund? Not likely; they're often so locked in that they cannot switch, and so must accept repeated attacks. The PMs usually do not have the option of fixing the program themselves, either. In short, these PMs usually don't know their circumstances, and in most cases they can't take effective action even if they do. If you cannot know your status, or cannot act on your actual status even when you know it, then you are not holding anyone accountable.
Let's give a specific example. In June 2004, Microsoft's Bill Gates announced that they had reduced the time to patch a Windows vulnerability to less than 48 hours. But at the same time, the U.S. CERT was warning people about an Internet Explorer vulnerability so serious that users should consider using other browsers, and ZDNet revealed that the vulnerability had existed for almost 9 months. It appears that this "accountability" achieved by only using proprietary products was not working.
Many people did have an alternative: they could switch to Mozilla's OSS web browser. Many did, and after many years Microsoft suddenly started working on making its products more secure. But this kind of accountability doesn't depend on the existence of a proprietary vendor at all; it only depends on the existence of competition (and market forces) - in this case, an OSS project. No traditional "proprietary vendor" is required for competition. In fact, it was the OSS project that forced the proprietary vendor to be more accountable - not the complaints of the proprietary vendor's customers.
More recently (and more broadly), in January 2006 the paper An Empirical Analysis of Software Vendors' Patching Behavior: Impact of Vulnerability Disclosure was released. It examined the behavior of 325 vendors across 438 unique vulnerabilities, and found that OSS suppliers are 60% faster than proprietary suppliers at responding to vulnerability reports. Let's look at a specific example: Risk report: Three years of Red Hat Enterprise Linux 4 (2008) examines Red Hat Enterprise Linux over 3 years. For 81% of the critical vulnerabilities, Red Hat posted the fix to its "Red Hat Network" within one calendar day of the vulnerability becoming known to the public, with an average of 2.0 "days of risk".
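To make that metric concrete: "days of risk" is the time between a vulnerability becoming publicly known and a fix becoming available, averaged over all the vulnerabilities in the period. Here is a minimal sketch of the computation in C; the sample intervals are invented for illustration:

    /* Sketch of the "days of risk" metric: for each vulnerability, count
       the days from public disclosure to fix availability, then average.
       The sample data below is invented for illustration. */
    #include <stdio.h>

    int main(void)
    {
        /* days from public disclosure to available fix, per vulnerability */
        int days[] = {0, 1, 0, 7, 2};
        int n = sizeof days / sizeof days[0];
        int total = 0;

        for (int i = 0; i < n; i++)
            total += days[i];

        printf("average days of risk: %.1f\n", (double)total / n);
        return 0;
    }

With this invented sample the program prints 2.0 - the same kind of figure Red Hat reports.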
There's also a DoD-specific view of this. In wartime, a security vulnerability is not just a "bug" in a program; it is a failure in cyberdefense. If you have the source code, you can change the software to eliminate the weakness in your defense. Without this ability, you are dependent on a supplier to make the change, and the supplier may not have the same urgency you do.
What is lacking is to have a reference framework by which to build and deploy secure systems. A reference framework would allow government to point to something that external contractors build to - also, this would allow us to point to the "secure bit". This also reduces cost by allowing a simple migration to an open framework, while only disclosing secure bits to a few (we know that access to the secure bits could only be had by those contractors that have the appropriate clearances). This reinforces your prior point, but with a simple reference framework, it would be delivered. Without one, all the secure programs will wait until someone else does it.
This isn't really about OSS at all, and it's difficult to determine exactly what you mean by this question. But I'll try to answer it, and with an OSS slant (since that was the focus of the talk).
The principles for building secure software were first identified by Saltzer and Schroeder, and they're just as true today. I wouldn't call them a "reference framework" exactly, but they're certainly a useful set of principles. There's no need for them to be hidden away behind the clearance process; they are publicly known. While OSS can better meet one of those principles ("open design") than proprietary software can, the principles are universal - they apply to OSS COTS, proprietary COTS, and custom software as well.
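As one illustration, here is a minimal C sketch of their "least privilege" principle: do the one operation that requires root, then permanently drop privileges before doing anything else. The port number and the "nobody" UID/GID below are placeholder assumptions, not recommendations:

    /* Minimal sketch of Saltzer & Schroeder's "least privilege" principle:
       perform the single privileged operation first, then permanently drop
       root before doing anything else. Port 80 is just an example of a
       privileged resource. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(80);   /* privileged port: needs root */

        if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
            perror("bind");
            return 1;
        }

        /* Drop privileges permanently; UID/GID 65534 ("nobody") is a
           placeholder for a real unprivileged account. */
        if (setgid(65534) != 0 || setuid(65534) != 0) {
            perror("drop privileges");
            return 1;
        }

        /* From here on, a compromise of this process yields far less. */
        puts("listening with reduced privileges");
        close(fd);
        return 0;
    }

Note the order (setgid() before setuid()) and the checked return values; getting either wrong is a classic way to fail to actually drop privileges.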
Now, there's certainly a need for widely-available tools to help develop secure software. Thankfully, there are lots of OSS tools that can help; my paper High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS)... with Lots on Formal Methods gives a long list, and my Flawfinder home page lists some other OSS scanning tools for medium assurance.
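To show what such scanning looks like in practice, here is a tiny, deliberately flawed C example of the kind of weakness a scanner such as Flawfinder flags (the file and function names are invented). Running "flawfinder example.c" will flag the unbounded strcpy() call:

    /* example.c - the kind of weakness scanners such as Flawfinder flag;
       the file and function names here are invented for illustration. */
    #include <stdio.h>
    #include <string.h>

    static void greet(const char *name)
    {
        char buf[16];
        strcpy(buf, name);   /* flagged: no bounds checking (CWE-120) */
        /* safer: snprintf(buf, sizeof buf, "%s", name); */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        greet(argc > 1 ? argv[1] : "world");
        return 0;
    }

The usual fix is a bounded call such as snprintf(), as the comment suggests. The larger point is that anyone - not just the vendor - can run such tools on OSS before deploying it.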
If by this you mean that we need some sort of high assurance operating system at the bottom of the infrastructure, I agree. It would be perfectly sensible to work to develop an OSS high assurance OS, or at least some sort of hypervisor that provides isolation between virtual machines. Feel free to do so!
Your desire for a "reference framework" to enable migration might be met by W2COG's GIGlite.org, which is an open on-line environment for Open Technology Development.
How is reliability determined, and how is reliability for OSS validated in each new application?
There is no single universal measure of reliability, though there are a few that are more common than others. My "Look at the Numbers!" paper lists a number of studies that have measured reliability of OSS by various means and measures.
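For instance, one classic measure (the approach used by the well-known Fuzz studies) is the fraction of runs in which a program crashes when fed random input. Here is a minimal sketch of that kind of measurement; "./target" is a placeholder for whatever program is under test:

    /* Minimal sketch of a "crash rate on random input" reliability measure,
       in the spirit of the Fuzz studies; "./target" is a placeholder for
       the program under test, and the trial count is arbitrary. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/wait.h>

    #define TRIALS 100
    #define INPUT_LEN 1024

    int main(void)
    {
        int crashes = 0;
        srand(42);                    /* fixed seed: repeatable runs */

        for (int i = 0; i < TRIALS; i++) {
            /* Write INPUT_LEN random bytes to a file for the target to read. */
            FILE *f = fopen("fuzz.tmp", "wb");
            if (!f) return 1;
            for (int j = 0; j < INPUT_LEN; j++)
                fputc(rand() & 0xFF, f);
            fclose(f);

            pid_t pid = fork();
            if (pid == 0) {           /* child: run the target on the input */
                int fd = open("fuzz.tmp", O_RDONLY);
                if (fd >= 0) dup2(fd, STDIN_FILENO);
                execl("./target", "./target", (char *)NULL);
                _exit(127);           /* exec failed */
            }

            int status = 0;
            waitpid(pid, &status, 0);
            if (WIFSIGNALED(status))  /* killed by SIGSEGV etc. = a crash */
                crashes++;
        }

        printf("crashes: %d of %d runs (%.0f%%)\n",
               crashes, TRIALS, 100.0 * crashes / TRIALS);
        return 0;
    }

This is only a sketch - real fuzz testing mutates structured inputs and runs far more trials - but it shows how an objective, repeatable reliability number can be produced for any program, OSS or not.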
(Comment) Just using the term "more reliable" makes the issue more complex - "very, very reliable" doesn't tell anyone what the risk or hazard is. I guess my point is that stating OSS is reliable, without delineating what that means, is the start of many on-going arguments between customers and developers.
Yes, "reliability" means different things to different people, because people differ on what they consider important. Hopefully that will be the start of on-going discussions and negotiations, not arguments.
What is the status of the Open Technology Development (OTD) efforts? Are there any pilots?
I am not directly part of OTD; you're better off asking them.
What changes are needed in the DoD intellectual property regime to make OSS more widespread?
It's currently possible to use and create OSS under the current intellectual property regime, but it's quite difficult. It often requires a great deal of additional work to scrutinize contracts, gain an understanding of licensing issues, and so on. In addition, there seems to be no official DoD-sanctioned location that explains OSS issues, or aids in the use or development of OSS where appropriate.
In the short term, there need to be clear explanations of how governments and contractors can use OSS, and release software as OSS. These explanations would need to address various knotty legal questions, such as some of the questions above, in the definitive way that I cannot. We also need to establish other mechanisms so projects are easy to find, use, start, and work with.
In the long term, we need to change the FAR and DFARS to make the use and development of OSS much easier. OSS is one of the primary methods of software development in the commercial world, but the current regulations were not designed with it in mind, and thus they make it unnecessarily difficult. Under the FAR, it should be much easier for contractors to release software developed using government funds to citizens (who paid for it!) as OSS. In the DFARS, it should be clearer whether government permission is required to include commercial components, and if so, it should be easier to include OSS COTS. Both the FAR and DFARS need to make it easy for the government to prevent itself from being locked into suppliers without its explicit consent; supplier lock-in often makes it difficult to transition to better alternatives, including OSS alternatives. The government should always have a clear understanding of what rights it has to which software, and have the source code and other materials necessary to actually exercise those rights long after the contract has completed. Finally, there need to be incentives, instead of roadblocks, to "do the right thing".
A number of the questions involved legal issues. While it's not focused on government contracts, you might find this general guide useful: "A Legal Issues Primer for Open Source and Free Software Projects" from the Software Freedom Law Center.
Feel free to view David A. Wheeler's personal home page, or return to the page for my February 2008 "OSS and the DoD" webinar page.