“Commercial” is not the opposite of Free-Libre / Open Source Software (FLOSS)
Announcing “Commercial” is not the opposite of Free-Libre / Open Source Software (FLOSS) — a new and hopefully useful essay!
Why this new essay? When I talk with other people about Commercial Free-Libre / Open Source Software (FLOSS), I still hear a lot of people mistakenly use the term “commercial software” as if it had the opposite meaning of “open source software”, Free-Libre Software, OSS/FS, or FLOSS. That’s in spite of (1) the rise in commercial support for FLOSS, (2) official definitions of “commercial item” that include FLOSS, and (3) FLOSS licenses and projects clearly approving commercial support.
This confusion — that FLOSS and commercial software are opposites — is a dreadful mistake. Speakers who differentiate between FLOSS and commercial products, as if they were opposites, are simply unable to understand what is happening in the software industry. And if you cannot understand something, you cannot make good decisions about it.
If you wish to understand the 21st century (and beyond), you need to understand the basics of what controls software… because software controls everything else.
So, this essay “Commercial” is not the opposite of Free-Libre / Open Source Software (FLOSS) explains why it’s so important to understand that the word “commercial” is not the opposite of FLOSS, and then gives examples to justify the claim.
Enjoy!
path: /oss | Current Weblog | permanent link to this entry
Direct Recording Electronic (DRE) Voting: Why Your Vote Doesn’t Matter
Direct Recording Electronic (DRE) voting machines have been installed in many locations across the United States. In a DRE machine, votes are recorded only electronically - there’s no paper record that voters can check. DREs can be rigged to forge any election fairly easily, so DREs are completely inappropriate for use in any serious election. In fact, I suspect that vote-rigging has already occurred with DREs, and there is no way to prove otherwise.
In September 2006, Feldman, Halderman, and Felten posted “Security Analysis of the Diebold AccuVote-TS Voting Machine”, showing how trivial it was to completely control a common DRE voting machine. It turned out to be trivial to write vote-stealing programs. The manufacturer’s reply didn’t really address the issue at all. A report on the Nedap/Groenendaal ES3B voting computer found that anyone given brief access to the machine can gain complete and virtually undetectable control over election results - and that radio emanations from an unmodified ES3B can reveal who voted what from several meters away.
On the Secure Coding mailing list (SC-L), Jeremy Epstein noted that election officials’ responses were “amusing and scary”; when shown that DREs could be trivially subverted, instead of forbidding the use of DREs, they ignored the problem and asked why the researchers didn’t attack real systems. That’s a foolish question - anyone who really wanted to control an election would just do it and not tell anyone. The manufacturer of that system claims that all the problems reported by the researchers have been ‘fixed’. I’m willing to believe that some of the problems were fixed, but if there’s no voter-verifiable paper trail, the machines are not appropriate for real elections. Since they lack a voter-verifiable paper trail, no DRE can be trusted. Period.
I used to do magic tricks, and all magic tricks work the same way - misdirect the viewer, so that what they think they see is not the same as reality. Many magic tricks depend on rigged props, where what you see is NOT the whole story. DREs are the ultimate illusion - the naive think they know what’s happening, but in fact they have no way to know what’s really going on. There’s no way to even see the trap door under the box, as it were… DREs are a great prop for the illusion. Printing “zero” totals and other stuff looks just like a magic show to me - it has lots of pizazz, and it distracts the viewer from the fact that they have no idea what’s really going on.
I’m of the opinion that elections using DREs have already been manipulated. No, I can’t prove that an election has been manipulated, and I certainly can’t point to a specific manufacturer or election. And I sincerely hope that no elections have been manipulated. But there’s a lot of money riding on big elections, and a small fraction of that would be enough to tempt someone to do it. And many people strongly believe in their cause/party, and might manipulate an election on the grounds that it’s for the “greater good” - it need not be about money at all.
It’s crazy to assume that absolutely no one has subverted a DRE in an election, when it’s so easy and the systems are known to be weak. The whole problem is that DRE designs make it essentially impossible to detect massive fraud, almost impossible to find the perpetrator even if you detect it, and allow a single person to control an entire election (so there’s little risk of a “squealer”, as there is with other techniques for subverting elections). And if an unethical person knows they won’t be caught, it increases the probability of them doing it. Anyone who thinks that all candidates and parties are too honest to do this needs to discover the newspaper and the history books. Ballot-stuffing is at least as old as ancient Greece, and as modern as Right Now.
These voting systems and their surrounding processes would not meet the criteria for an electronic one-armed bandit in Las Vegas. Yet there’s more at stake. Many people have motives for subverting elections - DREs provide the method and opportunity. The state commissions cannot provide any justifiable evidence that votes are protected from compromise if they use DREs. And that is their job.
For more information about the problems with DREs, see Frequently Asked Questions about DRE Voting Systems. Another interesting article is Bruce Schneier’s “The Problem with Electronic Voting Machines”.
There’s a solution, and that’s verified voting - see the verified voting site. The Verified Voting Foundation advocates the use of voter-verified paper ballots (VVPBs) for all elections (so voters can inspect individual permanent records of their ballots before they are cast, and so meaningful recounts can be conducted), insists that electronic voting equipment and software be open to public scrutiny, and calls for random, surprise recounts on a regular basis to audit election equipment. I would add at least three things: (1) there must be separate voting stations and ballot readers, where the ballot reader totals are the only official votes (this prevents collusion by the voting station); (2) there should be a standard paper ballot format, which makes it possible to have independent recounts using equipment from different manufacturers, as well as making it possible to mix and match vendor equipment (lowering costs for everyone); and (3) there should be standard electronic formats for defining elections and producing results, again to dramatically reduce costs by enabling mixing and matching of equipment. I also think having 100% of the source code of these systems publicly available for inspection is important - the public must depend on these systems, so the public should be able to know what they are depending on. The Open Voting Consortium (OVC) is a non-profit organization dedicated to the development, maintenance, and delivery of open voting systems for use in public elections. OVC is developing a reference version of free voting software that runs on very inexpensive PC hardware and produces voter-verifiable paper ballots.
I hope that election officials will see the light, and quickly replace DREs with voting systems that can actually be trusted. If not, I think we’re headed for election disputes that will make the year 2000 disputes look like a picnic. If election officials don’t get rid of DREs, sooner or later we will have an election where one candidate wins even though all the polls say he or she lost… and then the courts will discover that the machines are untrustworthy and do not permit any kind of real audit or recount.
DREs are unfit for use in any elections that matter. They should be decommissioned with prejudice, and frankly, I’d like to see laws requiring vendors to take them back and give their purchasers a refund, or add voter-verified paper systems acceptable to the customer at no charge. (As I noted earlier, the paper needs to meet some standard too, so that you can use counting machines from different manufacturers to prevent collusion.) At no time was this DRE technology appropriate for use in voting, and the companies selling them would have known better had they done any examination of their real requirements. The voters were given a lemon, and they should have the right to get their money back.
path: /security | Current Weblog | permanent link to this entry
Planet definition problems, and Pluto too
Well, at the last minute the International Astronomical Union (IAU) changed its proposed definition of the term “planet”, and voted on it. Pluto is no longer a planet!
Well, maybe.
Usually definitions like this go through a lot of analysis; I think this one was rushed through at the last minute. I see three problems: It was an irregular vote, it’s vague as written, and it doesn’t handle faraway planets well. Let’s look at each issue in turn.
This was a pretty irregular vote, I think. As I noted, the proposal changed at the last minute, with no time to examine it deeply. There were 2,700 attendees, but only 424 astronomers (about 16% of them) voted on the proposal that “demoted” Pluto. And only a few days after the vote, 300 astronomers signed a petition saying they do not accept this IAU definition - almost as many as voted in the first place. That doesn’t sound like consensus to me.
More importantly, it’s too vague. That’s not just my opinion; Space.com notes that there’s a lot of uncertainty about it. Now a planet has to control its zone… but Earth doesn’t: there are lots of objects that cross Earth’s orbit. Does this mean that Earth is not a planet? I haven’t seen any published commentary on it, but I think there’s an even more obvious problem - by that reading, Neptune is not a planet, because it hasn’t cleared out Pluto and Charon. A definition that vague is not an improvement.
But to my mind, the worst problem with this definition is a practical one: it doesn’t handle planets around other stars well. We are too far away to observe small objects around other stars, and we will probably always be able to detect larger objects in faraway orbits without being able to detect the smaller objects near them. So when we detect a Jupiter-mass object orbiting another star, is it a planet? Under this definition we simply don’t know, because we may never be able to learn what else shares its orbital zone. And that is a real problem. If we can’t use the obvious word for such objects, then the definition is useless - so we need a better definition instead.
I thought the previous proposal (orbits a star, enough mass to become round) was a good one, as I noted earlier in What’s a planet? Why I’m glad there’s an argument. I think they should return to that previous definition, or find some other definition that is (1) much more precise and (2) lets us use the term “planet” in a sensible way to discuss large non-stars that are orbiting faraway stars. Whether Pluto is in, or not. Of course, none of this affects reality; this is merely a definition war. But clear terminology is important in any science.
I still think that what’s great about this debate is that it has caused many people to discuss and think about what’s happening in the larger universe, instead of focusing on the transient. That is probably the most positive result of all.
path: /misc | Current Weblog | permanent link to this entry
GPL, BSD, and NetBSD - why the GPL rocketed Linux to success
Charles M. Hannum (one of the 4 originators of NetBSD) has posted a sad article about serious problems in the NetBSD project, saying “the NetBSD Project has stagnated to the point of irrelevance.” You can see the article or an LWN article about it.
There are still active FreeBSD and OpenBSD communities, and there’s much positive to say about FreeBSD and OpenBSD. I use them occasionally, and I always welcome a chance to talk to their developers - they’re sharp folks. Perhaps NetBSD will partly revive. But systems based on the Linux kernel (“Linux”) absolutely stomp the *BSDs (FreeBSD, OpenBSD, and NetBSD) in market share. And Linux-based systems will continue to stomp on the *BSDs into the foreseeable future.
I think there is one primary reason Linux-based systems completely dominate the *BSDs’ market share - Linux uses the protective GPL license, and the *BSDs use the permissive (“BSD-style”) licenses. The BSD license has been a lot of trouble for all the *BSDs, even though they keep protesting that it’s good for them. But look what happens. Every few years, for many years, someone has said, “Let’s start a company based on this BSD code!” BSD/OS in particular comes to mind, but Sun (SunOS) and others have done the same. They pull the *BSD code in, and some of the best BSD developers, and write a proprietary derivative. But as a proprietary vendor, their fork becomes expensive to self-maintain, and eventually the company founders or loses interest in that codebase (BSD/OS is gone; Sun switched to Solaris). All that company work is then lost forever, and good developers were sucked away during that period. Repeat, repeat, repeat. That’s enough by itself to explain why the BSDs don’t maintain the pace of Linux kernel development. But wait - it gets worse.
In contrast, the GPL has enforced a consortium-like arrangement on the major commercial companies that want to use it. Red Hat, Novell, IBM, and many others are all contributing as a result, and they feel safe in doing so because the others are legally required to do the same. Just look at the domain names on the Linux kernel mailing list - big companies, actively paying people to contribute. In July 2004, Andrew Morton addressed a forum held by U.S. Senators, and reported that most Linux kernel code was generated by corporate programmers (37,000 of the last 38,000 changes were contributed by people paid by companies to do so; see my report on OSS/FS numbers for more information). BSD license advocates claim that the BSD license is more “business friendly”, but if you look at actual practice, that argument doesn’t wash. The GPL has created a “safe” zone of cooperation among companies, without anyone having to sign complicated legal documents. A company can’t feel safe contributing code to the BSDs, because its competitors might simply copy the code without reciprocating. There’s much more corporate cooperation in the GPL’ed kernel code than in the BSD-licensed kernel code. Which means that in practice, it’s actually the GPL that has been the most “business-friendly”.
So while the BSDs have lost energy every time a company gets involved, the GPL’ed programs gain every time a company gets involved. And that explains it all.
That’s not the only issue, of course. Linus Torvalds makes mistakes, but in general he’s a good leader; leadership issues are clearly an issue for some of the BSDs. And Linux’s ability early on to support dual-boot computers turned out to be critical years ago. Some people worried about the legal threats that the BSDs were under early on, though I don’t think it had that strong an effect. But the early Linux kernel had a number of problems (nonstandard threads, its early network stack was terrible, etc.), which makes it harder to argue that it was “better” at first. And the Linux kernel came AFTER the *BSDs - the BSDs had a head start, and a lot of really smart people. Yet the Linux kernel, and operating systems based on it, jumped quickly past all of them. I believe that’s in large part because Linux didn’t suffer the endless draining of people and effort caused by the BSD license.
Clearly, some really excellent projects can work well on BSD-style licenses; witness Apache, for example. It would be a mistake to think that BSD licenses are “bad” licenses, or that the GPL is always the “best” license. But others, like Linux, gcc, etc., have done better with copylefting / “protective” licenses. And some projects, like Wine, have switched to a protective (copylefting) license to stem the tide of loss from the project. Again, it’s not as simple as “BSD license bad” - I don’t think we fully understand exactly when each license has its greatest effect. But clearly the license matters; this is as close to an experiment in competing licenses as you’re likely to get.
Obviously, a license choice should depend on your goals. But let’s look more carefully at that statement; maybe we can see what type of license tends to be better for different purposes.
If your goal is to get an idea or approach widely used to the largest possible extent, a permissive license like the BSD (or MIT) license has much to offer. Anyone can quickly snap up the code and use it. Much of the TCP/IP code (at least for tools) in Windows was originally from BSD, I believe; there are even some copyright statements still in it. BSD code is widely used, and even when it isn’t used (the Linux kernel developers wrote their own TCP/IP code) it is certainly studied. But don’t expect the public BSD-licensed code to be maintained by those with a commercial interest in it. I haven’t noticed a large number of Microsoft developers being paid to improve any of the *BSDs, even though they share the same code ancestries in some cases.
If your goal is to have a useful program that stays useful long-term, then a protective (“copylefting”) license like the LGPL or GPL has much to offer. Protective licenses force the cooperation that is good for everyone in the long term, if a long-term useful project is the goal. For example, I’ve noticed that GPL projects are far less likely to fork than BSD-licensed projects; the GPL completely eliminates any financial advantage to forking. The power of the GPL license is so strong that even if you choose not to use a copylefting license, it is critically important that an open source software project use a GPL-compatible license.
Yes, companies could voluntarily cooperate without a license forcing them to. The *BSDs try to depend on this. But in today’s cutthroat market, that’s more like the “Prisoner’s Dilemma”. In the dilemma, both sides do better if they cooperate; but since the other side might choose not to cooperate, and exploit your naivete, you may choose not to cooperate either. A way out of this dilemma is to create a situation where you must cooperate, and the GPL does that.
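For readers who haven’t seen the Prisoner’s Dilemma written out, here’s a minimal sketch in Python. The specific payoff numbers are the usual textbook-style ones and are purely my illustration (they aren’t derived from any license analysis); they just show why “cooperate” is fragile unless something like the GPL takes “defect” off the table.

# Toy illustration of the Prisoner's Dilemma argument above.
# Payoff numbers are standard textbook-style values (illustration only);
# higher is better for "me".
payoffs = {
    ("cooperate", "cooperate"): 3,   # both contribute their changes back
    ("cooperate", "defect"):    0,   # I share, the other side just takes
    ("defect",    "cooperate"): 5,   # I take their code, share nothing
    ("defect",    "defect"):    1,   # nobody shares
}

def best_response(their_move):
    """Return my payoff-maximizing move, given the other side's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: payoffs[(my_move, their_move)])

for their_move in ("cooperate", "defect"):
    print(f"If they {their_move}, my best move is: {best_response(their_move)}")
# Prints "defect" both times, even though mutual cooperation (3 each)
# beats mutual defection (1 each) - which is exactly the trap a
# reciprocity-enforcing license avoids.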
Again, I don’t think license selection is all that simple when developing a free-libre/open source software (FLOSS) program. Obviously the Apache web server does well with its BSD-ish license. But packages like Linux, gcc, Samba, and so on all show that the GPL does work. And more interestingly, they show that a lot of competing companies can cooperate, when the license requires them to.
path: /oss | Current Weblog | permanent link to this entry
What’s a planet? Why I’m glad there’s an argument
The International Astronomical Union (IAU) has a proposed definition of the term “planet” that is currently being vigorously debated. I like the definition, and I’m even more delighted that there’s a vigorous discussion about the proposed definition.
We’ve used the term “planet” for thousands of years without trouble. But now that we’re learning more about the heavens, we know about many objects orbiting other stars, and about many more objects orbiting our own Sun. As a result, our simple intuitions aren’t enough. Objects will orbit other objects no matter what we call them, but since we want humans to be able to communicate with each other, it’s very important that we have definitions that help rather than hinder communication.
The latest proposed definition is actually very sensible. Basically (and I’m paraphrasing here), it defines a planet as an object that (1) orbits a star, and (2) has enough mass that its gravity can make itself “round”. The real definition has some clever nuances that make this workable. For example, Saturn rotates so fast that it bulges, but since it has enough mass to make it round, it’s clearly a planet by this definition. Also, if objects orbit each other and their center of gravity isn’t inside any of them, then they’re all planets - so both Pluto and Charon become planets under this definition. The object “2003 UB313” (unofficially called Xena) would be recognized as a planet as well, as would Ceres.
I think this definition is very sensible, because it’s based on observable basic physical properties, which are at least somewhat less arbitrary. One previous proposal was “orbits a star and is at least as big as Mercury” - which makes the “Pluto isn’t a planet” group happy, but is incredibly arbitrary. Another approach, which I happened to prefer before this proposal surfaced, is “orbits a star and is at least as big as Pluto” - which makes the “Pluto is a planet” group (like me) happy, and causes fewer changes to textbooks, but it’s still really arbitrary. All such definitions are a little arbitrary, but this new proposed definition is at least somewhat less arbitrary - this definition emphasizes a fundamental physical characteristic of the object. Namely, it has enough gravity to force a change in its own shape.
Some astronomers have complained that this proposed definition doesn’t account for “how the planets were formed”; I think this is nonsense. Humans weren’t around to watch the planets form; our current theories about planet formation may be grossly mistaken. In fact, I think it’s almost certain we’re wrong, because we can’t observe much about planets of other stars to verify or debunk our current theories. And what’s worse, since we can’t really observe much about objects orbiting other stars, we can’t really know if they’re planets based on some “formation” definition - because we can’t get enough data to figure out their ancient history. And here’s a funny thought experiment - someday we may be able to create planets ourselves. Yes, that’s not exactly likely to be soon, but it’s a great thought experiment. If we create a planet, is it a planet? It should be. Any definition of “planet” should be based on its obvious observable properties, not on best guesses (likely wrong!) about formation events of long ago. A definition that uses only unimpeachable data is far more useful, and when observing faraway objects we get very little information.
No one wants to claim that every pebble orbiting a star is a planet — our intuition says that there’s something fundamentally “big” about a planet that makes it a planet. I asked an 11-year-old what made a planet, a planet. Her first answer, before hearing about this argument, was “it’s round” (!). While that’s not a scientific survey, it does suggest that the creators of this definition really are on to something - kids can intuitively understand (and even guess at!) this definition.
Are there weird things about this definition? Sure! Charon becomes a planet too, as I noted above. Since our own moon is slowly moving away, in a few billion years our moon might become a planet (assuming the Earth and Moon don’t get destroyed by the Sun first). I guess you could “fix” the definition for those two cases by saying that the “most massive” object of a group is the planet, but I don’t see the need for this “fix”. In fact, you get an interesting insight if your definition forces you to note that their centroid isn’t inside any of them. The object Ceres, now considered an unusually large asteroid, becomes a planet. That’s okay; Ceres was originally considered a planet when it was discovered - it even has a planetary symbol. It’ll make people rewrite the textbooks, but we’ve learned so much recently that they need rewriting anyway. It’ll make it easy to see which books are obsolete - they’re the ones that say we have only 9 planets.
What’s really great about this debate is that it’s making people think about the heavens. Any definition of “planet” is in some ways arbitrary, frankly. I think this essential arbitrariness makes such definitions the hardest things to agree on in science, because there’s no way that more observations can prove or disprove a theory. This definition is much less arbitrary than others people have come up with, because it focuses on an important “change in state” of the object. And that’s a pretty good reason to endorse this definition.
In any case, this debate has caused many people to discuss and think about what’s happening in the larger universe, instead of focusing on the transient. And that is probably the most positive result of all.
path: /misc | Current Weblog | permanent link to this entry
Ohloh, SLOCCount, and evaluating Free-libre / Open Source Software (FLOSS)
A new start-up company named Ohloh has recently appeared, and is mentioned in many articles such as those from Ars Technica, eWeek, and ZDNet. This start-up will do analysis of free-libre / open source software (FLOSS) projects for customers, as well as do analysis of in-house proprietary software. To do the analysis, they’ll use a suite of FLOSS tools. Some articles suggest that they’ll also help customers determine which FLOSS programs best fit their needs, by basing their recommendations on this analysis (at least in part). Exactly what this start-up will do is hard to figure out from the news reports. That’s understandable… this is a start-up, and no doubt the start-up itself will adjust what it does depending on who shows up as customers. But the general niche they’re trying to fill seems clear enough.
This seems like a very reasonable business idea. Indeed, companies like IBM, Sun, and HP have been helping customers select programs and develop systems for a long time, and they often recommend and help transition customers to FLOSS programs. IBM has said they invested over a billion dollars in Linux, and have already recouped it, so IBM shows that this can be a lucrative business model. Ohloh will have some stiff competition if they want to muscle into the job of giving recommendations, but I love to see competition; hooray! And I have stated for years that I’d love to see more analysis of FLOSS programs, so I’m happy that it’s happening.
I am amused, though. Some of the articles about this start-up seem to suggest that this kind of data was impossible to get before. Yet after a quick web search, you’ll discover that a lot of what they discuss is already here on my website, and has been for years. The article in Ars Technica says Ohloh intends to “analyze open-source software projects and provide customers with detailed information about them, including how much it would cost to duplicate the project given an average programmer salary of US$55,000 per year. The Linux kernel, for example, clocked in at nearly 4.7 million lines of code, has had 1,434 man-years of coding effort put in so far, and would have cost approximately US$79 million in salaries.” I’m all for analyzing code, but to do that kind of analysis, you can just run my program SLOCCount. You can even see my papers discussing the results of SLOCCount applied to the Linux kernel, or my More than a Gigabuck paper (which analyzed a whole Linux distribution). In fact, I’ll bet that they are using SLOCCount as one of their tools, even though it appears to be uncredited (shame on you!). After all, they state that they’re using FLOSS programs to do their analysis, and some of the reports they’re generating include exactly the kind of data that only SLOCCount provides. This is not a license problem - I’m glad they’ve found it useful! And to be fair, they’re not just using SLOCCount; they’re also gathering other data such as check-in rates, and they’re grouping the results as a nice prepackaged web page. There are other tools that do some similar analysis of project activity, of course, such as CIA. Still, I think making information like this more accessible is really valuable… so good show!
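In case you’re curious how that kind of “lines of code, effort, cost” figure is produced in principle, here’s a minimal sketch in Python. It is emphatically not SLOCCount (which strips comments, recognizes dozens of languages, detects duplicate files, and much more); it just counts non-blank lines in a few source-file types and applies the Basic COCOMO “organic” model. The $55,000 salary is the figure from the article quoted above, while the 2.4 overhead multiplier and the file-extension list are illustrative assumptions of mine.

# A toy version of the kind of estimate SLOCCount produces. This is NOT
# SLOCCount: it ignores comments, recognizes only a few file extensions
# (an arbitrary list), and does no duplicate detection. It just counts
# non-blank lines and applies the Basic COCOMO "organic" model. The
# $55,000 salary is the figure quoted in the article above; the 2.4
# overhead multiplier is an assumption for illustration.
import os
import sys

def count_sloc(root, exts=(".c", ".h", ".py", ".sh")):
    """Very rough physical SLOC: non-blank lines in matching files."""
    total = 0
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as f:
                    total += sum(1 for line in f if line.strip())
    return total

def basic_cocomo(sloc, salary=55000.0, overhead=2.4):
    """Basic COCOMO (organic mode): estimated effort and cost."""
    ksloc = sloc / 1000.0
    effort_months = 2.4 * ksloc ** 1.05       # person-months
    effort_years = effort_months / 12.0
    cost = effort_years * salary * overhead   # salary plus overhead
    return effort_years, cost

if __name__ == "__main__":
    sloc = count_sloc(sys.argv[1] if len(sys.argv) > 1 else ".")
    years, cost = basic_cocomo(sloc)
    print(f"{sloc} SLOC, about {years:.1f} person-years, about ${cost:,.0f}")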
If they plan to go beyond number-generating, and move into the business of giving specific advice, you could do the same thing yourself. Just go read my paper, How to Evaluate Open Source Software / Free Software Programs. That paper outlines steps to take, and the kind of information to look for. Doing things well is much harder than just following some process, of course, but at least you’ll have an idea of what should be done.
So it turns out that the basic tools, and basic approaches, for doing this kind of analysis and making recommendations already exist. Is that a problem for this start-up? Not really. Not every company has the skills, knowledge, or time to do these kinds of analyses. It’s quite reasonable to hire someone who specializes in gathering a particular kind of knowledge or doing a particular kind of analysis… people who work in one area tend to get good at it. I don’t know how well they’ll do, and execution always matters, but their idea seems reasonable enough.
Frankly, what’s more interesting to me is who’s starting the company - it’s basically lots of former Microsoft folks! It’s headed by two former Microsoft executives: Scott Collison (former director of platform strategy at Microsoft) and Jason Allen (a former development manager for XML Web Services). Investors include Paul Maritz (who was a member of the Microsoft executive committee and manager of the overall Microsoft company from 1986 to 2000) and Pradeep Singh (who spent nine years at Microsoft in various management positions). Years ago, Microsoft’s Halloween documents revealed their deep concern about FLOSS, and how they were going to try to bury it. Hmm, it doesn’t seem to be very buried. I’ve no idea where this will end up, but it sure is interesting.
path: /oss | Current Weblog | permanent link to this entry
The Wisdom of Crowds and Free-Libre / Open Source Software
I just came across an interesting short essay by Dr. Les Hatton titled “Open source inevitably good”; it appears it was published in the July 2005 issue of IT Week. He has some intriguing conclusions about free-libre / open source software (FLOSS).
But first, a little about Dr. Hatton, to show that he is no lightweight. Dr. Hatton holds the Chair in Forensic Software Engineering at the University of Kingston, UK; he is a fellow of the British Computer Society; and he was named among the “World’s leading Scholars of Systems and Software Engineering (1993-2002)” by the U.S. Journal of Systems and Software. His work in computer science has primarily been in the field of software failure, especially the design and execution of experiments to determine the causes of failure in software systems and to reduce its likelihood. He’s particularly known for his work on safer language subsets, such as “Safer C”. One paper of his I especially like is “EC--, a measurement based safer subset of ISO C suitable for embedded system development” - in it, he measures the common mistakes professional developers make in C, and then proposes simple rules to reduce their likelihood (if you write software in C, it’s definitely worth reading). In any case, here is someone who understands software development, and who in particular has carefully studied why software fails and how to prevent such failures.
In his essay “Open source inevitably good”, Hatton starts by examining James Surowiecki’s interesting book “The Wisdom of Crowds: Why the Many Are Smarter Than the Few”. It turns out that crowds working together regularly beat the experts; there’s both good evidence for this (with legions of examples) and good mathematical underpinning justifying it. For this to happen, two simple conditions must be met: the members of the crowd must all have some knowledge, and they must act effectively independently.
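The mathematical underpinning is easy to see in the simplest case: when many people make independent, roughly unbiased estimates of a quantity, the error of their average shrinks roughly as one over the square root of the crowd size, so the average beats almost every individual. Here’s a small Python sketch of that effect; the true value, crowd size, and error spread are arbitrary numbers I picked for illustration, not anything from Hatton’s essay.

# A small "wisdom of crowds" simulation: many independent, roughly
# unbiased guesses, averaged together, land closer to the truth than
# almost any individual guess. Numbers below are arbitrary illustrations.
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0
CROWD_SIZE = 1000

# Each person has some knowledge (guesses centered on the truth)
# and guesses independently (errors are uncorrelated).
guesses = [random.gauss(TRUE_VALUE, 25.0) for _ in range(CROWD_SIZE)]

crowd_error = abs(statistics.mean(guesses) - TRUE_VALUE)
individual_errors = [abs(g - TRUE_VALUE) for g in guesses]
beaten = sum(err > crowd_error for err in individual_errors)

print(f"Crowd average error:      {crowd_error:.2f}")
print(f"Median individual error:  {statistics.median(individual_errors):.2f}")
print(f"Individuals beaten by the crowd: {beaten} of {CROWD_SIZE}")
# With independent errors, the average's error shrinks roughly as
# 1/sqrt(N) - which is why the crowd beats nearly every individual.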
He notes that while the “many eyeballs” theory of Raymond still operates, this “wisdom of the crowds” effect is also strong. In short, FLOSS development often appears chaotic because much of it uses a “survival of the fittest” approach: several different ideas are tried, and then the most successful approach is selected by many others. Viewed through the lens of the “wisdom of crowds”, this is an entirely sensible thing to do. He concludes in a startling way: “High quality open source isn’t a surprise, it’s inevitable.”
Obviously, there has to be a crowd for this concept to hold. But there are many FLOSS projects where it’s obvious that there is a crowd, and where the results are really very good. So take a peek at Les Hatton’s “Open source inevitably good”. It’s an interesting and provocative piece that will make you think.
path: /oss | Current Weblog | permanent link to this entry
Lisp-based programming languages normally represent programs as s-expressions, where an operation and its parameters are surrounded by parentheses. The operation to be performed is identified first, and each parameter afterwards is separated by whitespace. So the traditional “2+3” is written as “(+ 2 3)” instead. This is regular, but most people find this hard to read. Here’s a longer example of an s-expression - notice the many parentheses and the lack of infix operations:
(defun factorial (n) (if (<= n 1) 1 (* n (factorial (- n 1)))))
I think there’s a small resurging interest in Lisp-based systems, because Lisp is still very good at “programs that manipulate programs”. The major branches of Lisp (Common Lisp, Scheme, and Emacs Lisp) have not disappeared, after all. And I recently encountered a very cool and very new language in development, BitC. This language was created to write low-level programs (e.g., operating system kernels and real-time programs) that are easy to mathematically prove correct. I learned about this very cool idea while writing my paper High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS)… with Lots on Formal Methods. BitC combines ideas from Scheme, ML, and C, but it’s represented using s-expressions because it’s easy to manipulate program fragments that way. I don’t know how well it’ll succeed, but it has a good chance; if nothing else, I don’t know of anyone who’s tried this particular approach. The program-prover ACL2 uses Common Lisp as a basis, for the same reason: program-manipulating programs are easy. The FSF backs guile (a Scheme dialect) as their recommended tool for scripting; guile gives lots of power in a small package.
But many software developers avoid Lisp-based languages, even in cases where they would be a good tool to use, because most software developers find s-expressions really hard to read. S-expressions are very regular… but so is a Turing machine. They don’t call it ‘Lots of Irritating Superfluous Parentheses’ for nothing. Even if you can read it, most developers have to work with others. Some people like s-expressions as they are - and if so, fine! But many others are not satisfied with the status quo. Lots of people have tried to create easier-to-read versions, but they generally tend to lose the advantages of s-expressions (such as powerful macro and quoting capabilities). Can something be done to make it easy to create easier-to-read code for Lisp-like languages - without spoiling their advantages?
I think something can be done, and I hope to spur a discussion about various options. To get that started, I’ve developed my own approach, “sweet-expressions”, which I think is actually a plausible solution.
A sweet-expression reader will accept the traditional s-expressions (except for some pathological cases), but it also supports various extensions that make it easier to read. Sweet-expressions are automatically translated into s-expressions, so they lose no power. Here’s how that same program above could be written using sweet-expressions:
defun factorial (n)        ; Parameters can be indented, but need not be
  if (n <= 1)              ; Supports infix, prefix, & function <=(n 1)
    1                      ; This has no parameters, so it's an atom.
    n * factorial(n - 1)   ; Function(...) notation supported
Sweet-expressions add several abilities, each of which is optional (ordinary s-expressions still work): indentation can be used to indicate structure, as in the example above; function calls can be written in the traditional name(...) form; and infix notation is supported, where an infix operator is any symbol matching the pattern [+-\*/<>=&\|\p{Sm}]{1-4}|\: (that is, one to four operator or math-symbol characters, or a colon).
I call this combination “sweet-expressions”, because by adding syntactic sugar (which are essentially abbreviations), I hope to create a sweeter result.
For more information on sweet-expressions or on making s-expressions more readable in general, see my website page at http://www.dwheeler.com/readable. For example, I provide a sweet-expression reader in Scheme (under the MIT license), as well as an indenting pretty-printer in Common Lisp. In particular, you can see my lengthy paper about why sweet-expressions do what they do, and some plausible alternatives. You can also download some other implementation code.
I’ve set up a SourceForge project named “readable” to discuss options for making s-expressions more readable, and to distribute open source software that implements them (unimplemented ideas don’t go far!). I will probably need to work on other things for a while, but since I had this idea, I thought it’d be good to write it up and build a quick sample demo, so that others could build on top of it. There hasn’t been a single place for people to discuss how to make s-expressions more readable… so now there is one. There are a lot of smart people out there; giving like-minded parties a place to discuss these ideas is likely to produce something good. If you’re interested in this topic, please visit/join!
path: /misc | Current Weblog | permanent link to this entry
If you want to learn something, study what the masters do. To me that seems obvious, and yet many don’t do it. Perhaps we simply forget. So let me inspire you with a few examples…
I just got an advance copy of David Shenk’s “The Immortal Game: A history of chess” - and I’m referenced in it! Which is an odd thing; I don’t normally think of myself as a chess commentator. But I do like the game of chess, and one of my key approaches to getting better is simple: Study the games of good players. I’ve even posted a few of the games with my comments on my web site, including The Game of the Century (PGN/Text), The Immortal Game (PGN/Text), The Evergreen Game (PGN/Text), and Deep Blue - Kasparov, 1996, Game 1 (PGN/Text). It’s my Byrne/Fischer writeup that was referenced in Shenk’s book. But I didn’t create that stuff for a book, originally. I can’t play like these great players can, but I get better by studying what they do. In short, I’ve found that I must study the work of the masters.
There are many children’s educational philosophies that have, at least in part, the notion of studying good examples as part of education. Ruth Beechick’s “natural method” for teaching writing emphasizes starting by copying and studying examples of great writing. She even notes Jack London and Benjamin Franklin started by studying works they admired. Learning begins by studying the work of the masters.
I often write about free-libre/open source software (FLOSS). In part, I do so because it’s an amazingly interesting development. But there are other reasons, too. Some developers of FLOSS programs are the best in the business - you can learn a lot by seeing what they do. In short, one important advantage of FLOSS is that it is now possible for software developers to study the work of the masters.
I recently wrote the article High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS)… with Lots on Formal Methods (aka high confidence or high integrity) (I gave it the long title to help people find it). In it, I note the many tools for creating high assurance software - but there are precious few FLOSS examples of high assurance software. True, there are very few examples of high assurance software, period, but where are the high assurance software components that people can study and modify without legal encumbrances? (If you know of more, contact me.) That worries me; how are we supposed to teach people to create high assurance software if students never see any? People do not wake up one morning and discover that they are an expert. They must learn, and books about a topic are not enough. They must study the work of the masters.
path: /misc | Current Weblog | permanent link to this entry
How to read the mysterious Winmail.dat / Part 1.2 files (TNEF)
All too often nowadays people report that they “can’t open the attachment” of an email, because they only received a file named (typically) “Part 1.2” or “Winmail.dat”.
The basic problem is that in certain cases Microsoft Outlook uses a nonstandard extra packaging mechanism called “ms-tnef” or “tnef” when it sends email - typically when it sends attachments. What Outlook is supposed to do is simply use the industry standards (such as MIME and HTML) directly for attachments, but Outlook fails to do so and adds this other nonsense instead. The full name of the format is “Transport Neutral Encapsulation Format”, but that is a misleading name… it may be neutral on transport, but it obstructs reception.
Almost no other email reader can read this nonstandard format. Email clients that can’t (currently) read this format include Lotus Notes, Thunderbird / Netscape Mail, and Eudora. In fact, I’ve been told that even Microsoft’s own Outlook Express can’t read this format!
So take a look at my new article, Microsoft Outlook MS-TNEF handling (aka Winmail.dat or “Part 1.2” problem of unopenable email attachments). It gives you a brief explanation of the problem, and what to do about it, both from the sender view (how can I stop sending unopenable email?) and the receiver view (how can I read them anyway?).
path: /misc | Current Weblog | permanent link to this entry
Sony Playstation 3: Train wreck in process
I try to keep up with the general gaming business. Many of the best new computer hardware technologies first show up in the gaming world, for one thing. And for another, I was once in the business; in the mid-1980s I was lead software developer/maintainer for the first commercial multiplayer role-playing game in the U.S., Scepter of Goth. (Full disclosure: I didn’t write the original; I maintained it after it had been initially developed. Scepter may have been the first commercial multiplayer RPG in the world, but I have never gotten good enough data to show conclusively whether MUD or Scepter was first. Bartle’s MUD was clearly first in the UK, and Scepter was clearly first in the US, and neither knew of the other for a long time.) I also wrote some videogames for the Apple ][, which I sold. (I still play occasionally, but my hand/eye coordination is awful; my brother had to playtest them, since I couldn’t get far in my own games.) I generally hope for good competition, since that is what keeps innovation flowing and prices down. My hopes are getting dashed, because Sony seems to have had a full lobotomy recently.
If Sony is trying to go (mostly) out of business, it’s got a great process going. Recently about half of Sony’s income has depended on the Playstation 2, so you’d think that they would avoid bone-headed decisions that would doom them in the market as they release their next-generation console.
But the Sony Playstation 3 will come with an outrageous pricetag: starting at $599 (or $499 for a stripped-down version). Home video-game consoles have sold for $199 to $299 traditionally, and the X-Box 360 (its primary competitor) costs much less than this announced price too.
Why so much? One significant reason is that Sony is including a Blu-ray reader; Blu-ray is a proprietary video format that Sony hopes will replace DVDs, and including it is both raising the price substantially and apparently delaying shipment. Didn’t Sony learn its lesson from Betamax, its earlier costly blunder in the videotape format war? No, it appears that Sony must go out of business to learn. Betamax was supposed to be better technically (and it was in some ways), but it cost much more. In part, the higher cost was due to the lack of competing suppliers; the competing VHS market was full of suppliers who quickly marched past the proprietary format. Sony has lost big money on other proprietary formats, too. Blu-ray has all the earmarks of failing in exactly the same way. The Playstation 3 will have a hopelessly high price tag because of Blu-ray, and it looks like the Playstation 3 will go down with it. Since both Blu-ray and its competitor HD-DVD have even more egregious digital restrictions management (DRM) mechanisms built in than DVDs do, I hope both fail - their improvements frankly don’t justify abandoning DVDs in my eyes.
Ah, but the higher price tag implies better performance, right? Wrong. The Inquirer reports some serious technical flaws in the Playstation 3: it will have half the triangle setup capability of the Xbox 360, and its local Cell memory read speed is about 1/1000th of what it should be. In fact, one slide describing Playstation 3 performance had to say “no that isn’t a typo”, because the figures for this fundamental subsystem are so horrifically bad. So people will have the option of spending a lot more money for a less capable machine that is saddled with yet another failing proprietary format. And in addition, Sony is already really late with its next-gen console; if you’re not first, you need to be better or cheaper, not obviously worse and more expensive. Yes, it’ll run Linux, but I can run Linux very well on a general-purpose computer for less money and without the hampering I expect from Sony.
Has greed disabled Sony’s ability to think clearly? The Sony-BMG DRM music CD scandal, where Sony subverted a massive number of computers through a rootkit on its music CDs, just led to a big settlement. Granted, it could have been worse for Sony; under the laws of most countries, many Sony executives should probably be in jail. In the Sony-BMG case, Sony tried to force a digital restrictions management (DRM) system on users by breaking into their customers’ operating systems. The point of DRM systems is to prevent you from using copyrighted products in ways the company doesn’t approve of — even if they are legal (!). Hrmpf. The All Party Parliamentary Internet Group (APIG) in the UK recommended the publication of “guidance to make it clear that companies distributing Technical Protection Measures systems in the UK would, if they have features such as those in Sony-BMG’s MediaMax and XCP systems, run a significant risk of being prosecuted for criminal actions.” It’s fine to want money, but it’s wise to make money by making a good product — one that is cheaper or better in some way. “Get rich quick” schemes, like rootkitting your customers to keep them from doing stuff you don’t like, or trying to establish proprietary format locks so everyone has to go to you, often backfire.
What’s weird is that this was all unnecessary; it would have been relatively easy for Sony to create a platform with modern electronics that had much better performance, worth paying for, without all this. It would have been much less risk to Sony if they’d taken a simpler route. What’s more, their market share is so large that it was theirs to keep; they just had to be smart about making a good follow-on product.
Maybe Sony will pull things through in spite of its problems. I hope they don’t just collapse, because competition is a critical force in keeping innovation going and prices low. Their product’s ability to play Playstation 2 games, for example, is an advantage… but I doubt that will be enough, because the old games won’t exploit any of the advantages of a next-generation platform. If Sony can get a massive number of amazingly-good platform-unique games — ones so good that people will choose the Playstation 3 specifically for them — then maybe they can survive. But I doubt they can get that strong a corner on good games; many independents will not want to risk their companies by making single-platform games, especially one as risky as this one, and Sony is unlikely to have the finances to buy them all up or back them enough to eliminate the risks. What is more likely to happen is that there will be a few platform-unique games for Playstation 3, a few platform-unique games for its competitors (particularly XBox 360), and a few multiple-platform games… which means no lock for Sony. In short, things do not look very good right now for Sony; Sony seems to have hoisted themselves on their own petard. I don’t even see what they can do now to recover.
I think Jonathan V. Last of the Philadelphia Inquirer has it right: “Obsessed with owning proprietary formats, Sony keeps picking fights. [And] It keeps losing. And yet it keeps coming back for more, convinced that all it needs to do is push a bigger stack of chips to the center of the table. If Blu-ray fails, it will be the biggest home-electronics failure since Betamax. If it drags PlayStation 3 down with it, it will be one of the biggest corporate blunders of our time.”
path: /misc | Current Weblog | permanent link to this entry
My upcoming presentations - Date change and a new page
I’m still giving a presentation at NovaLUG, but the date has been changed from July 1 to July 8 (2006). This is because July 4 is a U.S. holiday (independence day), and there was concern that some people might not be able to come. So it will now be July 8, 10am, “Free-Libre/Open Source Software (FLOSS) and Security”. Washington Technology Park/CSC (formerly Dyncorp), 15000 Conference Center Drive, Chantilly, VA.
This has convinced me that I need a page to help people find when and where I’m speaking, so that they don’t have to march through my blogs to get the information. So here it is…
Presentations by David A. Wheeler. Just click on it, and you’ll get the latest times, places, etc., of where to go if you just can’t find something better to do with your life :-).
path: /website | Current Weblog | permanent link to this entry
Autonumbering supported in Firefox 1.5!
Here’s another reason to use Firefox as your web browser, besides the fact that Firefox has a better security record and that Firefox has better support for web standards in general. Firefox 1.5 has added autonumbering support, and sites like mine are starting to use it. If you’re using a non-compliant web browser, like the current version of Internet Explorer, you’re missing out. But let’s back up a bit to the basics: HTML.
HTML has been a spectacularly successful standard for sharing information - web pages around the world use it. I write a lot of my papers directly in HTML, because it’s easy, using HTML makes them easily accessible to everyone, and it’s a completely open standard.
But HTML has several weaknesses if you’re writing long or technical reports. One especially important one is automatic numbering of headings: the original HTML specification can’t do it. When you’re reading a long report, it can be hard to keep track of where you are, so having every heading numbered (such as “section 2.4.3”) is really helpful. This can be solved by having programs directly insert the heading numbers into the HTML text, but that’s a messy and kludgy solution. It’d be much better if browsers automatically added heading numbers where appropriate, so that the HTML file itself stays simple and clean.
The W3C (the standards group in charge of HTML and related standards) agreed that automatic numbering was important, and included support for automatic numbering in the Cascading Style Sheets (CSS) standard way back in 1998. CSS is an important support standard for HTML, so that should have been it… but it wasn’t. Both Netscape and Microsoft decided to not fully implement the standard, nor try to fix the standard so that they would implement it. Soon afterwards Microsoft gained dominant market share, and then let their browser stagnate (why bother improving it, since there was no competition?). It looked like we, the users, would never get basic capabilities in HTML like auto-numbering.
I’m happy to report that Firefox 1.5 has added support for auto-numbering headings and other constructs too. So I’ve modified my CSS file for papers and essays so that it auto-numbers headings; I’ve released the CSS file under the MIT/X license, so anyone else can use it. If you develop web content, you may want to look at examples like mine, because…
It turns out that the story is more complicated. In the process of implementing auto-numbering, the Firefox developers found a serious problem with the CSS specification. Oops! The Mozilla Firefox bug #3247 and David Flanagan’s blog discuss this further. The Firefox developers talked with the W3C, and the W3C ended up creating “CSS 2.1”, an updated/patched version of the CSS2 standard that is in the process of being formally released.
What this means is that the examples for autonumbering in the “official” original CSS2 standard won’t actually work! Instead, you need to follow a slightly different approach as defined in the patched CSS2.1 specification.
Technical stuff: the basic problem involves scoping issues. To solve it, the counter-reset property must go in the rules for the heading elements themselves (h1, h2, etc.), and not in the “before” rules (h1:before, h2:before, etc.) - in spite of all the examples in the original CSS2 spec. You can put counter-increment in either place, though the spec puts it in the :before rules, so I have too.
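For instance, a minimal style sheet following the CSS 2.1 approach looks something like this (an illustrative sketch, not my actual stylesheet; the counter names are arbitrary):

/* A sketch of CSS 2.1 heading auto-numbering (illustrative only).
   Note that counter-reset sits on body/h1/h2, NOT on the :before rules. */
body { counter-reset: chapter; }
h1   { counter-reset: section; }
h2   { counter-reset: subsection; }
h1:before {
  counter-increment: chapter;
  content: counter(chapter) ". ";
}
h2:before {
  counter-increment: section;
  content: counter(chapter) "." counter(section) " ";
}
h3:before {
  counter-increment: subsection;
  content: counter(chapter) "." counter(section) "." counter(subsection) " ";
}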
Now people have yet another reason to upgrade to Firefox. Firefox has had better standards support for some time; there are now many sites that won’t display properly (or as well) if your browser doesn’t support the standards well. But here is a clear and functionally important difference.
I’m a big believer in standards, but they can only help users if they are implemented, and they will only be implemented if users demand standards compliance. I think that the more people switch to standards-compliant browsers, and the more that sites use standards (to encourage people to switch), the more pressure it will bring on the other browser makers to catch up. And that would be great for all computer users.
More broadly, this is also a good example of why it’s important to have implementations try out standards before they are frozen; they help avoid mistakes like this. Today, essentially every successful open standard is implemented by free-libre/open source software (FLOSS) - this makes sure that the standard is implementable, helps all understand what the standard means, and also helps other developers understand at least one way to implement it. This doesn’t mean standards aren’t important; standards are vital! And this also shows that when a mistake is made by a standards body, life is not over; standards bodies can work with implementors to fix problems. In fact, this shows that the best standards are those created from an interplay between standards developers and implementors, where standards are then made official after actual implementation experience.
path: /oss | Current Weblog | permanent link to this entry
I’ll be speaking at some Linux User Groups (LUGs)
A while ago I was asked to speak at some of the Linux User Groups (LUGs) in the Washington, DC area, and I agreed to do so. Here are the current plans, if you are interested in hearing me speak:
Plans may change, but this is the information I have for now.
path: /oss | Current Weblog | permanent link to this entry
High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS)
Recently I spoke at an Open Group conference and gave my presentation on Open Source Software and Software Assurance (Security). While there, someone asked a very interesting question: “What is the relationship between high assurance and open source software?” That’s a fair question, and although I gave a quick answer, I realized that a longer and more thoughtful answer was really needed.
So I’ve just posted a paper to answer the question: High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS). For purposes of the paper, I define “high assurance software” as software where there’s an argument that could convince skeptical parties that the software will always perform or never perform certain key functions without fail. That means you have to show convincing evidence that there are absolutely no software defects that would interfere with the software’s key functions. Almost all software built today is not high assurance; developing high assurance software is currently a specialist’s field. But I think all software developers should know a little about high assurance. And it turns out there are lots of connections between high assurance and FLOSS.
The relationships between high assurance and FLOSS are interesting. Many tools for developing high assurance software are FLOSS, which I show by examining the areas of software configuration management, testing, formal methods, analysis of implementations, and code generation. However, while high assurance components are rare, FLOSS high assurance components are even rarer. This is in contrast to medium assurance, where there are a vast number of FLOSS tools and FLOSS components, and the security record of FLOSS components is quite impressive. The paper then examines why this is so. The most likely reason appears to be that decision-makers for high assurance components are not even considering the possibility of FLOSS-based approaches. The paper concludes that, in the future, those who need high assurance components should consider FLOSS-based approaches as a possible strategy.
Anyway, it’s a thought piece; if you’re interested in making software that is REALLY reliable, I hope you’ll find it interesting.
Again, the paper is here: High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS).
path: /oss | Current Weblog | permanent link to this entry
Open Invention Network (OIN), software patents, and FLOSS
Software patents continue to threaten the software industry in the U.S. and some other countries. They’re especially a threat to smaller software development organizations and individuals (who do most of the innovating), but even large software organizations are vulnerable.
If you’re not familiar with the problems of software patents, here’s some information and resources to get you started. The FFII’s “patented webshop” example is a short demonstration of why the problem is so serious — practically all commercial websites infringe on patents already granted (though not currently enforceable) in Europe. Bessen and Hunt found good evidence that software patents replace innovation instead of encouraging it. In particular, they found that those who create software patents are those who do less research, and the primary use of software patents appears to be in creating a “patent thicket” to inhibit competition. There’s also the evidence of history; software is the only product that can be protected by both copyright and patent, yet there’s general agreement that the software industry was far more innovative when patents were not permitted. Copyright is sufficient; there’s no need for software patents, which simply impede innovation. Software patents were not even voted in; they are an example of U.S. courts creating law (which they’re not supposed to do). India and Europe have (so far) wisely rejected software patents. Some large companies (such as Oracle Corporation and Red Hat) have clearly said that they oppose software patents, but that they must accumulate such patents to defend against attack by others. Some organizations are working against software patents, such as the FFII, No Software Patents, and the League for Programming Freedom. Groklaw has a massive amount of information on software patents. Yet currently the U.S. continues this dangerous practice, and so people are trying to figure out how to deal with it until the practice can be overturned. In particular, free-libre / open source software (FLOSS) developers have been trying to figure out how to deal with software patents, and there has been some progress on that front.
Recently a new organization has been added to the mix: the “Open Invention Network” (OIN). Its website is short on details, but Mark H. Webbink’s article “The Open Invention Network” in Linux Magazine (April 2006, page 18) gives more information. Webbink reports that OIN was founded by IBM, Novell, Philips, Red Hat, and Sony, for two distinct purposes:
The list of key applications considered by OIN, according to Webbink, includes Apache, Eclipse, Evolution, Fedora Directory Server, Firefox, GIMP, GNOME, KDE, Mono, Mozilla, MySQL, Nautilus, OpenLDAP, OpenOffice.org, Perl, PostgreSQL, Python, Samba, SELinux, Sendmail, and Thunderbird. Of course, it’d be nice if OIN protected everything with a FLOSS license, not just a listed set. Webbink says they chose not to do that because the “size of the safe area is critical to OIN’s future. If the commons is too narrow, it offers little protection. If it’s too broad, it makes it difficult for a company to join the commons…” Then, if someone tries to bring a patent infringement lawsuit against any of these projects, the OIN can take various kinds of actions. You can get more information via the Wikipedia article on the Open Invention Network (OIN). I suspect that even a FLOSS product that’s not on the list might still get some protection, because a software patent claim brought against that product might also apply to one of the covered applications… and OIN might try to pre-empt that. But still, even given its many limitations, this is a step forward in reducing the risks faced by developers and users.
Another patent commons project for FLOSS programs is the Patent Commons Project, whose contributors and supporters include Computer Associates, IBM, Novell, OSDL, Red Hat, and Sun Microsystems. This is a much looser activity; it is simply a repository where “patent pledges and other commitments can be readily accessed and easily understood.”
The U.S. Patent and Trademark Office (PTO) has had some talks about using FLOSS as examples of prior art, as well as other ways to try to reduce the number of patents that are granted even though they fail to qualify. I guess it’s good that the PTO is trying to prevent some invalid patents from slipping through; it’s better than the current practice of rubber-stamping massive numbers of patents that are actually illegal (because they don’t meet the current criteria for patents). It’s particularly galling that someone can read another’s publicly-available code, get a patent, and then sue the original creator of the code into oblivion — it’s not legal, but those who do it aren’t punished and are usually handsomely rewarded. (Yes, this happens.) How does that help advance anything? But the notion that the PTO cannot currently look at prior art, due to its own boneheaded rules and absurd timelines, merely shows how broken the whole process is. I suspect patents are worth their problems in some other industries, but in software we’ve now thoroughly demonstrated that they are a failure.
The U.S. Constitution only permits patents when they “promote the progress of science and useful arts”; since this is not true for software patents, software patents need to be abolished immediately. Still, until that happens, half-steps like OIN and getting the U.S. PTO to reject illegal patents (for a change!) will at least reduce some of the risks software developers face.
path: /oss | Current Weblog | permanent link to this entry
Open standards, open source, and security too — LinuxWorld 2006 and a mystery
As I had previously threatened, I gave my talk on “Open Standards and Security” on April 4, 2006, at LinuxWorld’s “Government Day” focusing on open standards. My talk’s main message was that open standards are necessary (in the long run) for security, and I gave various reasons why I believe that. I also tried to show how important open standards are in general. In the process, a mystery was revealed, but first let me talk about the NewsForge article about my talk.
Unbeknownst to me, there was a reporter from NewsForge in the audience, who wrote the article Why open standards matter — and it specifically discussed my talk! Which was pretty neat, especially since the article was very accurate and complimentary. I used several stories in my talk, which the reporter called “parables”. I didn’t use that word, but I wish I had, because that’s exactly what they were. For example, I talked about a (hypothetical) magic food that cost only $1 the first year and meant you wouldn’t need to eat anything else for a year… but it would make all other foods poisonous to you, and there was only one manufacturer of magic food. I created this parable to show that complete dependency on someone else is a serious security problem… if you’re so dependent that you cannot (practically) switch suppliers, you already have a serious security problem. I was especially delighted that she included my key comment that my “magic food” parable wasn’t about any particular supplier (Microsoft or Red Hat or anyone else)… we need suppliers; the problem comes when we allow ourselves to become dependent on a supplier. I also discussed the 1904 Baltimore fire (where incompatible firehose couplings were a real problem), and the railroad gauge incompatibilities of the mid-1860s in the southern U.S. (a contributing factor to the Confederacy’s loss in the U.S. Civil War).
I did find one nitnoid about the article, which doesn’t change anything really but is great for showing how messy and complicated real history is. The article says that in the 1904 Baltimore fire, none of the firetrucks from other cities could connect. I said something almost like that, but not quite. What I actually said was that firetrucks from other cities had firehose couplings that were incompatible with Baltimore’s hydrants. I read a lot more about this event than I could mention in my presentation, which is why I said it in that funny way. It turns out that a few firefighters did manage to jerry-rig “connections” between some of the incompatible couplings, by wrapping lots of hoses around the hydrants and couplings. This is a perfect example of a “correction” nitnoid that just doesn’t matter, because you can probably guess the result — the jerry-rigged connections put lots of water on the ground (around the hydrant) and disturbingly little on the fire. So while technically there were some “connections” to hydrants by the firetrucks from other cities, they weren’t effective enough, and the bottom line is just as the article indicated: Baltimore burned. In short, the firehose incompatibilities between cities resulted in over 2,500 buildings being lost, almost all of them unnecessarily. I’m not sure what it says about me that I note this weird little issue, which isn’t important at all, but I’m sure that correcting it will require a lot of therapy.
So if you haven’t taken a look at it, take a peek at the “Open Standards and Security” presentation. I hope to eventually get an audio file posted; look for it. When I gave the presentation I had several props to make it more interesting, which you’ll just have to imagine:
Now, on to the mystery.
One of the people at my talk made the claim that, “today, every successful open standard is implemented by FLOSS.” That should be easy to disprove — all I need is a counter-example. Except that counter-examples seem to be hard to find; I can’t find even one, and even if I eventually find one, this difficulty suggests that there’s something deeper going on.
So as a result of thinking about this mystery, I wrote a new essay, titled Open Standards, Open Source. It discusses how open standards aid free-libre / open source software (FLOSS) projects, how FLOSS aids open standards, and then examines this mystery. It appears that it is true — today, essentially every successful open standard really is implemented by FLOSS. I consider why that is, and what it means if this is predictive. In particular, this observation suggests that an open standard without a FLOSS implementation is probably too risky for users to require, and that developers of open standards should encourage the development of at least one FLOSS implementation.
path: /oss | Current Weblog | permanent link to this entry
I’ve put two presentations on my website that you might find of interest.
The first one is Open Source Software and Software Assurance. Here I talk about Free-Libre / Open Source Software (FLOSS) and its relationship to software assurance and security. It has lots of actual statistics, and a discussion of code review. I also deal with the chestnut “can’t just anyone insert malicious code into OSS?” — many questioners don’t realize that attackers can change proprietary software too (attackers generally don’t worry about legal niceties); the real issue is the user’s supply chain. I gave this presentation at FOSE 2006 in Washington, DC, and I’ve given variations of this presentation many times before.
The second presentation is “Open Standards and Security”. Here I focus on the role of open standards in security, which turns out to be fundamental.
I’ll be giving the “Open Standards and Security” presentation at the “LinuxWorld Government Day: Implementing Open Standards” track, April 4, 2006, in Boston, Massachusetts. I’ll speak at 12:45, so come hear the presentation… you’ll miss much if you only read the slides.
path: /security | Current Weblog | permanent link to this entry
Unsigned characters: The cheapest secure programming measure?
Practically every computer language has “gotchas” — constructs or combinations of constructs that software developers are likely to use incorrectly. Sadly, the C and C++ languages have an unusually large number of gotchas, and many of these gotchas tend to lead directly to dangerous security vulnerabilities. This forest of dangerous gotchas tends to make developing secure software in C or C++ more difficult than it needs to be. Still, C and C++ are two of the most widely-used languages in the world; there are many reasons people still choose them for new development, and there’s a lot of pre-existing code in those languages that is not going to be rewritten any time soon. So if you’re a software developer, it’s still a very good idea to learn how to develop secure software in C or C++… because you’ll probably need to do it.
Which brings me to the “-funsigned-char” compiler option of gcc, one of the cheapest secure programming measures available to developers using C or C++ (similar options are available for many other C and C++ compilers). If you’re writing secure programs in C or C++, you should use gcc’s “-funsigned-char” option (or its equivalent in other compilers) to help you write secure software. What is it, and what’s it for? Funny you should ask… here’s an answer!
Let’s start with the technical basics. The C programming language includes the “char” type, which is usually used to store an 8-bit character. Many internationalized programs encode text using UTF-8, so a user-visible character may be stored as a sequence of “char” values; but even in internationalized programs, text is often stored in a “char” type.
The C standard specifically says that char CAN be signed OR unsigned. (Don’t believe me? Go look at ISO/IEC 9899:1999, section 6.2.5, paragraph 15, second sentence. So there.) On many platforms (such as typical Linux distributions), the char type is signed. The problem is that software developers often incorrectly think that the char type is unsigned, or don’t understand the ramifications of signed characters. This misunderstanding is becoming more common over time, because many other C-like languages (like Java and C#) define their “char” type to be essentially unsigned, or define it in a way where signedness doesn’t matter. What’s worse, this misunderstanding can lead directly to security vulnerabilities.
All sorts of “weird” things can happen on systems with signed characters. For example, the character 0xFF will match as being “equal” to the integer -1, due to C/C++’s widening rules. And this can create security flaws in a hurry, because -1 is a common “sentinel” value that many developers presume “can’t happen” in a char. A well-known security flaw in Sendmail was caused by exactly this problem (see US-CERT #897604 and this posting by Michal Zalewski for more information).
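To make the gotcha concrete, here’s a tiny example of my own (it is not the Sendmail code); compile it normally and then with -funsigned-char, and notice that the branch taken changes:

#include <stdio.h>

int main(void) {
    char buf[] = "\xFF";   /* one byte of "text" with the high bit set */
    char c = buf[0];       /* if char is signed, c typically holds -1, not 255 */

    if (c == -1) {         /* c is widened to int before the comparison */
        printf("0xFF was mistaken for the sentinel value -1\n");
    } else {
        printf("0xFF kept the value %d, as the programmer expected\n",
               (int)(unsigned char)c);
    }
    return 0;
}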
Now, you could solve this by always using the unambiguous type “unsigned char” if that’s what you intended, and strictly speaking that’s what you should do. However, it’s very painful to change existing code to do this. And since many pre-existing libraries expect “pointer to char”, you can end up with tons of useless warning messages when you do that.
So what’s a simple solution here? Force the compiler to always make “char” an UNSIGNED char. A portable program should work when char is unsigned, so this shouldn’t require any changes to the code. Since programmers often assume char is unsigned anyway, let’s make their assumption correct. In the widely-popular gcc compiler, this is done with the “-funsigned-char” option; many other C and C++ compilers have similar options. What’s neat is that you don’t have to modify a line of source code; you can just slip this option into your build system (e.g., add it to your makefile). That’s usually trivial: just modify (or set) the CFLAGS variable to add the option, and then recompile.
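For example, in a make-based build this can be as small as the following sketch (assuming your build rules honor CFLAGS, as the standard implicit rules do):

# Force plain "char" to be unsigned throughout the build (gcc and compatible compilers):
CFLAGS += -funsigned-char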
I also have more controversial advice. Here it is: If you develop C or C++ compilers, or you’re a distributor who distributes a C/C++ compiler… make char unsigned by default on all platforms. And if you’re a customer, demand that from your vendor. This is just like similar efforts going on with operating systems; today operating system vendors are changing their systems so that they are “secure by default”. At one time many vendors’ operating systems were delivered with all sorts of “convenient” options that made them easy to attack… but getting subverted all the time turned out to be rather inconvenient to users. In the same way, development tools’ defaults should try to prevent defects, or create an environment where defects are less likely. Signed characters are basically a vulnerability waiting to happen, portable programs shouldn’t depend on a particular choice, and non-portable software can turn on the “less secure” option when necessary. I doubt this advice will be taken, but I can suggest it!
Turning this option on does not save the universe; most vulnerabilities will not be caught by turning on this little option. In fact, by itself this is a very weak measure, simply because by itself it doesn’t counter most vulnerabilities. You need to know much more to write secure software; to learn more, see my free book on writing secure programs for Linux and Unix. But stick with me; I think this is a small example of a much larger concept, which I’ll call “no sharp edges”. Chain saws are powerful — and dangerous — but no one puts scissor blades next to the chain saw’s handle. We try to make sure that “obvious” ways of using tools are not dangerous, even if the tool itself can do dangerous things. Yet the “obvious” ways to use many languages turn out to lead directly to security vulnerabilities, and that needs to change. You can’t prevent all misuse — a chain saw can always be misused — but you can at least make languages easy to use correctly and likely to do only what was intended (and nothing else).
We need to design languages, and select tools and tool options, to reduce the likelihood of a developer error becoming a security vulnerability. By combining compiler warning flags (like -Wall), defaults that are likely to avoid dangerous mistakes (like -funsigned-char), NoExec stacks, and many other approaches, we can greatly reduce the likelihood of a mistake turning into a security vulnerability. The most important security measure you can take in developing secure software is to be paranoid — and I still recommend paranoia. Still, it’s hard to be perfect all the time. Currently, a vast proportion of security vulnerabilities come from relatively trivial implementation errors, ones that are easy to miss. By combining a large number of approaches, each of which counter a specific common mistake, we can get rid of a vast number of today’s vulnerabilities. And getting rid of a vast number of today’s vulnerabilities is a very good idea.
path: /security | Current Weblog | permanent link to this entry
Random Quotes and Code - Why You Need a Community
You need a community, not just some dump of posted code, if you want good open source software. I can demonstrate this through my trivial hunt for “random quote” code… so let me tell you my story.
I recently decided that I’d like the front page of my website to show a randomly-selected quote. For security reasons, I avoid using dynamically-run code on my own site, so I needed to use Javascript (ECMAscript) to do this. Easy enough, I thought… I’ll just use Google to find a program that did this, and I searched on “random quotation Javascript”.
But what I found was that a lot of people don’t seem to care about long-term maintenance, or correctness. Codelifter’s sample code by etLux does the job, but also shows the problem. The code has a lot of statements like this:
Quotation[0] = "Time is of the essence! Comb your hair.";
Quotation[1] = "Sanity is a golden apple with no shoelaces.";
...

Does this work? Sure, but it’s terrible for maintenance. Now you have to write extra code, unnecessarily maintain index numbers, and if you want to delete a quote in the middle, you have to renumber things. Even for tiny tasks like this, maintenance matters over time. I’m going to use this for my personal website, which I plan to have for decades; life is too short to fight hard-to-maintain code over a long time.
Even worse, this and many other examples did a lousy job of picking a random quote. Many sample programs picked the random quote using this kind of code (where Q is the number of quotes):
var whichQuotation=Math.round(Math.random()*(Q-1));

This actually doesn’t choose the values with equal probability. To see why, walk through the logic if there are only 3 quotes. Math.random returns a value between 0 and 1 (not including 1); if there are 3 quotes, Math.random()*(Q-1) produces a floating point value between 0 and 2 (not including 2). Rounding a value between 0 and 0.5 (not including 0.5) produces 0, between 0.5 and 1.5 (not including 1.5) produces 1, and between 1.5 and 2 produces 2… which means that the middle quote is far more likely to be selected (it will be selected 50% of the time, instead of the correct 33%). The “round” operation is the wrong operator in this case!
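If you want to see the bias for yourself, here’s a quick throwaway test of my own (not from any of the samples I found); paste it into a page and look at the tallies:

<script language="JavaScript">
// Tally which of 3 "quotes" Math.round picks; the middle index wins about half the time.
var counts = [0, 0, 0];
for (var i = 0; i < 300000; i++) {
  counts[Math.round(Math.random() * 2)]++;
}
// Typical result: roughly 75000, 150000, 75000 instead of about 100000 each.
document.write("Picks per index: " + counts.join(", "));
</script>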
I’m not really interested in picking on the author of this code sample; LOTS of different sample codes do exactly the wrong thing.
The problem seems to be that once some code snippet gets posted, in many places there’s no mechanism to discuss the code or to propose an improvement. In other words, there’s no community. I noticed these problems immediately in several samples I saw, yet there was no obvious way for me to do anything about it.
In the end, I wrote my own code. For your amusement, here it is. Perhaps there needs to be a “trivial SourceForge” for taking tiny fragments like this and allowing community improvement.
First, I put this in the head section of my HTML:
<script language="JavaScript">
// Code by David A. Wheeler; this trivial ECMAscript is public domain:
var quotations = new Array(
 "Quote1",
 "Quote2",
 "Quote3"
);
var my_pick = Math.floor(quotations.length*Math.random());
var random_quote = "Your random quote: <i>" + quotations[my_pick] + "</i>";
</script>
I then put this in the body section of my HTML:
<script language="JavaScript"> document.write(random_quote) </script>
I intentionally didn’t include some defensive measures against bad software libraries. Unfortunately, many software libraries are terrible, and that certainly includes many random number generators. For example, Math.random() isn’t supposed to return 1, only values less than that… but returning an (incorrect) 1 isn’t an unknown defect, and that would cause an out-of-bounds error. Also, many implementations of random() are notoriously bad; they often have trivially tiny cycles, or fail even trivial randomness tests. I would put defensive measures in software designed to be highly reliable or secure (for example, I might re-implement the random function to be sure I got a reasonable one). But in this case, I thought it’d be better to just rely on the libraries… if the results are terrible, then users might complain to the library implementors. And if the library implementors fix their implementations, it helps everyone.
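If I had wanted a defensive variant, a minimal sketch might look like this (hypothetical; it assumes the quotations array from the snippet above, and it is not what my page actually uses):

<script language="JavaScript">
// Defensive sketch: clamp the index in case a broken Math.random() ever returns 1.
var my_pick = Math.floor(quotations.length * Math.random());
if (my_pick >= quotations.length) { my_pick = quotations.length - 1; }
var random_quote = "Your random quote: <i>" + quotations[my_pick] + "</i>";
</script>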
I donate the snippets above to the public domain. It’s not clear at all that they can be copyrighted, actually; they’re far too trivial. But it’s still useful to have such snippets, and I hope that someone will organize a community for sharing and maintaining trivial snippets like this.
path: /oss | Current Weblog | permanent link to this entry
In memoriam: William Alson (“Al”) Wheeler
William Alson (“Al”) Wheeler (2 March 1916 – 28 February 2006) was my grandfather and a wonderful, godly man. He recently went to be with the Lord, and I will miss him very much until my time comes. Many of my own traits (love of music, math, science fiction, science, and learning in general) are easily traced back to him. He loved jokes and humor; he laughed often, and his eyes often twinkled.
He demonstrated his extraordinary character throughout his life; a few anecdotes will have to suffice. His love of learning was extraordinary; in his 80s he started learning koine Greek, and near the time of his death he was reading the 984-page “Chaos and Fractals: New Frontiers of Science” (a book full of mathematical concepts). He dedicated his life to serving others; he was a music minister for over 45 years. He prayed, and prayed often, for his family and friends. When he last moved to Pennsylvania, he donated his mechanic’s tools to the Smithsonian, which had been hunting for the kind of tools he had. And even in death he served others; rather than having his body be buried, he donated his body for medical research. I am honored that I can count myself as one of his grandchildren. He did not leave riches behind; he left behind something much greater.
Below is a short biography of his life, as printed at his memorial service on March 4, 2006. May we all strive to have such a positive biography. “Children’s children are a crown to the aged, and parents are the pride of their children.” (Proverbs 17:6, New International Version (NIV))
Al was born in the small town of Pottsville, PA. He grew up in Oxford, PA and graduated in 1933 from Oxford High School. He worked a couple years with the Wheeler and Sautelle and then the Wheeler and Almond Circuses. In 1935 he moved to Reading, PA, and entered the Wyomissing Polytechnic Institute to become a machinist. He met his future wife, Mary Clouse, in 1938 at a YWCA sponsored dance. It was during this time that Al taught himself to play tennis and the clarinet. He came from a musical family and had begun singing with the Choral Society in Reading. He began working at the Textile Machineworks in Wyomissing, but found a job in early 1940 with the Federal Government at the Naval Gun Factory in Washington, DC. He married Mary in June 1940. While living in southeast Washington, they had three children: Bill, Ray and Joyce. In 1946, after a few interim jobs, he began working for the Bureau of Standards followed by the Naval Research Laboratory. He moved to the Bureau of Naval Weapons in 1957. He moved his family to Maryland in 1958. In 1962 he made his last career move to the Naval Air Systems Command and retired from there in 1974. Throughout this time he should have received a chauffeur’s license since he performed that duty extensively. Mary went home to be with the Lord in 1996. In 1999 he moved to Perkiomenville, PA to live with his daughter, son-in-law Phil, and grandson Bryan.
After moving to Washington the war initially kept him out of church activities. Mary had started attending Fountain Memorial Baptist Church. After the war, Al started attending also. They professed Christ as Lord and Savior and were baptized together. In 1950, Al started directing the Junior Choir, grades 4 thru 6. In 1953 he became the Music Director at Fountain Memorial and spent the next 45 years directing music in 6 different churches. He loved his music and his collection of choir music, mini orchestral scores, records, reel-to-reel tapes, cassette tapes, and CDs attest to it. His primary instrument was the clarinet, but he had obtained and played saxophone, flute, and trombone and, at one point, two synthesizers.
Although work and family responsibilities cut down on his tennis activities, he never gave them up completely. After retirement he taught tennis part time for the Maryland Department of Recreation and was regularly playing with his friends until moving back to PA in 1999.
He loved science fiction and one of his favorite pastimes was solving the “Word Power” article in the Reader’s Digest. He rarely missed those words.
Al had a wonderful life and was adored by all his family. He is survived by three children, five grandchildren, and seven great-grandchildren.
path: /misc | Current Weblog | permanent link to this entry
GPL v3: New compatibilities, with potentially profound impacts
Finally, there’s a draft version of the GNU General Public License (GPL) version 3. Lots of people have looked at it and commented on it in general, so I won’t try to cover the whole thing in detail. (Groklaw covers the differences between version 2 and 3, for example.) A few highlights are worth noting; in particular, it’s surprisingly conservative. This GPL draft changes much less in the license than many expected, and the changes it does make were long expected. As expected, it continues to combat software patents; it has more clauses about that, but at first blush its built-in “aggression retaliation clause” is surprisingly narrow. It counters digital restrictions management (like Sony’s ham-handed attacks on customers in 2005), but that is unsurprising too. It’s longer, but primarily because it defines previously undefined terms to prevent misunderstanding, so that is a good thing.
What has not gotten a lot of press yet — and should — is that the new GPL will make it much easier to combine software from different sources to create new products. This could result in many more free-libre/open source software (FLOSS) programs being developed, and might have very profound impacts.
A key reason that FLOSS programs have become such a powerful economic force is that it’s easy to combine many different pieces together quickly into a larger solution, without paying large sums for the right to use them, and anything can be modified arbitrarily. As more people find use for FLOSS programs, a small percentage end up making improvements (to help themselves), and contribute them to the projects (typically so they can avoid the costs of self-maintenance). After it reaches a critical mass, this can snowball into a program becoming a dominant force in its niche; it’s hard to compete against a program used by millions and supported by thousands of developers, even if you have an unlimited budget.
But this snowballing effect only works if you can combine pieces together and modify them in new, innovative ways. As I noted in my essay Make Your Open Source Software GPL-Compatible. Or Else, it is a serious problem when free-libre/open source software (FLOSS) is released that isn’t GPL-compatible. Since most FLOSS software is released under the GPL, a program that is not compatible with this dominant license creates situations where the same software has to be written twice, for no good reason. Most people have heeded that advice, but for various reasons not all. There’s been a related effort to reduce the number of licenses accepted (or at least recommended) by the OSI, for the same basic reason: license incompatibilities create trouble if you want to combine software components together.
The new GPL text addresses this by allowing a few specific restrictions to be added, such as requiring renaming if you make your own version, or forbidding the use of certain names as endorsements. Two licenses in particular that were incompatible with GPL version 2 — but appear to be compatible with GPL version 3 draft 1 — are the PHP 3.01 license, used by the widely-used PHP language and libraries, and the Apache License version 2.0, used by not only the #1 web server Apache but also by a variety of other web infrastructure and Java components. Both of these licenses include limits on how you can use certain names, for example, and these limitations are acceptable in GPL version 3 draft 1. No one has had a chance to do an in-depth analysis, yet, and there are more drafts to come… but the current direction looks promising.
All is not perfect, of course. One license that causes many problems is the OpenSSL license; it has variations of the old “obnoxious advertising clause” that have been thorns in the side of many for years. I think it’s unlikely that this would get changed; such clauses can really harm many FLOSS-based businesses (they can’t afford to put 10,000 names on every piece of advertisement). The GPL isn’t compatible with proprietary software licenses either, but that is by design; the whole purpose of the GPL is to allow software to be shared between users.
In any case, this looks like a good start, and will probably mean that many more people will be able to use (and create) FLOSS programs in the future.
path: /oss | Current Weblog | permanent link to this entry
Python Typechecking: The Typecheck module
About a year ago I started creating a Python library to support better typechecking in Python. Python is a fun language, but errors often hide far longer than they would in other languages, making debugging more necessary and painful than it needs to be. A trivial typo when setting a field can’t be caught by Python, for example, and type errors are never caught until an attempt is made to USE the value… which may be far, far later in the program. I really miss the ability of other languages to automatically check types, so that mistakes can be identified more directly. But I never got around to finishing my typechecking module for Python - there were just too many other things to do. Which is just as well, because someone else has done a better job.
Typecheck is a type-checking module for Python by Collin Winter and Iain Lowe; it “provides run-time typechecking facilities for Python functions, methods and generators.” Their typecheck module provides many more useful capabilities than my early typecheck module. In particular, they handle variable-length parameter lists and other goodies. These capabilities, like the assert statement, make it much easier to detect problems early… and the earlier you can detect problems, the easier it is to figure out why the problem happened.
The biggest trouble with the current version of typecheck is that it isn’t easy to specify the right types. Since Python hasn’t had typechecking before, it doesn’t have built-in names for the types that you actually want to check against. For example, conceptually int (integer), long (arbitrarily long integer), and float are all subclasses of another type named “Number”… but there isn’t actually a type named Number to compare against (or inherit from, or implement as an interface). The same is true for “Sequence”… the Python documentation is full of discussions about Sequence, but these are merely conceptual, not something actually in the language itself. And even when such a type does exist, such as “basestring” (meaning “any string-like type”), many Python developers don’t know about it.
Typechecking only works when people actually specify the right types, of course. If you are too restrictive (“I want only ‘int’” when any number will do), then typechecking is a problem, not a help. Hopefully the typecheck implementors will find a way to define the types that people need. In my mind, what’s needed is a way to define an Interface (a list of required methods) that has an efficient implementation (e.g., a singleton that caches the types that match). Then they can define the critical conceptual types using that Interface type.
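To make the idea concrete, here is a minimal sketch of the kind of thing I mean (the names are my own hypothetical ones, not part of the typecheck module): an Interface is just a list of required method names, plus a cache of which concrete types have already matched.

class Interface(object):
    """A structural 'type': an object matches if its type has all the required methods."""
    def __init__(self, name, required_methods):
        self.name = name
        self.required_methods = tuple(required_methods)
        self._matches = {}  # cache: concrete type -> True/False

    def provided_by(self, obj):
        """Return True if obj appears to implement this interface."""
        obj_type = type(obj)
        cached = self._matches.get(obj_type)
        if cached is None:
            cached = all(hasattr(obj_type, m) for m in self.required_methods)
            self._matches[obj_type] = cached
        return cached

# The "conceptual" types from the Python documentation, made explicit:
Number = Interface("Number", ["__add__", "__mul__", "__neg__"])
Sequence = Interface("Sequence", ["__len__", "__getitem__"])

def average(values):
    # Check arguments against the conceptual types before using them.
    assert Sequence.provided_by(values), "average() needs a sequence"
    assert all(Number.provided_by(v) for v in values), "average() needs numbers"
    return sum(values) / len(values)

print(average([1, 2, 3.0]))  # accepts ints and floats alike

A real library would need more care (the required operator methods differ across versions, and you’d want nicer error messages), but caching by concrete type keeps the check cheap enough to leave on all the time.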
I look forward to using the typecheck module, once they add enough type definitions to use it well!
path: /oss | Current Weblog | permanent link to this entry