There are many general security principles which you should be familiar with; one good place for general information on information security is the Information Assurance Technical Framework (IATF) [NSA 2000]. NIST has identified high-level “generally accepted principles and practices” [Swanson 1996]. You could also look at a general textbook on computer security, such as [Pfleeger 1997]. NIST Special Publication 800-27 describes a number of good engineering principles (although, since they’re abstract, they’re insufficient for actually building secure programs - hence this book); you can get a copy at http://csrc.nist.gov/publications/nistpubs/800-27/sp800-27.pdf. A few security principles are summarized here.
Often computer security objectives (or goals) are described in terms of three high-level goals:
Confidentiality (also known as secrecy), meaning that the computing system’s assets can be read only by authorized parties.
Integrity, meaning that the assets can only be modified or deleted by authorized parties in authorized ways.
Availability, meaning that the assets are accessible to the authorized parties in a timely manner (as determined by the system’s requirements). The failure to meet this goal is called a denial of service.
In any case, it is important to identify your program’s overall security objectives, no matter how you group them together, so that you’ll know when you’ve met them.
Sometimes these objectives are a response to a known set of threats, and sometimes some of these objectives are required by law. For example, for U.S. banks and other financial institutions, there’s a new privacy law called the “Gramm-Leach-Bliley” (GLB) Act. This law mandates disclosure of what personal information is shared and the means of securing that data, requires notice when personal information will be shared with third parties, and directs institutions to give customers a chance to opt out of data sharing. [Jones 2000]
There is sometimes conflict between security and some other general system/software engineering principles. Security can sometimes interfere with “ease of use”; for example, installing a secure configuration may take more effort than a “trivial” installation that works but is insecure. Often this apparent conflict can be resolved; by re-thinking the problem, it’s often possible to make a secure system that is also easy to use. There’s also sometimes a conflict between security and abstraction (information hiding); for example, some high-level library routines may or may not be implemented securely, and their specifications won’t tell you which. In the end, if your application must be secure, you must do things yourself if you can’t be sure otherwise - yes, the library should be fixed, but it’s your users who will be hurt by your poor choice of library routines.
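As an illustration of doing the checking yourself, here is a minimal C sketch (the function names and the character whitelist are hypothetical, chosen only for this example): instead of assuming that some higher-level routine validates the filename it’s given, the caller checks the name itself before passing it on.

    #include <stdio.h>
    #include <string.h>

    /* Accept only simple filenames: letters, digits, '.', '_', '-',
     * with no leading '.' (which also rejects "." and ".."). */
    static int filename_is_safe(const char *name)
    {
        const char *allowed =
            "abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789._-";
        if (name == NULL || name[0] == '\0' || name[0] == '.')
            return 0;
        return strspn(name, allowed) == strlen(name);
    }

    /* Refuse unsafe names ourselves rather than trusting that some
     * library layer will do it for us. */
    FILE *open_data_file(const char *name)
    {
        if (!filename_is_safe(name))
            return NULL;
        return fopen(name, "r");
    }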
A good general security principle is “defense in depth”; you should have numerous defense mechanisms (“layers”) in place, designed so that an attacker has to defeat multiple mechanisms to perform a successful attack.
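For example, here is a minimal C sketch of such layering, assuming a privileged program that must read a file named by a less-trusted source (the function name and the specific checks are illustrative, not a complete design): the input check, the refusal to follow symbolic links, and the post-open ownership check are independent layers, so an attacker must defeat all of them.

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Return an open read-only descriptor for path, or -1 on failure. */
    int open_report(const char *path, uid_t expected_owner)
    {
        /* Layer 1: reject obviously bad input (require an absolute path). */
        if (path == NULL || path[0] != '/')
            return -1;

        /* Layer 2: open without following symbolic links. */
        int fd = open(path, O_RDONLY | O_NOFOLLOW);
        if (fd < 0)
            return -1;

        /* Layer 3: verify what was actually opened: a regular file
         * owned by the expected user. */
        struct stat st;
        if (fstat(fd, &st) < 0 || !S_ISREG(st.st_mode) ||
            st.st_uid != expected_owner) {
            close(fd);
            return -1;
        }

        return fd;
    }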
For general principles on how to design secure programs, see Section 7.1.