Most Dangerous Programming Errors
A pair of IT industry organizations worked with a broad panel of industry and security experts to create a list of the
top 25 most dangerous programming errors. The list is intended to give developers, dev managers, trainers and others the ability to target common mistakes and produce more robust code.
The project, headed by the not-for-profit MITRE Corporation and IT certification and security outfit SANS Institute, published its findings yesterday. A brief look at the list reveals plenty of well-known flaws, like "CWE-89: Failure to Preserve SQL Query Structure (aka 'SQL Injection')" or "CWE-79: Failure to Preserve Web Page Structure (aka 'Cross-site Scripting')."
Ryan Barnett is the director of application security research for Breach Security and a participant in the project. He said there are essentially two types of programming flaws: security bugs like SQL injection that can be addressed via secure code review and QA testing, and architectural flaws that often only show themselves in production environments.
I asked Barnett which three coding errors from the 25 published he would pick as the most damaging. He chose these:
Insufficient Input Validation: "Software coding flaws emerge mainly from a lack of developers understanding one basic principle: Malicious users will not do what you expect them to do," Barnett wrote. He said failure to validate input "is the No. 1 issue affecting Web-based applications and gives rise to attacks such as SQL injection."
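To see why unvalidated input is so dangerous, consider a minimal sketch of SQL injection. This isn't code from the report; it uses Python's built-in sqlite3 module and a made-up users table purely for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "' OR '1'='1"  # attacker-controlled value

    # Vulnerable: concatenating raw input lets the attacker rewrite the query.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print(rows)  # returns every row instead of none

    # Safer: a parameterized query treats the input as data, not as SQL.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # []

The point of the parameterized version is exactly Barnett's principle: never assume the user will supply what you expect.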
Improper Encoding or Escaping of Output: This is a leading issue behind cross-site scripting (XSS) attacks. Barnett said Web apps often lose track of user-supplied data, failing to properly encode the output to HTML when returned to the user. "This allows attackers to send malicious JavaScript to other users that will execute within their browsers," Barnett wrote.
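As a rough illustration of what "losing track" of user data looks like, here is a sketch using a hypothetical comment field and Python's standard html.escape helper:

    import html

    comment = '<script>alert("xss")</script>'  # attacker-supplied comment

    # Vulnerable: echoing the raw value back lets the script run in victims' browsers.
    unsafe_page = "<p>" + comment + "</p>"

    # Safer: escape the value so the browser renders it as inert text.
    safe_page = "<p>" + html.escape(comment) + "</p>"
    print(safe_page)
    # <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>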
Error Message Information Leakage: Barnett complained that Web apps "give out way too much information when they encounter errors," allowing attackers to piece together a view of the overall system. In the worst case, he wrote, "They can even use the error pages as the conduit to extract out customer data from databases."
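One common remedy is to log the detail server-side and hand the user only a generic message. The sketch below assumes a hypothetical lookup_order handler and db_query callable; the names are invented for illustration:

    import logging

    logging.basicConfig(filename="app.log", level=logging.ERROR)

    def lookup_order(order_id, db_query):
        """Return order details while hiding internal errors from the caller."""
        try:
            return db_query(order_id)
        except Exception:
            # Keep the stack trace and query detail in the server log only.
            logging.exception("order lookup failed for id=%r", order_id)
            # The response reveals no table names, SQL text or stack traces.
            return {"error": "We could not process your request. Please try again later."}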
So what needs to happen? Barnett said the definition of what constitutes a completed software project must change.
"From a development perspective, most developers get paid to produce applications with certain 'functional' requirements. If their application doesn't 'do' what it was supposed to do, then they won't get paid," Barnett wrote. "So, when business owners add in specific contractual language that mandates that the completed applications must not only meet functional requirements, but also must confirm that the application is free of these defects, then and only then do I believe that we will 'magically' see programmers becoming more proficient at producing secure code."
What do you think it will take? E-mail me at mdesmond@reddevnews.com.
Posted by Michael Desmond on 01/13/2009