
Towards a LangSec-aware SDLC

At TROOPERS'16, Jacob I. Torrey gave a talk about a LangSec-aware Software Development Lifecycle (SDLC). He talked about programming conventions and which tools can be used to enforce compliance with them. There is a lack of metrics to understand what makes software more or less secure. His main goals were to show that LangSec has far-reaching impacts on software security and to give the audience a framework to turn the theory into practice. An SDLC should help to find bugs earlier in the development process and thereby reduce the defect rate in production. A lower defect rate in production does not only improve security, it also reduces costs.

In the first part he covered the theoretical background. One topic was the Halting Problem: determining whether a program will halt on a given input. The task sounds easy, but it is difficult to verify, and for Turing-complete programs in general it is provably undecidable.
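To make this concrete (my own illustration, not an example from the talk): even a tiny loop can defeat termination analysis. Whether the following iteration stops for every input is the famous Collatz conjecture, which is still open; for arbitrary programs the question is undecidable.

#include <stdint.h>

/* Does this loop terminate for every possible n? For this particular
   function (the Collatz iteration) nobody knows; answering the question
   for arbitrary Turing-complete programs is provably impossible. */
void collatz(uint64_t n) {
    while (n > 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
    }
}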

Another is the undecidability cliff: as complexity increases, verification difficulty increases as well. Once Turing-completeness is reached, you have fallen off the verification cliff.

Verification is needed because developers are not infallible: "Trust but verify". But verification is a hard problem; as an example he showed Crackaddr, a remote code execution vulnerability in sendmail.
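The pattern behind that bug is often illustrated with a simplified model (the sketch below follows Halvar Flake's well-known simplification and is not the actual sendmail source): several flags adjust the write limit of a copy loop, and one state transition forgets to tighten it again, so crafted input can push writes past the buffer. Proving such flag interactions safe is exactly the kind of "hard problem" the talk referred to.

#include <stdbool.h>

#define BUFFERSIZE 200

/* Simplified crackaddr-style copy loop (illustrative only). */
void copy_it(const char *input, unsigned int length) {
    char localbuf[BUFFERSIZE];
    char *out = localbuf;
    unsigned int upperlimit = BUFFERSIZE - 10;
    bool quotation = false;
    bool roundquote = false;

    for (unsigned int i = 0; i < length; i++) {
        char c = input[i];
        if (c == '<' && !quotation) { quotation = true;  upperlimit--; }
        if (c == '>' &&  quotation) { quotation = false; upperlimit++; }
        if (c == '(' && !quotation && !roundquote) { roundquote = true; /* missing upperlimit--! */ }
        if (c == ')' && !quotation &&  roundquote) { roundquote = false; upperlimit++; }
        if (out < localbuf + upperlimit) {   /* repeated "()" pairs inflate the limit */
            *out++ = c;
        }
    }
}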

Parsing is also an important part of LangSec: "A data specification should be generated first, and non-compliant input rejected!"
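As a minimal sketch of that idea (the format and function are hypothetical, not from the talk): write a recognizer for the specified input language and reject everything that does not match, before any further processing touches the data.

#include <stdbool.h>
#include <stddef.h>
#include <ctype.h>

/* Hypothetical specification: a record ID is 1 to 8 ASCII digits.
   Everything else is rejected before the data is used anywhere. */
bool is_valid_id(const char *s, size_t len) {
    if (len < 1 || len > 8)
        return false;
    for (size_t i = 0; i < len; i++) {
        if (!isdigit((unsigned char)s[i]))
            return false;
    }
    return true;
}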

 

In the second part Jacob presented programming conventions. One of them is the MISRA C guidelines for safety-critical code, which are used for automotive control code. There are many free and commercial tools to check code for compliance with them.

Another is the "Power of Ten" rules for safety-critical code from NASA's JPL. Their goal is to reduce defects and ease verification of code running in space. Most of them map nicely onto LangSec principles. He mentioned some of the rules in his talk (a small sketch follows the list):
– Rule #1: Restrict code to simple, easily verified control flow where possible
– Rule #2: All loops must have an upper bound on the number of iterations
– Rule #3: All memory allocation should be performed before the business logic runs
– Rule #7: Check all parameters in each function
– Rule #9: Limit the use of pointers and disallow function pointers
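A short sketch of how a few of these rules look in C (my own example, not code from the talk): the loop has a fixed bound, memory is statically sized rather than allocated on the fly, and parameters are checked on entry.

#include <assert.h>
#include <stddef.h>

#define MAX_ITEMS 64   /* fixed upper bound, no dynamic allocation (rules #2, #3) */

/* Rule #7: validate every parameter; rule #2: the loop bound is provable. */
int sum_items(const int *items, size_t count) {
    assert(items != NULL);
    assert(count <= MAX_ITEMS);

    int sum = 0;
    for (size_t i = 0; i < count && i < MAX_ITEMS; i++) {
        sum += items[i];
    }
    return sum;
}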

A main part of strict parsing is that invalid input must be rejected. A specified interface also eases interaction between teams and components. Jeff Bezos mandated that all Amazon software act as a "service", which arguably contributed to Amazon's dominance of the cloud market.

The verification-oriented paradigm is a meta development paradigm which aims to provide the maximum semantic information about the intent of the code to the compiler and to verification tools. One example is that types should not be overloaded: char should only be used for characters, not for small integers. Another example is that the data scope should be minimized: objects that can't be referenced can't be corrupted. The benefits are that the code is more self-documenting and verification is easier: bugs can be detected at compile time instead of at runtime.
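In C that could look like the following sketch (my own illustration of the two examples above): dedicated types carry the intent of a value, and file-scope data is kept static so it cannot be touched from other translation units.

#include <stdint.h>

/* Don't overload types: a small integer gets an integer type, not char,
   and states get an enum instead of "magic" numbers. */
typedef uint8_t retry_count_t;
typedef enum { CONN_CLOSED, CONN_OPEN } conn_state_t;

/* Minimize data scope: a static, file-scope object cannot be referenced
   (or corrupted) from other translation units. */
static conn_state_t state = CONN_CLOSED;

void open_connection(void) {
    state = CONN_OPEN;   /* the intent is visible to readers and analyzers alike */
}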

Another method for better application security is reducing complexity: more complex programs lead to more bugs. The principle of least privilege is one example of this.

 

Jacob's last part was about tools for enforcing all of this, following the principle "Trust but verify". A good SDLC formalizes the development process to allow compliance checks, for example through code reviews, continuous integration tools, or unit testing.

One tool presented by Jacob is Crema. It is a research project to create an open-source compiler and runtime for a provably halting programming language.

Many mainstream parser tools, like Lex/Yacc, are designed with code (i.e., programming languages) in mind. In contrast, the tools Hammer and Nom are designed with data parsing in mind.
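A minimal sketch of what that looks like with Hammer's C parser-combinator API (my own example; the function names follow Hammer's documentation but may differ between versions): the grammar "one or more digits, then end of input" is built from combinators, and anything else is rejected by the resulting parser.

#include <hammer/hammer.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Grammar: one or more ASCII digits, followed by end of input. */
    HParser *digits = h_many1(h_ch_range('0', '9'));
    HParser *number = h_sequence(digits, h_end_p(), NULL);

    const uint8_t input[] = "12345";
    HParseResult *result = h_parse(number, input, strlen((const char *)input));
    if (result != NULL) {
        puts("input accepted");
        h_parse_result_free(result);
    } else {
        puts("non-compliant input rejected");
    }
    return 0;
}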

Another tool is Nail by Julian Bangert et al., which automatically generates parsers from a grammar description. This reduces the risk of implementation and security issues when parsing a structure into memory from untrusted input. Ambiguities in the format (e.g., optional whitespace) would prevent the parsing steps from being reversed for output generation. As an example he presented a personnel database.

Part of Apple's Xcode IDE is the LLVM/Clang Static Analyzer, which performs static checks for common programming errors. An example he presented there was a forgotten NULL check after malloc().
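The kind of defect meant is sketched below (my own example, not the one from the slides); running the analyzer, e.g. via scan-build or from within Xcode, reports the possible NULL dereference.

#include <stdlib.h>
#include <string.h>

char *duplicate(const char *src, size_t len) {
    char *copy = malloc(len + 1);   /* malloc() may return NULL ... */
    memcpy(copy, src, len);         /* ... so the analyzer flags this dereference */
    copy[len] = '\0';
    return copy;
}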

A tool for dynamic analysis is the LLVM/KLEE Dynamic Analyzer, which performs the analysis through symbolic execution. An SMT/SAT solver is used to create concrete values for the explored execution paths.
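A small sketch of a KLEE test driver (my own example, in the style of the KLEE tutorials): the input is marked symbolic, the driver is compiled to LLVM bitcode (clang -emit-llvm -c) and run with the klee tool, and KLEE asks the solver for concrete values that reach each path, including one (such as INT_MIN, whose negation overflows) that violates the assertion.

#include <assert.h>
#include <klee/klee.h>

int buggy_abs(int x) {
    if (x < 0)
        return -x;   /* overflows for INT_MIN */
    return x;
}

int main(void) {
    int x;
    klee_make_symbolic(&x, sizeof(x), "x");  /* let the solver choose x */
    assert(buggy_abs(x) >= 0);               /* KLEE reports a path that violates this */
    return 0;
}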

Another tool is AFL-Fuzz: a fuzzer that compiles instrumentation into the target to improve coverage.
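A hypothetical fuzz target in the style AFL expects (my own sketch; parse_record stands in for whatever parser is under test): the program reads untrusted data from stdin and hands it to the parser. Building it with AFL's compiler wrappers (afl-gcc / afl-clang) inserts the coverage instrumentation, and afl-fuzz then mutates the inputs.

#include <stdio.h>

/* Placeholder for the real parser under test (hypothetical). */
static int parse_record(const unsigned char *buf, size_t len) {
    return (len > 0 && buf[0] == 'R') ? 0 : 1;
}

int main(void) {
    unsigned char buf[4096];
    size_t len = fread(buf, 1, sizeof(buf), stdin);   /* fuzzer-controlled input */
    return parse_record(buf, len);
}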

A very interesting enforcement tool is Sledge Hammer, which replaces calls to malloc(), calloc() and realloc() with versions that fail with a designated probability.
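The idea can be sketched in a few lines of C (my own illustration, not the actual tool): allocations in the code under test are redirected to a wrapper that returns NULL with a configurable probability, so forgotten error-handling paths get exercised.

#include <stdlib.h>

#define FAIL_PERCENT 10   /* fail roughly 10% of allocations */

/* Wrapper that probabilistically simulates an out-of-memory condition. */
void *failing_malloc(size_t size) {
    if (rand() % 100 < FAIL_PERCENT)
        return NULL;
    return malloc(size);
}

/* Included after the definition above, this redirects plain malloc() calls
   in the code under test to the failing version. */
#define malloc(size) failing_malloc(size)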

The last two tools he presented are Poison Pill, a header file to be included that "poisons" banned keywords, and KLEE-Unit, which couples KLEE (symbolic execution) with a unit-testing methodology.
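Poisoning identifiers is something GCC and Clang support natively, so a header along these lines is one way such a "poison pill" could look (a sketch under that assumption, not necessarily how the presented tool works): any later use of the listed functions becomes a compile-time error.

/* poison.h - make the use of banned functions a hard compiler error. */
#ifndef POISON_H
#define POISON_H

#pragma GCC poison gets strcpy strcat sprintf

#endif /* POISON_H */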

 

In his conclusion he mentioned: "Whatever languages and tooling your organization uses, aiming to maximize the semantic quality and verifiability will yield positive results."

He closed his talk with a picture that should remind us of the high costs of bugs in production:

[Image: conclusion slide]

His TR16 slides are available here: Towards a LangSec-aware SDLC

See you at TROOPERS17!

Sven
