Low-level languages such as C or C++ often require, or at the very least allow, developers to perform fine-grained memory management. Even experienced professionals may get bitten by memory-related errors, simply because they are often hard to spot and may trigger unpredictably at run time. Hereafter, "memory management" refers to the broad set of operations that handle available memory at the byte level.
This category includes several bugs resulting from inappropriate actions, among which are:
- using a previously deallocated memory region;
- forgetting to deallocate some memory region;
- accessing data outside the bounds of an allocated buffer;
- dereferencing a `NULL` or otherwise invalid pointer.
The most common and well-known example is probably the stack buffer overflow, where data, possibly coming from an untrusted source, is copied into a buffer that resides on the program stack. The lack of bounds checks may result in data being written outside the designated area, overwriting other elements on the stack, including local variables and the saved return address. In particular, controlling the return address means controlling the program flow after the current function returns, thus enabling the attacker to reach unexpected code regions and, ultimately, perhaps execute arbitrary code.
Whether it is the size and complexity of the application creating confusion, or merely a moment of forgetfulness or distraction on the developer's part, these bugs keep creeping into production, even though the developer is likely aware of what needs to be done. For example, inadvertently opening up access to bytes outside a memory region is often caused by incorrectly implemented offset-computation logic.
The impact depends on the kind of bug, the work performed by the application, the attack surface exposed to the attacker, and a considerable number of other factors that make successful binary exploitation an often strenuous endeavour. From an attacker's perspective, the impact ranges from a barely controllable perturbation of the program flow, to a program crash, up to data exfiltration or arbitrary code execution.
There is no silver bullet for these problems. The most effective weapon developers have in their arsenal is strict adherence to safe coding practices, such as modularizing the code so that the sensitive portion of the program is confined to a small library upon which safer abstractions can be built. Adopting a robust approach in the face of foreseeable failure modes is also perennially good advice. For example, even though the program logic should ensure that `free` is called exactly once on a particular pointer, it is wise to set that pointer to `NULL` afterwards, so that an accidental second invocation does not crash the program.
In addition to that, many tools can help developers spot these bugs before the software is released, and an appropriate selection of them should be employed throughout the application's development life cycle. Certain compiler flags, such as the `-fsanitize` family of options (e.g. `-fsanitize=address`), help catch a number of memory violations and leaks during the early testing phase. Tools like `valgrind` can also run the program in a controlled environment in which such problems are logged at run time.
Another increasingly important tool is the fuzzer, which feeds the program, or just a subset of it, random, invalid, or unexpected input in order to stress-test the application. Fuzzers push the limits so far that even fringe cases emerge: inputs that are unlikely to occur during normal usage but may well be crafted by a malicious user. Thanks to excellent projects like libFuzzer, fuzzers are now often a staple ingredient of Continuous Integration (CI) test suites.
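A minimal libFuzzer harness looks like the sketch below: the fuzzer repeatedly calls `LLVMFuzzerTestOneInput` (the real libFuzzer entry point) with mutated inputs, here exercising a hypothetical `parse_header` function. Building with `clang -g -fsanitize=fuzzer,address harness.c` links in the fuzzing driver, so no `main` is written by hand:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical function under test: parses a 4-byte "HDR<version>" header
   and returns the version byte, or -1 on malformed input. */
int parse_header(const uint8_t *data, size_t size) {
    if (size < 4 || memcmp(data, "HDR", 3) != 0)
        return -1;
    return data[3];
}

/* libFuzzer entry point, invoked with each mutated input.
   Build with: clang -g -fsanitize=fuzzer,address harness.c */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_header(data, size);
    return 0;   /* non-zero return values are reserved by libFuzzer */
}
```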
Verify that the application uses memory-safe string, safer memory copy and pointer arithmetic to detect or prevent stack, buffer, or heap overflows. Verify that input validation techniques are used to prevent overflows.
- OWASP ASVS: 5.4