If we had to choose a particularly damaging class of vulnerability, it would most likely be arbitrary code execution, even more so if it can be exploited remotely. In the first entry of this series we introduced the issues that poorly managed memory can cause. The second one covered double free. Now we are going to look at more examples.
Dangling Pointers
Manual memory management is complex: we must pay attention to the order of operations, to where resources are obtained and to where we stop using them, so that they can be freed under the right conditions.
It also requires tracking every copy of a pointer or reference because, if the resource is freed too early, those copies become “dangling”: they still allow using a resource that has already been freed. Let’s see an example:
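The original listing appears only as a screenshot, so here is a minimal C sketch of what it likely shows (the names p1 and p2 and the printed message are chosen to match the output quoted below; the string contents and block size are assumptions):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p1 = malloc(32);          /* reserve a block on the heap */
        if (p1 == NULL)
            return 1;
        strcpy(p1, "hola");             /* hypothetical contents */

        char *p2 = p1;                  /* a copy of the pointer */

        free(p1);                       /* the block is released here... */

        /* ...but p2 still holds the old address: it is now "dangling",
           and dereferencing it is undefined behaviour */
        printf("(p2) apunta a %s\n", p2);

        return 0;
    }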

Let’s run:
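A hypothetical run of the sketch above (the file name is an assumption; the exact result is undefined behaviour and varies between systems, but on the author’s system nothing is printed after the message):

    $ gcc dangling.c -o dangling
    $ ./dangling
    (p2) apunta a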

This leaves us with a pointer into a heap area that is no longer valid (note that nothing is printed after “(p2) apunta a…”). Moreover, there is no way to know whether a resource whose address has been copied is still valid, just as it is not possible to recover a leaked block once its reference is lost (we will see this later).

To tag a pointer as invalid, we assign the NULL macro to it (in “modern” C++ we would assign nullptr) to signal that it no longer points at anything. But if that NULL is never checked, the tag is useless. Therefore, for every pointer that uses a resource, its “non-NULLity” must be verified.
The good practice is therefore: once we free memory, we assign NULL (or nullptr in C++) to tag that the pointer no longer points at anything valid; and before making use of the pointer, whether to copy it or to dereference it, we verify that it is valid.
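As a minimal sketch of that good practice, reusing the hypothetical p1/p2 example from above:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p1 = malloc(32);
        if (p1 == NULL)
            return 1;
        strcpy(p1, "hola");
        char *p2 = p1;

        free(p1);
        p1 = NULL;                  /* tag: no longer points at anything  */
        p2 = NULL;                  /* every copy must be invalidated too */

        /* verify the pointer before any use, whether copying or
           dereferencing it */
        if (p2 != NULL)
            printf("(p2) apunta a %s\n", p2);
        else
            puts("(p2) is no longer valid");

        return 0;
    }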
Memory Leaks
The opposite of using a memory area that is no longer valid is having no pointer left that points at a still-valid area. Once the reference is lost, we can no longer free that reserved memory, and it will occupy that space until the program ends. This is a big issue if the program never finishes, such as a server that normally runs until the machine is shut down or some other unavoidable interruption occurs.
An example (if you want to replicate it, do it in a virtualised system for testing):
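Again, the original listing is a screenshot; a minimal C sketch with the behaviour described below might look like this (the 1 MiB block size is an assumption; do not run it outside a disposable virtual machine):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p;

        for (;;) {                        /* endless loop                 */
            p = malloc(1024 * 1024);      /* reserve 1 MiB                */
            if (p == NULL)
                continue;                 /* keep pressing the allocator  */
            memset(p, 'A', 1024 * 1024);  /* touch the pages so the
                                             memory is really committed   */
            /* p is overwritten on the next iteration: the previous
               block is never freed and its reference is lost */
        }
    }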

The code above grabs chunks of memory until all the heap memory is used up. This causes the system to run out of RAM, start swapping and, finally, the OOM-killer ends the process for overconsuming memory.
What is the OOM-killer? It is a special kernel procedure (on Linux systems) that terminates processes so that the system does not become destabilised. In the screenshot we can see the output of the ‘dmesg’ command, where the kill of our process is shown, due to the resource cost it represents for the system.
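To locate that message after the fact, one option is filtering the kernel log with standard dmesg plus grep (the exact wording of the log line varies between kernel versions):

    $ dmesg | grep -i "out of memory"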

If we analyse the code, we see that we enter an endless loop where memory is reserved and the same pointer is reassigned to each new block. The previous blocks are never freed and their references are lost, which produces a relentless memory leak (much like a burst pipe) that ends drastically.
This is obviously a dramatisation of what would happen in a real program, but that is essentially how it occurs. The problem is that the reserved memory stops being tracked at some point, the lost references accumulate, and eventually it becomes a problem. In applications with memory leaks that we only use for a few hours, we may only notice a slowdown (this was more evident when RAM was more limited) or a growing memory footprint. On servers, however, the issue commonly ends in a service outage.
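For contrast, a hedged sketch of the same loop with the leak removed: each block is freed before its reference is lost, so memory use stays flat no matter how long the program runs:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        for (;;) {
            char *p = malloc(1024 * 1024);
            if (p == NULL)
                break;                    /* allocation failed: stop      */
            memset(p, 'A', 1024 * 1024);
            /* ...use the block... */
            free(p);                      /* release it before the
                                             reference is lost            */
        }
        return 0;
    }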
In the next post we will see the use of uninitialized memory.
Don’t forget to read the previous entries of this series: