Dylan Falconer


Memory management is too hard for you - Programming Lies

From the Ivory Tower, the Street seems too clumsy to juggle ideas.

Sep 07, 2023


If you hang around in the online spaces where programmers congregate for even a short time, you will hear a particular sentiment about manual memory management echoed by most. That is, “manual memory management is too difficult for mere mortals; it’s too error prone, and even the greybeards get it wrong”.

There are multi-decade projects built specifically to obscure memory from the programmer - most commonly used programming languages use garbage collection rather than allowing the programmer direct control over memory allocations. It's understandable that the first instinct is to disallow certain actions - similar to how, when there are small children in the house, you lock away the knives and cover the power points.

However, unlike children, programmers are told not to grow and take their own risks - they are kept wrapped in bubble wrap, unable to hurt themselves, with their movement severely limited and their development stunted.

The Teaching Problem

When learning about data structures such as linked lists, you were probably taught to allocate and free using malloc and free directly. Whether learning from a textbook or online, the examples commonly found are something akin to this:

Node* push(LinkedList* list, int data) {
    Node* new_node = (Node*)malloc(sizeof(Node));
    new_node->data = data;
    new_node->next = list->head;
    list->head = new_node;
    return new_node;
}

malloc is seen as a mystical operation that provides memory when either the stack won’t suffice or control over the memory’s lifetime is required. Depending on the path you have taken on your programming journey, this may be the extent of your understanding.

As with all things, there are trade-offs to this way of teaching. On the one hand, it’s easier to teach about the data structure itself without delving into the intricacies of how malloc works or considering alternative memory allocation methods. On the other hand, linked lists are often criticised as slow and fragmented because proper memory allocation wasn’t utilised from the start. Ironically, malloc/free’s internal implementation uses a linked list to denote free blocks of memory.
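
To make that irony concrete, the free list inside a typical allocator can be pictured as something like this - a simplification for illustration only, since real malloc implementations are far more sophisticated:

#include <stddef.h>

// One node per free block, typically threaded through the free memory itself.
typedef struct FreeBlock {
    size_t size;            // How many bytes this free block spans.
    struct FreeBlock *next; // The next free block in the list.
} FreeBlock;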

Carrying It Forward

Due to this learning process, many programmers reach for malloc early and often. This leads to bugs, as the lifetimes of data are ignored or thought about in fragmented ways. Human error is inevitable when building complex systems. As a result, manual memory management gets the blame, rather than the lack of any process that would mitigate the bugs.

Enter garbage collection, smart pointers, reference counting, etc. - all designed to prevent programmers from shooting themselves in the foot. And all come with overhead: more CPU cycles used, which means more money spent and more electricity wasted.

Garbage Collection

The two reasons most often given for using a garbage collected language are: program safety and programmer productivity. One of these claims is false and the other is hard to quantify and also probably false.

Program Safety

One commonly cited program safety issue that arises with manual memory management is the dangling pointer - a pointer that no longer points to a valid block of memory.

There are a few ways to create a dangling pointer, but for the sake of simplicity, here is a classic use after free error.

#include <stdio.h>
#include <stdlib.h>

int main() {
    int* ptr = (int *)malloc(sizeof(int));
    *ptr = 69;
    free(ptr); // Memory is deallocated here.
    *ptr = 420; // Use after free error: accessing memory that has been deallocated.
    printf("value: %d\n", *ptr);
    return 0;
}

The thing that makes this type of bug so insidious is that it’s not a compile error - it’s undefined behaviour, and that means it’s up to the compiler implementation to decide what happens. You will only get a warning if you have the correct warning level turned on for your C compiler. In this case, using GCC 12.1, the program works as expected, though it’s not guaranteed to do so.

You may be wondering why I'm explaining dangling pointers and their insidiousness while still recommending manual memory management. It's because GC languages have exactly the same problems, just inverted.

In a GC language, rather than having a dangling pointer, you can easily create a memory leak using the very tool that is meant to replace malloc and make it safe - new.

var myThing = new MyThing();
myList.Add(myThing);

You as the programmer can easily spot the potential issue in this case - it's contrived, after all: if myList is long-lived, it keeps myThing reachable, so the garbage collector can never reclaim it, even once it's no longer needed.

The point here is not that you can create other types of memory issues using GC’d languages, but that you are simply trading thinking about malloc/free for thinking about object references instead.

Let’s just show one more simple C# example to see how easy it is to make a memory leak.

public class MyThing {
    public MyThing() {
        GlobalEvents.SomeEvent += HandleEvent;
    }

    private void HandleEvent(object sender, EventArgs e) {
        // Do something.
    }
}

In the above code, GlobalEvents.SomeEvent holds a reference to each instance of MyThing until it is unsubscribed. This is, once again, the same type of cognitive burden that causes mistakes with malloc/free.

Programmer Productivity

Programmer productivity seems to be almost impossible to quantify properly. It’s certainly not X lines of code written in Y time, commit frequency, or test coverage. Perhaps something like “frequency of difficult problems solved with robust solutions” - but even then, how do we score different problems?

Rather than trying to quantify what makes programmers productive, we can simply compare the mental overhead and call-site code requirements of garbage collection versus manual memory management.

As stated above, the GC promise of not having to worry about memory does not hold in practice. The programmer must still consider object lifetimes and keep track of object references. If the argument is that manual memory allocation takes more lines of code, that may be true in the naive case, where each allocation has a matching free, but the argument falls apart once custom memory allocators are involved.

The Solution: Memory Allocators

Memory allocators allow programmers to group object lifetimes together and reason about them far more easily than with the usual malloc/free or new.

Rather than tracking individual object lifetimes in your head, you decide up-front which of your memory allocators to use and then stop worrying about it. This is the antidote to the fragmented way of thinking we have been discussing thus far.

The Arena Allocator

The arena allocator is an extremely simple and powerful tool to have in your arsenal. It hands out memory sequentially from a single block, and everything is freed all at once.

The implementation of an arena allocator can be as simple as:

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint8_t u8;

size_t current_position = 0;
size_t arena_size = 0;
u8 *arena_memory = NULL;

// Reserve the backing block once, up front.
void arena_init(size_t size) {
    arena_memory = malloc(size);
    arena_size = size;
    current_position = 0;
}

u8 *arena_alloc(size_t bytes) {
    // Note: a real arena would also align each allocation for its type.
    if (current_position + bytes <= arena_size) {
        u8 *ptr = &arena_memory[current_position];
        current_position += bytes;
        return ptr;
    }
    return NULL; // Out of space.
}

void arena_free(void) {
    // "Free" everything at once by resetting the cursor.
    current_position = 0;
}

This implementation will not do for any serious use-cases, but the concept remains the same.

With this one simple tool, the programmer can allocate a variety of objects with entirely different types and sizes, but the same lifetime. This is much closer to how we actually use memory in the first place.
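
For instance, here is a rough usage sketch built on the toy arena above - the Entity struct and the message buffer are made-up names purely for illustration, and error handling is elided:

typedef struct Entity {
    float x, y;
} Entity;

void example(void) {
    arena_init(4096);

    // Two allocations with entirely different types and sizes...
    Entity *player = (Entity *)arena_alloc(sizeof(Entity));
    char *message = (char *)arena_alloc(64);

    // ... used for the same stretch of the program ...

    arena_free(); // ...and released together with a single call.
}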

During the initialisation phase of your program, you can allocate permanent resources using your permanent arena allocator that you never have to free or worry about. Then, each update of your program, you may want to construct some strings and log them to a file - you can have an arena that gets cleared at the end of each update cycle.
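
A sketch of that per-update pattern, reusing the single arena from the snippet above - the loop condition and frame time are hypothetical stand-ins for whatever your program actually tracks:

#include <stdio.h>

void run_updates(void) {
    arena_init(4096);

    int running = 1;             // Stand-in for your real loop condition.
    double frame_time_ms = 16.6; // Stand-in for a measured frame time.

    while (running) {
        // Build this update's temporary strings from the arena.
        char *log_line = (char *)arena_alloc(128);
        snprintf(log_line, 128, "frame time: %.1f ms\n", frame_time_ms);
        fputs(log_line, stderr);

        // ... the rest of the update ...

        arena_free(); // Everything allocated this update is cleared at once.
        running = 0;  // Stop after one iteration in this sketch.
    }
}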

The lifetimes of your objects now depend on the memory allocator you decide to use, and the burden of keeping individual lifetimes in your head is lifted.

Memory allocation techniques are often covered in advanced C/C++, systems, or OS programming materials, so many programmers are unfamiliar with them.

I believe that if this concept were taught much earlier in the programming journey, it would lead to better quality software and less confusion about how memory works.

Conclusion

The main challenges of manual memory management stem from how programming is taught, which is why so many programmers find it difficult.

Garbage collection, though not a problem in and of itself, is lauded as the solution to memory bugs. However, it fails to live up to that reputation and, by default, merely obscures a different class of memory bugs.

Custom memory allocators can be simple to implement, easy to understand, and have no hidden functionality.

Understanding memory management is essential to creating efficient and robust software. The hardware does not disappear when we try to abstract it away.

Knowing just a little bit more about how the hardware works can save CPU cycles, reducing costs and conserving energy.

Whether you are a novice or experienced, I recommend looking into custom memory allocators if you haven’t already - they are a game-changer.

Continual learning is part of the programming journey - embrace it. Every year I feel like I was a fool the year before.

I hope that in the future we will teach memory management the right way sooner. New systems languages like Zig and Odin have incorporated memory allocators as core features - this is a step in the right direction.

Golang recently added an arena allocator that works in tandem with its GC. I’m quite interested to see how that pans out.

If you’d like to learn more about arenas, I recommend these great resources: gingerBill, Ryan Fleury.

If you enjoyed this post, please consider joining the mailing list. Thank you for reading!

