I decided to try my hand at making my code a bit safer where weak references are concerned, so I set out to build something that tracks references to memory locations. I soon decided that was a stupid idea and simplified it so that only the memory locations' validity is monitored. That way, all I need is essentially a custom null check to see whether my weak references are still valid. I tried to do this in the simplest and most aesthetically pleasing way possible. Naturally, that means (to me) attempting to overload the new and delete keywords for every possible type. Well, since you can't do that, I decided to use macros :)
I also came up with two separate static classes whose only real difference is the data structure they use.
This one represents my first iteration on the idea. Using a set, it simply keeps track of which memory locations have not yet been freed. Although the next iteration is more useful, I kept this one around for its slight speed advantage.
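To make the idea concrete, here is a minimal sketch of what a set-based monitor plus macro wrappers could look like. The names (`SetMemMon`, `TRACKED_NEW`, `TRACKED_DELETE`) are my placeholders for the approach described, not the actual code:

```cpp
#include <set>

// Hypothetical sketch: a set of addresses that have been new'd but not
// yet deleted. Membership in the set is the "custom null check".
class SetMemMon {
public:
    static void track(void* p)   { live().insert(p); }
    static void untrack(void* p) { live().erase(p); }
    // Is this address still a live (unfreed) allocation?
    static bool valid(void* p)   { return live().count(p) != 0; }
private:
    static std::set<void*>& live() {
        static std::set<void*> s;  // memory locations not yet freed
        return s;
    }
};

// Macros stand in for per-type overloads of new and delete.
#define TRACKED_NEW(T) \
    ([]{ T* p = new T; SetMemMon::track(p); return p; }())
#define TRACKED_DELETE(p) \
    do { SetMemMon::untrack(p); delete (p); } while (0)
```

After `TRACKED_DELETE(p)`, `SetMemMon::valid(p)` returns false, which is exactly the weak-reference check: instead of dereferencing and hoping, you ask the monitor first.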
Using a map, I store the address as the key and the number of bytes allocated as the value. This way, I can ALSO look up the size of any dynamically allocated array and, expanding on that, an element count as well! I am happy with the results and think I will likely be using MapMemMon all the time in the future.
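A sketch of the map version, again with hypothetical names (`MapMemMon` and the array macros are my guesses at the shape of the idea). Because the byte count is recorded at allocation time, the element count falls out of a simple division:

```cpp
#include <cstddef>
#include <map>

// Hypothetical sketch: address -> allocated byte count.
class MapMemMon {
public:
    static void track(void* p, std::size_t bytes) { table()[p] = bytes; }
    static void untrack(void* p)                  { table().erase(p); }
    static bool valid(void* p) { return table().count(p) != 0; }
    // Bytes behind a tracked pointer (0 if it was never tracked or freed).
    static std::size_t size_of(void* p) {
        auto it = table().find(p);
        return it == table().end() ? 0 : it->second;
    }
private:
    static std::map<void*, std::size_t>& table() {
        static std::map<void*, std::size_t> m;
        return m;
    }
};

// Array allocation records sizeof(T) * n, enabling size and count queries.
#define TRACKED_NEW_ARRAY(T, n) \
    ([&]{ T* p = new T[(n)]; MapMemMon::track(p, sizeof(T) * (n)); return p; }())
#define TRACKED_DELETE_ARRAY(p) \
    do { MapMemMon::untrack(p); delete[] (p); } while (0)
#define ELEMENT_COUNT(T, p) (MapMemMon::size_of(p) / sizeof(T))
```

This is the extra bookkeeping the set version can't do, and also where the extra cost in the benchmark below comes from: every allocation pays for a map insertion instead of a set insertion.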
So for performance testing, among other stress tests, I basically just ran a bunch of new[] and delete[] calls in a loop and compared the clock times against regular new[] and delete[] calls. I'm happy enough with the results, given the overhead. Should there be any curiosity, the following are my averaged results for a run of speedtest.cpp:
- Regular new[] and delete[]: 160 clicks for 1000 allocated and deleted Foo arrays of size 1000
- SetMemMon new[] and delete[]: 370 clicks for 1000 allocated and deleted Foo arrays of size 1000
- MapMemMon new[] and delete[]: 580 clicks for 1000 allocated and deleted Foo arrays of size 1000
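The actual speedtest.cpp isn't shown here, but a minimal harness in the spirit of the numbers above (1000 allocate/free cycles of Foo arrays of size 1000, timed with `std::clock`) might look like this; `Foo` and `time_regular` are my stand-ins:

```cpp
#include <ctime>

struct Foo { int x; };  // stand-in for the Foo used in speedtest.cpp

// Time 1000 allocate/free cycles of Foo arrays of size 1000 using plain
// new[]/delete[]. Swapping in the monitored macros gives the other rows.
long time_regular() {
    std::clock_t start = std::clock();
    for (int i = 0; i < 1000; ++i) {
        Foo* a = new Foo[1000];
        delete[] a;
    }
    return static_cast<long>(std::clock() - start);
}
```

Note that `std::clock` measures processor "clicks" (ticks), so absolute numbers vary by machine; only the ratios between the three rows are really meaningful.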