
Re: [E-devel] Memory pool management



On 3/21/06, Cedric BAIL <cedric.bail@free.fr> wrote:
>
>         I have already used the same approach in another project that heavily
> used malloc (an assembler, http://savannah.nongnu.org/projects/aasm/), and the
> win was quite impressive compared to malloc (around 10 times faster). But this
> was a particular case and will not really apply to Enlightenment, I believe.
>         I will take a look tomorrow at how to provide some benchmarks for this.

Thanks for the reply. I haven't looked at the code yet, but this info
will be helpful when I review it.

> Well, it doesn't really work in the same way. g_slice works like malloc or
> free. It doesn't have any information from the application to help it
> allocate objects used at the same time in the same memory area. The current
> design of ememoa provides a pool per object class, so every time we ask for
> a new object, it will be near the other objects of the same type. The
> allocation is certainly fast (no system call most of the time, and a bitmask
> is used to search for an empty slot), but more importantly, all objects of
> the same type are very likely to be in the same memory area. It also provides
> the ability to do some garbage collection and to destroy an entire pool in
> one function call.
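
For anyone else following the thread who hasn't looked at the code yet, here
is roughly the kind of per-type pool with a bitmask free-slot search I
understand you to be describing. The names and layout are mine for
illustration only, not ememoa's actual API:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define POOL_SLOTS 32   /* one 32-bit mask => 32 fixed-size slots per pool */

struct pool {
        uint32_t used;          /* bit i set => slot i is allocated   */
        size_t   obj_size;      /* fixed object size for this pool    */
        char    *mem;           /* one contiguous block for all slots */
};

static int pool_init(struct pool *p, size_t obj_size)
{
        p->used = 0;
        p->obj_size = obj_size;
        p->mem = malloc(obj_size * POOL_SLOTS);
        return p->mem ? 0 : -1;
}

static void *pool_alloc(struct pool *p)
{
        int i;

        for (i = 0; i < POOL_SLOTS; i++) {
                if (!(p->used & (1u << i))) {   /* first free slot */
                        p->used |= (1u << i);
                        return p->mem + i * p->obj_size;
                }
        }
        return NULL;                            /* pool is full */
}

static void pool_free(struct pool *p, void *obj)
{
        size_t i = ((char *)obj - p->mem) / p->obj_size;

        p->used &= ~(1u << i);
}

static void pool_destroy(struct pool *p)
{
        free(p->mem);           /* drops every remaining object at once */
        p->mem = NULL;
        p->used = 0;
}

int main(void)
{
        struct pool ints;
        int *a, *b;

        if (pool_init(&ints, sizeof(int)) != 0)
                return 1;

        a = pool_alloc(&ints);
        b = pool_alloc(&ints);
        printf("a=%p b=%p (adjacent slots in the same pool)\n",
               (void *)a, (void *)b);

        pool_free(&ints, a);
        pool_destroy(&ints);    /* one call releases the whole pool */
        return 0;
}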

Actually, this is quite similar to how malloc/free and g_slice work
internally. While they don't have application hints about the type of each
allocation, they round each allocation up to roughly the nearest power of
two (implementations vary on this point) and keep pools of memory of each
size available for reuse. So if you perform a series of alloc/free/alloc
calls of sizes that round to the same class, you should, in theory, get the
first allocation back on the allocation right after the free. The code
segment below demonstrates that behavior; it allocates a second buffer after
the first malloc so that heap resizing can't produce a false positive.

#include <stdio.h>
#include <stdlib.h>

#define STRSIZE 5

int main(void)
{
        char *check;
        char *buffer;

        check = malloc(STRSIZE);
        printf("First alloc: %p\n", (void *)check);

        /* Allocate something else so that freeing 'check' cannot simply
         * shrink the heap; its slot has to go back on a free list. */
        buffer = malloc(STRSIZE * 2);

        free(check);

        /* A slightly different size that rounds to the same size class
         * should get the just-freed slot back. */
        check = malloc(STRSIZE + 1);
        printf("Second alloc: %p\n", (void *)check);

        free(check);
        free(buffer);

        return 0;
}

On my system the output is:
First alloc: 0x300140
Second alloc: 0x300140

While it doesn't group objects of the same type in the same area, in most
of the EFL this has little impact, as each structure needs to be
reinitialized completely anyway. The only real benefit would be for objects
small enough that several of them fit in a single cache line (typically
64 bytes), so that objects likely to be used soon end up in the same line.

In your case, there is definitely potential for a significant improvement
because of the optional locking. IIRC, this is one place where g_slice gets
its advantage over malloc: its allocation routines don't incur as much
locking overhead, helped by the use of bitmasks (as you mentioned).
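
To make it concrete, here is the sort of thing I have in mind: a pool that
only takes a lock when it was created as shared, so single-threaded callers
skip the locking cost entirely. This is just a sketch with made-up names
(and a trivial bump allocator to keep it short), not how ememoa or g_slice
is actually implemented:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

struct opt_pool {
        pthread_mutex_t lock;
        int             shared;   /* 0 => single-threaded, never lock */
        char           *mem, *next, *end;
};

static int opt_pool_init(struct opt_pool *p, size_t size, int shared)
{
        p->mem = p->next = malloc(size);
        if (!p->mem) return -1;
        p->end = p->mem + size;
        p->shared = shared;
        if (shared) pthread_mutex_init(&p->lock, NULL);
        return 0;
}

static void *opt_pool_alloc(struct opt_pool *p, size_t size)
{
        void *obj = NULL;

        if (p->shared) pthread_mutex_lock(&p->lock);
        if (p->next + size <= p->end) {         /* bump-pointer allocation */
                obj = p->next;
                p->next += size;
        }
        if (p->shared) pthread_mutex_unlock(&p->lock);
        return obj;
}

int main(void)  /* compile with -pthread */
{
        struct opt_pool p;

        if (opt_pool_init(&p, 4096, 0) == 0) {  /* unshared: no locking cost */
                printf("object at %p\n", opt_pool_alloc(&p, 32));
                free(p.mem);
        }
        return 0;
}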

Looking forward to your benchmarks.

Thanks,
Nathan