- Dec 08, 2017
-
-
Paul Sokolovsky authored
Or it will be truncated on a 64-bit platform.
-
Paul Sokolovsky authored
To allow it to be more easily overridden for custom tracing.
-
- Dec 07, 2017
-
-
Paul Sokolovsky authored
Accessing them will crash immediately, instead of still working for some time until overwritten by some other data, which leads to much less deterministic crashes.
-
- Nov 29, 2017
-
-
Damien George authored
These checks are assumed to be true in all cases where gc_realloc is called with a valid pointer, so no need to waste code space and time checking them in a non-debug build.
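As a general illustration of the idea (not the actual py/gc.c code), assert-based checks disappear entirely from builds compiled with NDEBUG:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: a check that is assumed true in production can be
 * written as an assert, which compiles to nothing when NDEBUG is defined,
 * i.e. in a non-debug build. */
void *example_realloc_check(void *ptr, uintptr_t heap_start, uintptr_t heap_end) {
    // Debug builds abort on an invalid pointer; release builds pay no
    // code-space or run-time cost for this line.
    assert(ptr == NULL || ((uintptr_t)ptr >= heap_start && (uintptr_t)ptr < heap_end));
    return ptr; // placeholder for the real reallocation logic
}
```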
-
- Oct 04, 2017
-
-
Damien George authored
Header files that are considered internal to the py core and should not normally be included directly are:
- py/nlr.h - internal nlr configuration and declarations
- py/bc0.h - contains bytecode macro definitions
- py/runtime0.h - contains basic runtime enums

Instead, the top-level header files to include are one of:
- py/obj.h - includes runtime0.h and defines everything to use the mp_obj_t type
- py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h, and defines everything to use the general runtime support functions

Additional, specific headers (eg py/objlist.h) can be included if needed.
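As a sketch, a module that needs the general runtime support would then include headers like this (illustrative, not taken from a specific file):

```c
/* Include one of the top-level headers rather than the internal ones. */
#include "py/runtime.h"   // brings in mpstate.h, nlr.h, obj.h and runtime0.h
#include "py/objlist.h"   // a specific header, included only when actually needed
```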
-
- Aug 15, 2017
-
-
Stefan Naumann authored
It enables all the DEBUG_printf outputs in the py/ source code.
-
- Jul 31, 2017
-
-
Alexander Steffen authored
There were several different spellings of MicroPython present in comments, when there should be only one.
-
- Jul 12, 2017
-
-
Damien George authored
gc_free() expects either NULL or a valid pointer into the heap, so the checks for a valid pointer can be turned into assertions.
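A minimal sketch of that contract, using an invented heap-bounds check for illustration (not the real gc_free code):

```c
#include <assert.h>
#include <stdint.h>

// Hypothetical stand-in for the real "is this inside the GC heap?" test.
#define PTR_IN_HEAP(p, start, end) \
    ((uintptr_t)(p) >= (uintptr_t)(start) && (uintptr_t)(p) < (uintptr_t)(end))

/* Free accepts NULL or a valid heap pointer, so the validity check can be
 * an assertion rather than a run-time branch in release builds. */
void example_gc_free(void *ptr, void *heap_start, void *heap_end) {
    if (ptr == NULL) {
        return; // freeing NULL is a no-op
    }
    assert(PTR_IN_HEAP(ptr, heap_start, heap_end));
    // ... mark the corresponding block(s) as free ...
}
```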
-
- Apr 12, 2017
-
-
Damien George authored
If a finaliser raises an exception then it must not propagate through the GC sweep function. This patch protects against such a thing by running finaliser code via the mp_call_function_1_protected call. This patch also adds scheduler lock/unlock calls around the finaliser execution to further protect against any possible reentrancy issues: the memory manager is already locked when doing a collection, but we also don't want to allow any scheduled code to run, KeyboardInterrupts to interrupt the code, nor threads to switch.
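A hedged sketch of this pattern, with simplified names rather than the exact py/gc.c code:

```c
#include "py/runtime.h"

/* Run an object's finaliser so that an exception raised inside it is caught
 * by the protected call instead of propagating into the sweep, and keep the
 * scheduler locked around it so no scheduled callbacks run meanwhile. */
static void sweep_run_finaliser(mp_obj_t obj, mp_obj_t finaliser) {
    #if MICROPY_ENABLE_SCHEDULER
    mp_sched_lock();   // no scheduled code or KeyboardInterrupt while finalising
    #endif
    // Any exception raised by the finaliser is caught inside this call.
    mp_call_function_1_protected(finaliser, obj);
    #if MICROPY_ENABLE_SCHEDULER
    mp_sched_unlock();
    #endif
}
```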
-
- Aug 26, 2016
-
-
Damien George authored
There can be stray pointers in memory blocks that are not properly zero'd after allocation. This patch adds a new config option to always zero all allocated memory (via gc_alloc and gc_realloc) and hence help to eliminate stray pointers. See issue #2195.
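A toy sketch of the idea, with a hypothetical config name standing in for the real option:

```c
#include <stdlib.h>
#include <string.h>

// Hypothetical config name for illustration; the real option lives in mpconfig.h.
#define EXAMPLE_GC_ALWAYS_ZERO (1)

/* Zeroing every allocated block means no stale pointer values can survive
 * in freshly handed-out memory, so a conservative GC cannot mistake them
 * for live references. */
void *example_alloc(size_t n_bytes) {
    void *block = malloc(n_bytes);
    #if EXAMPLE_GC_ALWAYS_ZERO
    if (block != NULL) {
        memset(block, 0, n_bytes); // wipe any stray pointers left from previous use
    }
    #endif
    return block;
}
```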
-
- Jul 20, 2016
-
-
Paul Sokolovsky authored
Currently, MicroPython runs the GC when it cannot allocate a block of memory, which happens when the heap is exhausted. However, that policy can't work well with "infinite" heaps, e.g. ones backed by virtual memory - there will be a lot of swap thrashing long before the VM is exhausted. Instead, in such cases an "allocation threshold" policy is used: a GC is run after some number of allocations have been made. Details vary; for example, the number or total amount of allocations can be used, the threshold may be self-adjusting based on the GC outcome, etc.

This change implements a simple variant of such a policy for MicroPython. The amount of memory allocated so far is used for the threshold, to make it useful for the typical finite-size, and small, heaps used with MicroPython ports. Such a GC policy is indeed useful for these types of heaps too, as it allows fragmentation to be controlled better. For example, if the threshold is set to half the heap size, then for an application which usually makes a large number of small allocations, this will (try to) keep half of the heap in a nicely defragmented state for an occasional large allocation. For an application which doesn't exhibit such behaviour there won't be any visible effect, except for the GC running more frequently, which may however affect performance.

To address this, the GC threshold is configurable and is off by default. It is configured with a gc.threshold(amount_in_bytes) call (and can be queried by calling it without an argument).
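Below is a toy, self-contained sketch of such a threshold check (all names are invented for illustration; this is not the code in py/gc.c):

```c
#include <stdbool.h>
#include <stddef.h>

/* Keep a running total of bytes allocated since the last collection and
 * trigger a GC once it passes a user-set threshold, instead of waiting
 * for the heap to be exhausted. */
static size_t gc_alloc_amount;     // bytes allocated since the last GC
static size_t gc_alloc_threshold;  // 0 means the threshold policy is disabled

static bool should_collect(size_t n_bytes) {
    if (gc_alloc_threshold == 0) {
        return false; // threshold policy off: only collect on exhaustion
    }
    gc_alloc_amount += n_bytes;
    if (gc_alloc_amount >= gc_alloc_threshold) {
        gc_alloc_amount = 0; // reset the counter once a collection is triggered
        return true;
    }
    return false;
}
```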
-
- Jun 30, 2016
-
-
Paul Sokolovsky authored
Just like the maximum allocated block size, it's reported in allocation units (not bytes).
-
Paul Sokolovsky authored
Previously, if there was a chain of allocated blocks ending with the last block of the heap, it wasn't included in the 1/2-block or max-block-size stats.
-
- Jun 28, 2016
-
-
Damien George authored
There is no need since the GIL already makes gc and qstr operations atomic.
-
Damien George authored
GC_EXIT() can cause a pending thread (waiting on the mutex) to be scheduled right away. This other thread may trigger a garbage collection. If the pointer to the newly-allocated block (allocated by the original thread) is not computed before the switch (so it's just left as a block number) then the block will be wrongly reclaimed. This patch makes sure the pointer is computed before allowing any thread switch to occur.
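A simplified sketch of the fix, with invented names for the block-to-pointer conversion and the lock release:

```c
#include <stddef.h>
#include <stdint.h>

// Illustrative stand-ins (not the real MicroPython macros).
extern uint8_t *heap_pool;
#define BYTES_PER_BLOCK (16)
#define BLOCK_TO_PTR(b) ((void *)(heap_pool + (b) * BYTES_PER_BLOCK))
extern void example_gc_exit(void); // releases the GC mutex

/* Convert the block number to a real pointer before releasing the GC lock,
 * so any thread scheduled at unlock time already sees a reachable heap
 * pointer (not just a bare block number) and cannot reclaim the block. */
void *example_finish_alloc(size_t block) {
    void *ret = BLOCK_TO_PTR(block); // pointer computed while still holding the lock
    example_gc_exit();               // only now may another thread run a collection
    return ret;
}
```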
-
Damien George authored
-
Damien George authored
By using a single, global mutex, all memory-related functions (alloc, free, realloc, collect, etc) are made thread safe. This means that only one thread can be in such a function at any one time.
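A simplified sketch of that locking scheme, assuming the port provides the py/mpthread.h mutex primitives (the macro and variable names here are illustrative):

```c
#include "py/mpthread.h"

// One global mutex taken at the entry of every memory function and released
// at its exit, so alloc/free/realloc/collect are serialised across threads.
static mp_thread_mutex_t example_gc_mutex;

#define EXAMPLE_GC_ENTER() mp_thread_mutex_lock(&example_gc_mutex, 1)
#define EXAMPLE_GC_EXIT()  mp_thread_mutex_unlock(&example_gc_mutex)

void example_gc_mutex_init(void) {
    // Initialised once, when the heap/GC itself is set up.
    mp_thread_mutex_init(&example_gc_mutex);
}
```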
-
Damien George authored
-
- May 12, 2016
-
-
Paul Sokolovsky authored
The address printed was truncated anyway and in general confusing to an outsider. A line which dumps it is still left in the source, commented out, for peculiar cases when it may be needed (e.g. when running under a debugger).
-
Paul Sokolovsky authored
'=' is a pretty natural character for the tail, and gives a less dense picture in which it's easier to see what object types are actually there.
-
- May 11, 2016
-
-
Paul Sokolovsky authored
-
Paul Sokolovsky authored
These are typical consumers of large chunks of memory, so it's useful to see at least their number (how much memory they use isn't clearly shown, as the data for these objects is allocated elsewhere).
-
- Dec 27, 2015
-
-
Paul Sokolovsky authored
Previously, mark operations weren't logged at all, while it's quite useful to see the cascade of marks in case of over-marking (and in other cases too). Previously, a sweep was logged for each block of an object in memory, but that doesn't make much sense and just leads to longer output that is harder for a human to parse. Instead, log the sweep only once per object. This is similar to other memory manager operations, e.g. an object is allocated, then freed; or an object is allocated, then marked, otherwise swept (one log entry per operation, with the same memory address in each case).
-
- Dec 18, 2015
-
-
Damien George authored
Ideally we'd use %zu for size_t args, but that's unlikely to be supported by all runtimes, and we would then need to implement it in mp_printf. So the simplest and most portable option is to use %u and cast the argument to uint (= unsigned int). Note: the reason for the change is that UINT_FMT can be %llu (a size suitable for mp_uint_t), which is wider than size_t and prints incorrect results.
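A minimal illustration of that formatting choice (a standalone example, not the mp_printf code itself):

```c
#include <stdio.h>
#include <stddef.h>

/* Cast the size_t argument to unsigned int and print it with %u, since %zu
 * may not be supported by every printf implementation the ports rely on. */
void print_block_count(size_t n_blocks) {
    printf("blocks: %u\n", (unsigned int)n_blocks);
}
```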
-
- Dec 17, 2015
-
-
Damien George authored
size_t is the correct type to use to count things related to the size of the address space. Using size_t (instead of mp_uint_t) is important for the efficiency of ports that configure mp_uint_t to larger than the machine word size.
-
Damien George authored
-
Damien George authored
The GC should search for pointers within the heap. This patch makes a difference when an object is larger than a pointer (eg 64-bit NaN boxing).
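A toy sketch of that scanning behaviour, with assumed helper functions (invented for illustration):

```c
#include <stddef.h>

extern int ptr_is_in_heap(void *p);          // assumed predicate, for illustration
extern void mark_block_containing(void *p);  // assumed marker, for illustration

/* Every pointer-sized word in a block is examined and only values that
 * actually point into the heap are followed, so a pointer is still found
 * even when it lives inside an object wider than a machine pointer
 * (e.g. a 64-bit NaN-boxed value). */
void scan_block(void **start, size_t n_words) {
    for (size_t i = 0; i < n_words; i++) {
        void *candidate = start[i];
        if (ptr_is_in_heap(candidate)) {
            mark_block_containing(candidate); // conservative: may be a false positive
        }
    }
}
```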
-
- Dec 02, 2015
-
-
Paul Sokolovsky authored
-
- Nov 29, 2015
-
-
Damien George authored
This allows the mp_obj_t type to be configured to something other than a pointer-sized primitive type. This patch also includes additional changes to allow the code to compile when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of mp_uint_t, and various casts.
-
Damien George authored
The GC works with concrete pointers and so the types should reflect this.
-
- Nov 07, 2015
-
-
Dave Hylands authored
Currently, the only place that clears the bit is in gc_collect. So if a block with a finalizer is allocated and subsequently freed, and then the block is reallocated with no finalizer, the bit remains set. This could also be fixed by having gc_alloc clear the bit, but I'm pretty sure that free is called far less often than alloc, so doing it in free is more efficient.
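A sketch of the fix, using an illustrative finaliser-bit table rather than the real GC structures:

```c
#include <stddef.h>

// Illustrative bitmap of "has finaliser" flags, one bit per heap block.
extern unsigned char finaliser_table[];
#define FTB_CLEAR(block) (finaliser_table[(block) / 8] &= (unsigned char)~(1 << ((block) % 8)))

/* Clear the finaliser bit when a block is freed, so a later reallocation of
 * the same block without a finaliser does not inherit a stale flag. Doing it
 * in free rather than alloc is cheaper because free runs less often. */
void example_free_block(size_t block) {
    FTB_CLEAR(block); // drop any leftover finaliser flag for this block
    // ... add the block to the free list / mark it free ...
}
```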
-
- Sep 04, 2015
-
-
Damien George authored
-
- Jul 14, 2015
-
-
Damien George authored
Previous to this patch all interned strings lived in their own malloc'd chunk. On average this wastes N/2 bytes per interned string, where N is the number of bytes in a quantum of the memory allocator (16 bytes on 32-bit archs). With this patch interned strings are concatenated into the same malloc'd chunk when possible. Such chunks are enlarged in place when possible, and shrunk to fit when a new chunk is needed. RAM savings with this patch are highly varied, but should always show an improvement (unless only 3 or 4 strings are interned). The new version typically uses about 70% of the previous memory for the qstr data, and can lead to savings of around 10% of the total memory footprint of a running script. Costs about 120 bytes of code size on Thumb2 archs (depends on how many calls to gc_realloc are made).
-
- Apr 16, 2015
-
-
Damien George authored
-
- Apr 03, 2015
-
-
Damien George authored
-
- Feb 07, 2015
-
-
Damien George authored
Without mp_sys_path and mp_sys_argv in the root pointer section of the state, their memory was being incorrectly collected by GC.
-
- Jan 12, 2015
-
-
Damien George authored
-
- Jan 11, 2015
-
-
Damien George authored
-
- Jan 08, 2015
-
-
stijn authored
GC for unix/windows builds doesn't make use of the bss section anymore, so we no longer need the (sometimes complicated) build features and code related to it.
-
- Jan 07, 2015
-
-
Damien George authored
This patch consolidates all global variables in py/ core into one place, in a global structure. Root pointers are all located together to make GC tracing easier and more efficient.
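A simplified sketch of such a consolidated state structure (field and type names are illustrative, not the actual mpstate definitions):

```c
/* All of the core's globals live in one structure, with the GC root
 * pointers grouped so the collector can trace them as a single
 * contiguous region. */
typedef struct _example_state_t {
    // --- root pointers: scanned as one block by the GC ---
    void *loaded_modules;
    void *sys_path;
    void *sys_argv;
    // --- non-pointer state ---
    unsigned int qstr_count;
    int gc_lock_depth;
} example_state_t;

extern example_state_t example_state;

// Accessor macro so call sites read naturally, e.g. EXAMPLE_STATE(sys_path).
#define EXAMPLE_STATE(field) (example_state.field)
```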
-