  1. May 17, 2019
  2. Dec 20, 2018
  3. Aug 14, 2018
    • py/gc: In gc_alloc, reset n_free var right before search for free mem. · 91041945
      Damien George authored
      Otherwise there is the possibility that n_free starts out non-zero from the
      previous iteration, which may have found a few (but not enough) free blocks
      at the end of the heap.  If this is the case, and if the very first blocks
      that are scanned the second time around (starting at
      gc_last_free_atb_index) are found to give enough memory (including the
      blocks at the end of the heap from the previous iteration that left n_free
      non-zero), then memory will be allocated starting before the location that
      gc_last_free_atb_index points to, most likely leading to corruption.
      
      This serious bug did not manifest itself in the past because a gc_collect
      always resets gc_last_free_atb_index to point to the start of the GC heap,
      and the first block there is almost always allocated to a long-lived
      object (eg entries from sys.path, or mounted filesystem objects), which
      means that n_free would be reset at the start of the search loop.
      
      But with threading enabled and the GIL disabled, it is possible to
      trigger the bug via the following sequence of events:
      
      1. Thread A runs gc_alloc, fails to find enough memory, and has a non-zero
         n_free at the end of the search.
      2. Thread A calls gc_collect and frees a bunch of blocks on the GC heap.
      3. Just after gc_collect finishes in thread A, thread B takes gc_mutex and
         does an allocation, moving gc_last_free_atb_index to point to the
         interior of the heap, to a place where there is most likely a run of
         available blocks.
      4. Thread A regains gc_mutex and does its second search for free memory,
          starting with a non-zero n_free.  Since it's likely that the first
          block it searches is available, it will allocate memory which
          overlaps with the memory before gc_last_free_atb_index.
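
      The shape of the fix, as a simplified sketch (illustrative only, not
      the actual gc.c code; gc_try_alloc, total_blocks, block_is_free,
      n_blocks_needed and first_block_of_run are hypothetical stand-ins):

          static size_t gc_try_alloc(size_t n_blocks_needed) {
              for (int attempt = 0; attempt < 2; attempt++) {
                  // the fix: reset n_free on every attempt, so a partial
                  // run of free blocks found at the end of the heap on a
                  // failed first attempt can't be combined with blocks
                  // found at the (possibly moved) gc_last_free_atb_index
                  size_t n_free = 0;
                  for (size_t i = gc_last_free_atb_index; i < total_blocks; i++) {
                      if (block_is_free(i)) {
                          if (++n_free >= n_blocks_needed) {
                              return first_block_of_run(i, n_free);
                          }
                      } else {
                          n_free = 0;
                      }
                  }
                  gc_collect();  // first attempt failed: collect and retry
              }
              return (size_t)-1;  // out of memory
          }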
  4. Aug 02, 2018
  5. Jun 12, 2018
    • py/gc: Add gc_sweep_all() function to run all remaining finalisers. · 522ea80f
      Damien George authored
      This patch adds the gc_sweep_all() function, which does a garbage
      collection without tracing any root pointers: it thus frees all the
      memory and, most importantly, runs any remaining finalisers.
      
      This helps primarily with soft reset: it will close any open files
      and sockets, helping to get the system back to a clean state.
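
      Conceptually, the new function is a collection whose mark phase
      visits no roots; a sketch in terms of the existing collection hooks
      (not the exact implementation):

          gc_collect_start();  // begin a collection; nothing marked yet
          // no gc_collect_root() calls, so every object stays unmarked
          gc_collect_end();    // sweep: frees all blocks, runs finalisers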
  6. May 21, 2018
  7. May 13, 2018
    • py/mpstate.h: Adjust start of root pointer section to exclude non-ptrs. · 749b1617
      Damien George authored
      This patch moves the start of the root pointer section in mp_state_ctx_t
      so that it skips entries that are not pointers and don't need scanning.
      
      Previously, the start of the root pointer section was at the very beginning
      of the mp_state_ctx_t struct (which is the beginning of mp_state_thread_t).
      This was because the original assembler version of the NLR code was hard-coded to
      have the nlr_top pointer at the start of this state structure.  But now
      that the NLR code is partially written in C there is no longer this
      restriction on the location of nlr_top (and a comment to this effect has
      been removed in this patch).
      
      So now the root pointer section starts part way through the
      mp_state_thread_t structure, after the entries which are not root pointers.
      
      This patch also moves the non-pointer entries for MICROPY_ENABLE_SCHEDULER
      outside the root pointer section.
      
      Moving non-pointer entries out of the root pointer section helps to make
      the GC more precise and should help to prevent some cases of collectable
      garbage being kept.
      
      This patch also has a measurable improvement in performance of the
      pystone.py benchmark: on unix x86-64 and stm32 there was an improvement of
      roughly 0.6% (tested with both gcc 7.3 and gcc 8.1).
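
      The idea, with an illustrative (not the real) struct layout: fields
      that don't hold root pointers are grouped at the top, and the GC's
      root scan starts at the first pointer field after them.

          typedef struct _mp_state_thread_t {
              // non-pointer entries, excluded from the root pointer section
              size_t stack_limit;
              // start of root pointer section
              struct _nlr_buf_t *nlr_top;  // no longer required to be first
              // ... further root pointers follow and are scanned by the GC
          } mp_state_thread_t;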
  8. Feb 19, 2018
  9. Dec 11, 2017
    • py: Introduce a Python stack for scoped allocation. · 02d830c0
      Damien George authored
      This patch introduces the MICROPY_ENABLE_PYSTACK option (disabled by
      default) which enables a "Python stack" that allows memory to be
      allocated and freed in a scoped, or Last-In-First-Out (LIFO), way,
      similar to alloca().
      
      A new memory allocation API is introduced along with this Py-stack.  It
      includes both "local" and "nonlocal" LIFO allocation.  Local allocation is
      intended to be equivalent to using alloca(), whereby the same function must
      free the memory.  Nonlocal allocation is where another function may free
      the memory, so long as it's still LIFO.
      
      Follow-up patches will convert all uses of alloca() and VLA to the new
      scoped allocation API.  The old behaviour (using alloca()) will still be
      available, but when MICROPY_ENABLE_PYSTACK is enabled then alloca() is no
      longer required or used.
      
      The benefits of enabling this option are (or will be once subsequent
      patches are made to convert alloca()/VLA):
      - Toolchains without alloca() can use this feature to obtain correct and
        efficient scoped memory allocation (compared to using the heap instead
        of alloca(), which is slower).
      - Even if alloca() is available, enabling the Py-stack gives slightly more
        efficient use of stack space when calling nested Python functions, due to
        the way that compilers implement alloca().
      - Enabling the Py-stack with the stackless mode allows for even more
        efficient stack usage, as well as retaining high performance (because the
        heap is no longer used to build and destroy stackless code states).
      - With Py-stack and stackless enabled, Python-calling-Python is no longer
        recursive in the C mp_execute_bytecode function.
      
      The micropython.pystack_use() function is included to measure usage of the
      Python stack.
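
      How such a LIFO allocator works, as a minimal self-contained sketch
      (the buffer size and the lifo_alloc/lifo_free names are hypothetical,
      not the actual Py-stack API):

          #include <stddef.h>
          #include <stdint.h>

          static uint8_t pystack_buf[4096];           // fixed-size stack
          static uint8_t *pystack_cur = pystack_buf;  // current stack top

          void *lifo_alloc(size_t n_bytes) {
              n_bytes = (n_bytes + 7) & ~(size_t)7;   // keep alignment
              if (pystack_cur + n_bytes > pystack_buf + sizeof(pystack_buf)) {
                  return NULL;                        // out of stack space
              }
              void *ptr = pystack_cur;
              pystack_cur += n_bytes;
              return ptr;
          }

          void lifo_free(void *ptr) {
              // LIFO discipline: freeing a block rewinds the stack top to
              // it, which also frees anything allocated after it
              pystack_cur = (uint8_t *)ptr;
          }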
  10. Dec 08, 2017
  11. Dec 07, 2017
  12. Nov 29, 2017
  13. Oct 04, 2017
    • all: Remove inclusion of internal py header files. · a3dc1b19
      Damien George authored
      Header files that are considered internal to the py core and should not
      normally be included directly are:
          py/nlr.h - internal nlr configuration and declarations
          py/bc0.h - contains bytecode macro definitions
          py/runtime0.h - contains basic runtime enums
      
      Instead, the top-level header files to include are one of:
          py/obj.h - includes runtime0.h and defines everything to use the
              mp_obj_t type
          py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
              and defines everything to use the general runtime support functions
      
      Additional, specific headers (eg py/objlist.h) can be included if needed.
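
      For example, code calling runtime support functions should do:

          #include "py/runtime.h"  // brings in mpstate.h, nlr.h, obj.h, runtime0.h

      rather than including py/nlr.h or py/runtime0.h directly.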
  14. Aug 15, 2017
  15. Jul 31, 2017
  16. Jul 12, 2017
  17. Apr 12, 2017
    • py/gc: Execute finaliser code in a protected environment. · c7e8c6f7
      Damien George authored
      If a finaliser raises an exception then it must not propagate through the
      GC sweep function.  This patch protects against such a thing by running
      finaliser code via the mp_call_function_1_protected call.
      
      This patch also adds scheduler lock/unlock calls around the finaliser
      execution to further protect against any possible reentrancy issues: the
      memory manager is already locked when doing a collection, but we also don't
      want to allow any scheduled code to run, KeyboardInterrupts to
      interrupt the code, or threads to switch.
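
      The pattern, sketched (simplified, not the exact gc.c code; obj is a
      hypothetical stand-in for the object being swept):

          mp_obj_t dest[2];
          mp_load_method_maybe(obj, MP_QSTR___del__, dest);
          if (dest[0] != MP_OBJ_NULL) {
              mp_sched_lock();  // no scheduled code, interrupts or switches
              mp_call_function_1_protected(dest[0], dest[1]);  // exceptions can't propagate
              mp_sched_unlock();
          }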
  18. Aug 26, 2016
  19. Jul 20, 2016
    • py/gc: Implement GC running by allocation threshold. · 93e353e3
      Paul Sokolovsky authored
      Currently, MicroPython runs a GC when it cannot allocate a block of
      memory, which happens when the heap is exhausted.  However, that
      policy can't work well with "infinite" heaps, e.g. ones backed by
      virtual memory - there will be a lot of swap thrashing long before
      the VM is exhausted.  Instead, in such cases an "allocation
      threshold" policy is used: a GC is run after some number of
      allocations has been made.  Details vary; for example, the number or
      the total amount of allocations can be used, and the threshold may
      be self-adjusting based on the GC outcome, etc.
      
      This change implements a simple variant of such a policy for
      MicroPython.  The amount of memory allocated so far is used as the
      threshold measure, to make it useful for the typical finite-size,
      and small, heaps used with MicroPython ports.  Such a GC policy is
      indeed useful for these types of heaps too, as it allows better
      control of fragmentation.  For example, if the threshold is set to
      half the heap size, then for an application which usually makes a
      large number of small allocations, this will (try to) keep half of
      the heap memory in a nice defragmented state for an occasional
      large allocation.
      
      For an application which doesn't exhibit such behavior, there won't
      be any visible effects, except for the GC running more frequently,
      which however may affect performance.  To address this, the GC
      threshold is configurable, and is off by default.  It's configured
      with a gc.threshold(amount_in_bytes) call (and can be queried by
      calling it without an argument).
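
      The policy in sketch form (field names illustrative, not the exact
      gc.c code): the allocator keeps a running total of bytes allocated
      and triggers a collection once it crosses the configured threshold.

          // in the allocator, before searching for free blocks
          gc_alloc_amount += n_bytes;
          if (gc_alloc_threshold != 0 && gc_alloc_amount >= gc_alloc_threshold) {
              gc_alloc_amount = 0;  // reset the running total
              gc_collect();
          }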
  20. Jun 30, 2016
  21. Jun 28, 2016
  22. May 12, 2016
  23. May 11, 2016
  24. Dec 27, 2015
    • py/gc: Improve mark/sweep debug output. · 3ea03a11
      Paul Sokolovsky authored
      Previously, mark operations weren't logged at all, while it's quite
      useful to see the cascade of marks in case of over-marking (and in
      other cases too).  Previously, a sweep was logged for each block of
      an object in memory, but that doesn't make much sense and just led
      to longer output that is harder for a human to parse.  Instead, log
      a sweep only once per object.  This is consistent with other memory
      manager operations: e.g. an object is allocated, then freed; or an
      object is allocated, then marked, and otherwise swept (one log entry
      per operation, with the same memory address in each case).
  25. Dec 18, 2015
    • py/gc: When printing info, use %u instead of UINT_FMT for size_t args. · acaccb37
      Damien George authored
      Ideally we'd use %zu for size_t args, but that's unlikely to be
      supported by all runtimes, and we would then need to implement it in
      mp_printf.  So the simplest and most portable option is to use %u
      and cast the argument to uint (= unsigned int).
      
      Note: the reason for the change is that UINT_FMT can be %llu (a size
      suitable for mp_uint_t), which is wider than size_t and prints
      incorrect results.
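
      For example (total_bytes standing in for any size_t value):

          // portable: print a size_t with %u after casting
          mp_printf(&mp_plat_print, "GC total: %u bytes\n", (unsigned int)total_bytes);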
  26. Dec 17, 2015
  27. Dec 02, 2015