  1. Dec 08, 2017
  2. Dec 07, 2017
  3. Nov 29, 2017
  4. Oct 04, 2017
      all: Remove inclusion of internal py header files. · a3dc1b19
      Damien George authored
      Header files that are considered internal to the py core and should not
      normally be included directly are:
          py/nlr.h - internal nlr configuration and declarations
          py/bc0.h - contains bytecode macro definitions
          py/runtime0.h - contains basic runtime enums
      
      Instead, include one of the following top-level header files:
          py/obj.h - includes runtime0.h and defines everything to use the
              mp_obj_t type
          py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
              and defines everything to use the general runtime support functions
      
      Additional, specific headers (e.g. py/objlist.h) can be included if needed.
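To make the rule concrete, here is an illustrative sketch (not taken from a specific file) of what a source file in the py core would include after this change:

```c
// Preferred: py/runtime.h pulls in mpstate.h (and hence nlr.h), obj.h
// and runtime0.h transitively, so none of those need direct inclusion.
#include "py/runtime.h"

// A specific object header may still be included when needed:
#include "py/objlist.h"

// Avoid: direct inclusion of internal headers such as
// py/nlr.h, py/bc0.h or py/runtime0.h.
```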
  5. Aug 15, 2017
  6. Jul 31, 2017
  7. Jul 12, 2017
  8. Apr 12, 2017
      py/gc: Execute finaliser code in a protected environment. · c7e8c6f7
      Damien George authored
      If a finaliser raises an exception then it must not propagate through the
      GC sweep function.  This patch protects against such a thing by running
      finaliser code via the mp_call_function_1_protected call.
      
      This patch also adds scheduler lock/unlock calls around the finaliser
      execution to further protect against any possible reentrancy issues: the
      memory manager is already locked when doing a collection, but we also don't
      want to allow any scheduled code to run, KeyboardInterrupts to interrupt
      the code, nor threads to switch.
  9. Aug 26, 2016
  10. Jul 20, 2016
      py/gc: Implement GC running by allocation threshold. · 93e353e3
      Paul Sokolovsky authored
      Currently, MicroPython runs the GC only when it fails to allocate a block
      of memory, which happens when the heap is exhausted. However, that policy
      can't work well with "infinite" heaps, e.g. ones backed by virtual
      memory: there will be a lot of swap thrashing long before the VM is
      exhausted. In such cases an "allocation threshold" policy is used
      instead: a GC is run after some number of allocations have been made.
      Details vary; for example, the number or total amount of allocations can
      be used, the threshold may be self-adjusting based on the GC outcome, etc.
      
      This change implements a simple variant of such a policy for MicroPython.
      The amount of memory allocated so far is used as the threshold measure,
      which makes it useful for the typical small, finite-size heaps used by
      MicroPython ports. Such a GC policy is indeed useful for these kinds of
      heaps too, as it allows better control of fragmentation. For example, if
      the threshold is set to half the heap size, then for an application which
      usually makes a large number of small allocations, this will (try to)
      keep half of the heap in a nicely defragmented state for an occasional
      large allocation.
      
      For an application which doesn't exhibit such behaviour, there won't be
      any visible effect, except for the GC running more frequently, which may
      affect performance. To address this, the GC threshold is configurable,
      and is off by default. It is configured with a
      gc.threshold(amount_in_bytes) call (and can be queried by calling it
      without an argument).
  11. Jun 30, 2016
  12. Jun 28, 2016
  13. May 12, 2016
  14. May 11, 2016
  15. Dec 27, 2015
      py/gc: Improve mark/sweep debug output. · 3ea03a11
      Paul Sokolovsky authored
      Previously, mark operations weren't logged at all, although it's quite
      useful to see the cascade of marks in cases of over-marking (and in other
      cases too). Previously, a sweep was logged for each block of an object in
      memory, which doesn't make much sense and just leads to longer output
      that is harder for a human to parse. Instead, log each sweep only once
      per object. This matches other memory-manager operations: an object is
      allocated, then freed; or an object is allocated, then marked, and
      otherwise swept (one log entry per operation, with the same memory
      address in each case).
  16. Dec 18, 2015
      py/gc: When printing info, use %u instead of UINT_FMT for size_t args. · acaccb37
      Damien George authored
      Ideally we'd use %zu for size_t args, but that's unlikely to be supported
      by all runtimes, and we would then need to implement it in mp_printf.
      So the simplest and most portable option is to use %u and cast the
      argument to uint (= unsigned int).
      
      Note: reason for the change is that UINT_FMT can be %llu (size suitable
      for mp_uint_t) which is wider than size_t and prints incorrect results.
  17. Dec 17, 2015
  18. Dec 02, 2015
  19. Nov 29, 2015
  20. Nov 07, 2015
      py: Clear finalizer flag when calling gc_free. · 7f3c0d1e
      Dave Hylands authored
      Currently, the only place that clears the bit is in gc_collect.
      So if a block with a finalizer is allocated, and subsequently
      freed, and then the block is reallocated with no finalizer then
      the bit remains set.
      
      This could also be fixed by having gc_alloc clear the bit, but I'm
      pretty sure that free is called far less often than alloc, so doing
      it in free is more efficient.
  21. Sep 04, 2015
  22. Jul 14, 2015
      py: Improve allocation policy of qstr data. · ade9a052
      Damien George authored
      Previous to this patch all interned strings lived in their own malloc'd
      chunk.  On average this wastes N/2 bytes per interned string, where N is
      the number-of-bytes for a quanta of the memory allocator (16 bytes on 32
      bit archs).
      
      With this patch interned strings are concatenated into the same malloc'd
      chunk when possible.  Such chunks are enlarged in place when possible,
      and shrunk to fit when a new chunk is needed.
      
      RAM savings with this patch are highly varied, but should always show an
      improvement (unless only 3 or 4 strings are interned).  New version
      typically uses about 70% of previous memory for the qstr data, and can
      lead to savings of around 10% of total memory footprint of a running
      script.
      
      Costs about 120 bytes code size on Thumb2 archs (depends on how many
      calls to gc_realloc are made).
  23. Apr 16, 2015
  24. Apr 03, 2015
  25. Feb 07, 2015
  26. Jan 12, 2015
  27. Jan 11, 2015
  28. Jan 08, 2015
  29. Jan 07, 2015