    Damien George authored:
    This benchmarking test suite is intended to be run on any MicroPython
    target.  As such all tests are parameterised with N and M: N is the
    approximate CPU frequency (in MHz) of the target and M is the approximate
    amount of heap memory (in kbytes) available on the target.  When running
    the benchmark suite these parameters must be specified and then each test
    is tuned to run on that target in a reasonable time (<1 second).
    
    The test scripts are not standalone: they require adding some extra code at
    the end to run the test with the appropriate parameters.  This is done
    automatically by the run-perfbench.py script, in such a way that imports
    are minimised (so the tests can be run on targets without filesystem
    support).
    
    To interface with the benchmarking framework, each test provides a
    bm_params dict and a bm_setup function, with the latter taking a set of
    parameters (chosen based on N, M) and returning a pair of functions, one to
    run the test and one to get the results.
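
    As an illustration only, a test following this interface might look
    roughly like the sketch below.  The parameter values and the loop body
    are invented for the example (they are not from a real test), and it
    assumes the results function returns a (normalisation factor, value)
    pair, with the value later checked against CPython:

        bm_params = {
            (50, 25): (100,),        # (N, M) -> per-test parameters
            (100, 100): (1000,),
            (1000, 1000): (10000,),
        }

        def bm_setup(params):
            (nloop,) = params
            state = None

            def run():
                # The benchmark body: do some deterministic work.
                nonlocal state
                s = 0
                for i in range(nloop):
                    s += i * i
                state = s

            def result():
                # Normalisation factor and a value to verify the run.
                return nloop, state

            return run, result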
    
    When a test is run, the number of microseconds taken by the test is
    recorded.  This is then converted into a benchmark score by inverting it
    (so a higher number is faster) and normalising it with an appropriate
    factor (based roughly on the amount of work done by the test, eg the
    number of iterations).
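
    As a rough sketch of that conversion (the exact scaling used by
    run-perfbench.py may differ), the score for a single run could be
    computed as:

        def compute_score(t_us, norm):
            # Invert the elapsed time (in microseconds) so faster runs
            # score higher, scaled by the per-test normalisation factor.
            return norm * 1e6 / t_us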
    
    Test outputs are also compared against a "truth" value, computed by running
    the test with CPython.  This provides a basic way of making sure the test
    actually ran correctly.
    
    Each test is run multiple times, and the average and standard deviation
    of the results are computed.  These are output as a summary of the test.
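
    For illustration, summarising the repeated runs of one test amounts to
    something like the following (whether run-perfbench.py uses the sample
    or population standard deviation is not specified here):

        def summarise(scores):
            # Mean and (population) standard deviation over repeated runs.
            n = len(scores)
            avg = sum(scores) / n
            var = sum((s - avg) ** 2 for s in scores) / n
            return avg, var ** 0.5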
    
    To make comparisons of performance across different runs, the
    run-perfbench.py script also includes a diff mode that reads in the output
    of two previous runs and computes the difference in performance.  Results
    are reported as a percentage change in performance, along with a combined
    standard deviation to indicate whether the measured change is larger than
    the noise in the benchmarking.
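
    Conceptually, the per-test comparison in diff mode looks roughly like
    the sketch below; the real script's error propagation may differ, and
    combining the relative deviations in quadrature is just one reasonable
    choice:

        def diff_scores(avg_a, sd_a, avg_b, sd_b):
            # Percentage change in performance from run A to run B.
            change = 100.0 * (avg_b - avg_a) / avg_a
            # Combined relative standard deviation, in the same percentage
            # terms, to judge whether the change exceeds the noise.
            noise = 100.0 * ((sd_a / avg_a) ** 2 + (sd_b / avg_b) ** 2) ** 0.5
            return change, noise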
    
    Example invocations for PC, pyboard and esp8266 targets respectively:
    
        $ ./run-perfbench.py 1000 1000
        $ ./run-perfbench.py --pyboard 100 100
        $ ./run-perfbench.py --pyboard --device /dev/ttyUSB0 50 25
    This directory contains tests for various functionality areas of MicroPython.
    To run all stable tests, run the "run-tests" script in this directory.
    
    Tests of capabilities not supported on all platforms should be written
    to check for the capability being present. If it is not, the test
    should merely output 'SKIP' followed by the line terminator, and call
    sys.exit() to raise SystemExit, instead of attempting to test the
    missing capability. The testing framework (run-tests in this
    directory, test_main.c in qemu_arm) recognizes this as a skipped test.
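
    For example, a test that needs a module which may be missing on some
    ports would typically start with a guard like this (the module name is
    just a placeholder):

        import sys

        try:
            import framebuf  # any capability the test depends on
        except ImportError:
            print("SKIP")
            sys.exit()

        # ... the actual test follows here ...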
    
    There are a few features for which this mechanism cannot be used to
    condition a test. The run-tests script uses small scripts in the
    feature_check directory to check whether each such feature is present,
    and skips the relevant tests if not.
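
    As an illustration only (not the exact contents of that directory), a
    feature_check script can be as simple as performing the operation in
    question and letting run-tests observe whether it succeeds:

        # Hypothetical check: if the target lacks float support this script
        # fails to run, and run-tests then skips the float-dependent tests.
        print(1.5 + 2.5)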
    
    When creating new tests, anything that relies on float support should go in the
    float/ subdirectory.  Anything that relies on import x, where x is not a built-in
    module, should go in the import/ subdirectory.