.. _epicardium_api:

Epicardium API
==============

.. ifconfig:: not has_hawkmoth

   .. warning::

      This version of the documentation was built without `Hawkmoth`_ working
      (most likely python3-clang was not installed).  This means no
      documentation for the Epicardium API was generated.

.. _Hawkmoth: https://github.com/jnikula/hawkmoth

This page details all available *Epicardium API* functions.  All these
functions are defined in

.. code-block:: c++

   #include "epicardium.h"

.. c:autodoc:: epicardium/epicardium.h
.. _epicardium_internal_apis:

Epicardium Internal APIs
========================

Core OS APIs
------------

.. c:autodoc:: epicardium/os/core.h
Mutex
=====

On top of FreeRTOS, we have our own mutex implementation.  **Never use the
FreeRTOS mutexes directly!  Always use this abstraction layer instead.**  This
mutex implementation tries to make reasoning about program flow and locking
behavior easier and, most importantly, tries to help with debugging possible
deadlocks.
Design
------

There are a few guiding design principles:

- Mutexes can only be used from tasks, **never** from interrupts!
- Timers can use mutexes, but only with :c:func:`mutex_trylock`, **never**
  with :c:func:`mutex_lock` (because they are not allowed to block).
- Locking can *never* fail (if it does, we consider this a fatal error ⇒ panic).
- No recursive locking.
- An unlock can only occur from the task which previously acquired the mutex.
- An unlock is only allowed if the mutex was previously acquired.

For a more elaborate explanation of the rationale behind these rules, take a
look at the :ref:`mutex-design-reasons`.
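In practice, following these rules looks something like the sketch below.  It
is only an illustration: ``struct mutex``, ``mutex_create()``, and
``do_protected_work()`` are assumed names, not verified against the
definitions documented below.

.. code-block:: c++

   #include "os/mutex.h"  /* assumed include path */

   static struct mutex lock;

   void example_task(void *param)
   {
           /* Create the mutex once, before anyone can contend for it. */
           mutex_create(&lock);

           for (;;) {
                   /* Rule: lock only from a task, never recursively. */
                   mutex_lock(&lock);

                   do_protected_work();

                   /* Rule: unlock exactly once, from the acquiring task. */
                   mutex_unlock(&lock);
           }
   }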
Definitions
-----------

.. c:autodoc:: epicardium/os/mutex.h

.. _mutex-design-reasons:

Reasons for this Design
-----------------------
Locking can *never* fail
^^^^^^^^^^^^^^^^^^^^^^^^

This might seem like a bold claim at first, but in the end it is just a matter
of definition and shifting responsibilities.  Instead of requiring all code to
be robust against a locking attempt failing, we require all code to properly
lock and unlock its mutexes and thus never produce a situation where locking
would fail.

Because all code using any of the mutexes is contained in the Epicardium
code-base, we can - *hopefully* - audit it for proper behavior ahead of time
and thus don't need to add code to ensure correctness at runtime.  This makes
downstream code easier to read and easier to reason about.
History of this project has shown that most code does not properly deal with
locking failures anyway: there was code simply skipping the mutexed action on
failure, code blocking a module entirely until reboot, and, worst of all, code
exposing the locking failure to 'user-space' (Pycardium) instead of retrying.
This has led to spurious errors where technically there would not need to be
any.
Only from tasks
^^^^^^^^^^^^^^^

Locking a mutex from an ISR, a FreeRTOS software timer, or any other context
which does not allow blocking is complicated to do right.  The biggest
difficulty is that a task might be holding the mutex during execution of such a
context and there is no way to wait for it to release the mutex.  This requires
careful design of the program flow to choose an alternative option in such a
case.  A common approach is to 'outsource' the relevant parts of the code into
an 'IRQ worker', which is essentially just a task that waits for the IRQ to
wake it up and then attempts to lock the mutex.
If you absolutely do need it (and for legacy reasons), software timers *can*
lock a mutex using :c:func:`mutex_trylock` (which never blocks).  I strongly
recommend **not** doing that, though.  As shown above, you will have to deal
with the case of the mutex being held by another task, and it is very well
possible that your timer will get starved of the mutex because the scheduler
has no knowledge of its intentions.  In most cases, it is a better idea to use
a task and attempt locking using :c:func:`mutex_lock`.

.. todo::

   We might introduce a generic IRQ worker queue system at some point.
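For illustration, a timer callback honoring the no-blocking rule could be
sketched as follows.  This assumes :c:func:`mutex_trylock` returns whether the
lock was acquired; ``do_quick_work()`` and ``some_mutex`` are placeholders.

.. code-block:: c++

   void timer_callback(TimerHandle_t timer)
   {
           /* Timers must never block, so only a try-lock is allowed here. */
           if (!mutex_trylock(&some_mutex)) {
                   /* Mutex is held by a task; skip this tick, retry later. */
                   return;
           }

           do_quick_work();

           mutex_unlock(&some_mutex);
   }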
No recursive locking
^^^^^^^^^^^^^^^^^^^^

Recursive locking refers to the ability to 'reacquire' a mutex already held by
the current task, deeper down in the call-chain.  Only the outermost unlock
will actually release the mutex.  This feature is sometimes implemented to
allow more elegant abstractions where downstream code does not need to know
about the mutexes upstream code uses and can still also create a larger region
where the same mutex is held.

But exactly by hiding the locking done by a function, these abstractions make
it hard to trace locking chains and in some cases even make it impossible to
create provably correct behavior.  As an alternative, I would suggest using
different mutexes for the different levels of abstraction.  This also helps
keep each mutex separated and 'local' to its purpose.
Only unlock from the acquiring task
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Because of the mutex locking semantics mentioned above, there should never be
a need to force-unlock a foreign mutex.  Even in cases of failures, all code
should still properly release all mutexes it holds.  One notable exception is
``panic()``, which will abort all ongoing operations anyway.
Only unlock once after acquisition
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Justified with an argument of robustness, the :c:func:`mutex_unlock` call is
sometimes written in a way that allows unlocking an already unlocked mutex.
But the robustness of downstream code will not really be improved by the
upstream API dealing with arguably invalid usage.  For example, this could
encourage practices like unlocking everything again at the end of a function
"just to be sure".

Instead, code should be written in a way where the lock/unlock pair is
immediately recognizable as belonging together and is thus easily auditable to
have correct locking behavior.  A common pattern to help with readability in
this regard is the *Single Function Exit*, which looks like this:
.. code-block:: cpp

   int function()
   {
           int ret;

           mutex_lock(&some_mutex);

           ret = foo();
           if (ret) {
                   /* Return with an error code */
                   ret = -ENODEV;
                   goto out_unlock;
           }

           ret = bar();
           if (ret) {
                   /* Return the return value from bar */
                   goto out_unlock;
           }

           ret = 0;
   out_unlock:
           mutex_unlock(&some_mutex);
           return ret;
   }
.. _epicardium_api_overview:

Overview
========

Epicardium, the "main" firmware running on core 0 (more about our dual-core
design can be found :ref:`here <firmware_overview>`), exposes a lot of
functionality to user-code via the so-called *Epicardium API*.  This API
consists of a number of calls that can be issued by core 1 using an
auto-generated library.
API Design
----------

.. note::

   This is the current design.  We might adjust it in the future to allow for
   more performance in cases where this approach is too slow.  This will most
   likely be the addition of some form of asynchronous calls.

The API is strictly synchronous.  This means an API call looks exactly the
same as calling any other function.  Internally, the call will wake up
Epicardium, where the call will be dispatched.  It will then block until
Epicardium has finished executing the call, and return whatever the call
produces as a return value.  In code:
.. code-block:: c++

   #include "epicardium.h"

   int main(void)
   {
           /* ... */

           /* Call the API to write to UART. */
           epic_uart_write_str("Hello from core 1!\r\n", 20);

           /*
            * Call the API to receive a byte from UART.
            * This will block until at least one character is received.
            */
           char chr = epic_uart_read_chr();

           /* ... */
   }
Internals
---------

In most cases, you won't need to care about the actual implementation of the
API, as it is well hidden inside the auto-generated library.  If you want to
know anyway, here is a rough overview:

The current design is based around a shared memory region, the semaphore
peripherals provided by the MAX32666, and the SEV/WFE mechanism.  When issuing
a call, this is what happens:
.. image:: ../static/synchroneous-api-call.svg

There is one important thing to note here: right now, the API is only ever
polled for new calls from Epicardium inside the idle task.  This means that if
Epicardium is busy with other things, the API will have the lowest priority.
So far this has not been an issue, as Epicardium is sleeping most of the time
anyway.  If it becomes one in the future, we will have to introduce another
synchronization point.
Sensor Streams
==============

Sensor drivers can make their data available to core 1 in a stream-like
format.  This allows batch-reading many samples and should reduce pressure on
the Epicardium API.  Sensor streams are read on core 1 using
:c:func:`epic_stream_read`.

This page intends to document how to add this stream interface to a sensor
driver.  It also serves as a reference of existing streams.  For that, take a
look at the definitions in the :c:type:`stream_descriptor` enum.
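For illustration, batch-reading from core 1 might look roughly like this.
Note that ``SD_EXAMPLE``, ``struct example_sample``, ``handle_sample()``, and
the exact signature and semantics of :c:func:`epic_stream_read` are
placeholders here, not verified definitions; check ``epicardium.h``.

.. code-block:: c++

   #include "epicardium.h"

   void drain_example_stream(void)
   {
           struct example_sample buf[16];
           int n;

           /* Fetch up to 16 queued samples with a single API call. */
           n = epic_stream_read(SD_EXAMPLE, buf, sizeof(buf));
           for (int i = 0; i < n; i++) {
                   handle_sample(&buf[i]);
           }
   }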
Adding a new Stream
-------------------

The list of possible sensor streams must be known at compile time.  Each
stream gets a unique ID in the :c:type:`stream_descriptor` enum.  Please do
not assign IDs manually, but instead let the enum assign sequential IDs.
:c:macro:`SD_MAX` must always be the highest stream ID.  Additionally, please
document what the stream is for using a doc-comment so it shows up on this
page.

When a sensor driver enables data collection, it should also register its
respective stream.  This is done using a :c:type:`stream_info` object.  Pass
this object to :c:func:`stream_register` to make your stream available.  Your
driver must guarantee that the :c:member:`stream_info.queue` handle stays
valid until deregistration using :c:func:`stream_deregister`.
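A driver's registration could be sketched as follows.  Apart from
:c:member:`stream_info.queue`, :c:func:`stream_register`, and
:c:func:`stream_deregister`, every name here (the include path, ``SD_EXAMPLE``,
``struct example_sample``, and the argument order) is an assumption for
illustration; consult the definitions below for the real interface.

.. code-block:: c++

   #include "modules/stream.h"  /* assumed include path */

   static struct stream_info example_stream;

   int example_sensor_enable(void)
   {
           /* The queue must stay valid until stream_deregister(). */
           example_stream.queue =
                   xQueueCreate(16, sizeof(struct example_sample));
           if (example_stream.queue == NULL)
                   return -ENOMEM;

           return stream_register(SD_EXAMPLE, &example_stream);
   }

   int example_sensor_disable(void)
   {
           int ret = stream_deregister(SD_EXAMPLE, &example_stream);
           vQueueDelete(example_stream.queue);
           return ret;
   }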
Definitions
-----------

.. c:autodoc:: epicardium/modules/stream.h
Copyright (c) 2016-2017, Jani Nikula <jani@nikula.org>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
all:
$(MAKE) -C .. all
.DEFAULT:
$(MAKE) -C .. $@
# -*- makefile -*-
dir := hawkmoth
CLEAN := $(CLEAN) $(dir)/hawkmoth.pyc $(dir)/cautodoc.pyc $(dir)/__init__.pyc
Hawkmoth - Sphinx Autodoc for C
===============================

Hawkmoth is a minimalistic Sphinx_ `C Domain`_ autodoc directive extension to
incorporate formatted C source code comments written in reStructuredText_ into
Sphinx-based documentation.  It uses the Clang Python Bindings for parsing,
and generates C Domain directives for C API documentation, and more.  In
short, Hawkmoth is Sphinx Autodoc for C.

Hawkmoth aims to be a compelling alternative for documenting C projects using
Sphinx, mainly through its simplicity of design, implementation and use.

.. _Sphinx: http://www.sphinx-doc.org

.. _C Domain: http://www.sphinx-doc.org/en/stable/domains.html

.. _reStructuredText: http://docutils.sourceforge.net/rst.html
Example
-------

Given C source code with rather familiar looking documentation comments::

  /**
   * Get foo out of bar.
   */
  void foobar();

and a directive in the Sphinx project::

  .. c:autodoc:: filename.c

you can incorporate code documentation into Sphinx.  It's as simple as that.

You can document functions, their parameters and return values, structs,
unions, their members, macros, function-like macros, enums, enumeration
constants, typedefs, variables, as well as have generic documentation comments
not attached to any symbols.
Documentation
-------------

Documentation on how to install and configure Hawkmoth, and write
documentation comments, with examples, is available in the ``doc`` directory
in the source tree, obviously in Sphinx format and using the directive
extension.  Pre-built documentation `showcasing what Hawkmoth can do`_ is
available at `Read the Docs`_.

.. _showcasing what Hawkmoth can do: https://hawkmoth.readthedocs.io/en/latest/examples.html

.. _Read the Docs: https://hawkmoth.readthedocs.io/
Installation
------------

You can install Hawkmoth from PyPI_ with::

  pip install hawkmoth

You'll additionally need to install Clang and the Python 3 bindings for it
through your distro's package manager; they are not available via PyPI.  For
further details, see the documentation.

Alternatively, installation packages are available for:

* `Arch Linux`_

In Sphinx ``conf.py``, add ``hawkmoth`` to ``extensions``, and point
``cautodoc_root`` at the source tree.  See the extension documentation for
details.

.. _PyPI: https://pypi.org/project/hawkmoth/

.. _Arch Linux: https://aur.archlinux.org/packages/?K=hawkmoth
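In ``conf.py`` that amounts to something like the following (the source path
is just an example):

.. code-block:: python

   import os

   # Enable the Hawkmoth autodoc extension.
   extensions = ['hawkmoth']

   # Example path; point this at the root of your C sources.
   cautodoc_root = os.path.abspath('../src')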
Development and Contributing
----------------------------

Hawkmoth source code is available on GitHub_.  The development version can be
checked out via ``git`` using this command::

  git clone https://github.com/jnikula/hawkmoth.git

Please file bugs and feature requests as GitHub issues.  Contributions are
welcome both as emailed patches to the mailing list and as pull requests.

.. _GitHub: https://github.com/jnikula/hawkmoth
Dependencies
------------

- Python 3.4
- Sphinx 3
- Clang 6.0
- Python 3 Bindings for Clang 6.0
- sphinx-testing 1.0.0 (for development)

These are the versions Hawkmoth is currently being developed and tested
against.  Other versions might work, but no guarantees.
License
-------

Hawkmoth is free software, released under the `2-Clause BSD License`_.

.. _2-Clause BSD License: https://opensource.org/licenses/BSD-2-Clause

Contact
-------

IRC channel ``#hawkmoth`` on freenode_.

Mailing list hawkmoth@freelists.org.  Subscription information at the `list
home page`_.

.. _freenode: https://freenode.net/

.. _list home page: https://www.freelists.org/list/hawkmoth
0.7.0
# Copyright (c) 2016-2017, Jani Nikula <jani@nikula.org>
# Licensed under the terms of BSD 2-Clause, see LICENSE for details.
"""
Hawkmoth
========

Sphinx C Domain autodoc directive extension.
"""

import glob
import os
import re
import stat
import subprocess
import sys

from docutils import nodes, statemachine
from docutils.parsers.rst import directives, Directive
from docutils.statemachine import ViewList
from sphinx.util.nodes import nested_parse_with_titles
from sphinx.util.docutils import switch_source_input
from sphinx.util import logging

from hawkmoth.parser import parse, ErrorLevel

with open(os.path.join(os.path.abspath(os.path.dirname(__file__)),
                       'VERSION')) as version_file:
    __version__ = version_file.read().strip()
class CAutoDocDirective(Directive):
    """Extract all documentation comments from the specified file"""
    required_arguments = 1
    optional_arguments = 1
    logger = logging.getLogger(__name__)

    # Allow passing a variable number of file patterns as arguments
    final_argument_whitespace = True

    option_spec = {
        'compat': directives.unchanged_required,
        'clang': directives.unchanged_required,
    }
    has_content = False

    # Map verbosity levels to logger levels.
    _log_lvl = {ErrorLevel.ERROR: logging.LEVEL_NAMES['ERROR'],
                ErrorLevel.WARNING: logging.LEVEL_NAMES['WARNING'],
                ErrorLevel.INFO: logging.LEVEL_NAMES['INFO'],
                ErrorLevel.DEBUG: logging.LEVEL_NAMES['DEBUG']}
    def __display_parser_diagnostics(self, errors):
        env = self.state.document.settings.env

        for (severity, filename, lineno, msg) in errors:
            if filename:
                toprint = '{}:{}: {}'.format(filename, lineno, msg)
            else:
                toprint = '{}'.format(msg)

            if severity.value <= env.app.verbosity:
                self.logger.log(self._log_lvl[severity], toprint,
                                location=(env.docname, self.lineno))

    def __parse(self, viewlist, filename):
        env = self.state.document.settings.env

        compat = self.options.get('compat', env.config.cautodoc_compat)
        clang = self.options.get('clang', env.config.cautodoc_clang)

        comments, errors = parse(filename, compat=compat, clang=clang)

        self.__display_parser_diagnostics(errors)

        for (comment, meta) in comments:
            lineoffset = meta['line'] - 1
            lines = statemachine.string2lines(comment, 8,
                                              convert_whitespace=True)
            for line in lines:
                viewlist.append(line, filename, lineoffset)
                lineoffset += 1
    def run(self):
        env = self.state.document.settings.env

        result = ViewList()

        for pattern in self.arguments[0].split():
            filenames = glob.glob(env.config.cautodoc_root + '/' + pattern)
            if len(filenames) == 0:
                fmt = 'Pattern "{pat}" does not match any files.'
                self.logger.warning(fmt.format(pat=pattern),
                                    location=(env.docname, self.lineno))
                continue

            for filename in filenames:
                mode = os.stat(filename).st_mode
                if stat.S_ISDIR(mode):
                    fmt = 'Path "{name}" matching pattern "{pat}" is a directory.'
                    self.logger.warning(fmt.format(name=filename, pat=pattern),
                                        location=(env.docname, self.lineno))
                    continue

                # Tell Sphinx about the dependency and parse the file
                env.note_dependency(os.path.abspath(filename))
                self.__parse(result, filename)

        # Parse the extracted reST
        with switch_source_input(self.state, result):
            node = nodes.section()
            nested_parse_with_titles(self.state, result, node)

        return node.children
def setup(app):
    app.require_sphinx('3.0')
    app.add_config_value('cautodoc_root', app.confdir, 'env')
    app.add_config_value('cautodoc_compat', None, 'env')
    app.add_config_value('cautodoc_clang', None, 'env')
    app.add_directive_to_domain('c', 'autodoc', CAutoDocDirective)

    return dict(version=__version__,
                parallel_read_safe=True, parallel_write_safe=True)
# Copyright (c) 2016-2019 Jani Nikula <jani@nikula.org>
# Licensed under the terms of BSD 2-Clause, see LICENSE for details.
"""
Hawkmoth parser debug tool
==========================

python3 -m hawkmoth
"""

import argparse
import sys

from hawkmoth.parser import parse

def main():
    parser = argparse.ArgumentParser(prog='hawkmoth', description="""
    Hawkmoth parser debug tool. Print the documentation comments extracted
    from FILE, along with the generated C Domain directives, to standard
    output. Include metadata with verbose output.""")
    parser.add_argument('file', metavar='FILE', type=str, action='store',
                        help='The C source or header file to parse.')
    parser.add_argument('--compat',
                        choices=['none',
                                 'javadoc-basic',
                                 'javadoc-liberal',
                                 'kernel-doc'],
                        help='Compatibility options. See cautodoc_compat.')
    parser.add_argument('--clang', metavar='PARAM[,PARAM,...]',
                        help='Arguments to pass to clang. See cautodoc_clang.')
    parser.add_argument('--verbose', dest='verbose', action='store_true',
                        help='Verbose output.')
    args = parser.parse_args()

    docs, errors = parse(args.file, compat=args.compat, clang=args.clang)

    for (doc, meta) in docs:
        if args.verbose:
            print('# {}'.format(meta))
        print(doc)

    for (severity, filename, lineno, msg) in errors:
        if filename:
            print('{}: {}:{}: {}'.format(severity.name,
                                         filename, lineno, msg),
                  file=sys.stderr)
        else:
            print('{}: {}'.format(severity.name, msg), file=sys.stderr)

main()
# Copyright (c) 2016-2017 Jani Nikula <jani@nikula.org>
# Copyright (c) 2018-2020 Bruno Santos <brunomanuelsantos@tecnico.ulisboa.pt>
# Licensed under the terms of BSD 2-Clause, see LICENSE for details.
"""
Documentation comment extractor
===============================

This module extracts relevant documentation comments, optionally reformatting
them in reST syntax.

This is the part that uses Clang Python Bindings to extract documentation
comments from C source code. This module does not depend on Sphinx.

There are two passes:

#. Pass over the tokens to find all the comments, including ones that aren't
   attached to cursors.

#. Pass over the cursors to document them.

There is minimal syntax parsing or input conversion:

* Identification of documentation comment blocks, and stripping the comment
  delimiters (``/**`` and ``*/``) and continuation line prefixes (e.g. ``␣*␣``).

* Identification of function-like macros.

* Indentation for reST C Domain directive blocks.

* An optional external filter may be invoked to support different syntaxes.
  These filters are expected to translate the comment into the reST format.

Otherwise, documentation comments are passed through verbatim.
"""

import enum
import re
import sys

from clang.cindex import CursorKind, TypeKind
from clang.cindex import Index, TranslationUnit
from clang.cindex import SourceLocation, SourceRange
from clang.cindex import TokenKind, TokenGroup

from hawkmoth.util import docstr, doccompat

class ErrorLevel(enum.Enum):
    """
    Supported error levels in inverse numerical order of severity. The values
    are chosen so that they map directly to a 'verbosity level'.
    """
    ERROR = 0
    WARNING = 1
    INFO = 2
    DEBUG = 3
def comment_extract(tu):

    # FIXME: How to handle top level comments above a cursor that it does *not*
    # describe? Parsing @file or @doc at this stage would not be a clean design.
    # One idea is to use '/***' to denote them, but that might throw off editor
    # highlighting. The workaround is to follow the top level comment with an
    # empty '/**/' comment that gets attached to the cursor.

    top_level_comments = []
    comments = {}
    cursor = None
    current_comment = None
    for token in tu.get_tokens(extent=tu.cursor.extent):
        # handle all comments we come across
        if token.kind == TokenKind.COMMENT:
            # if we already have a comment, it wasn't related to a cursor
            if current_comment and docstr.is_doc(current_comment.spelling):
                top_level_comments.append(current_comment)
            current_comment = token
            continue

        # Store off the token's cursor for a slight performance improvement
        # instead of accessing the `cursor` property multiple times.
        token_cursor = token.cursor

        # cursors that are 1) never documented themselves, and 2) allowed
        # between comment and the actual cursor being documented
        if (token_cursor.kind == CursorKind.INVALID_FILE or
            token_cursor.kind == CursorKind.TYPE_REF or
            token_cursor.kind == CursorKind.PREPROCESSING_DIRECTIVE or
            token_cursor.kind == CursorKind.MACRO_INSTANTIATION):
            continue

        if cursor is not None and token_cursor == cursor:
            continue

        cursor = token_cursor

        # Note: current_comment may be None
        if current_comment is not None and docstr.is_doc(current_comment.spelling):
            comments[cursor.hash] = current_comment
        current_comment = None

    # comment at the end of file
    if current_comment and docstr.is_doc(current_comment.spelling):
        top_level_comments.append(current_comment)

    return top_level_comments, comments
def _result(comment, cursor=None, fmt=docstr.Type.TEXT, nest=0,
            name=None, ttype=None, args=None, compat=None):

    # FIXME: docstr.generate changes the number of lines in output. This
    # impacts the error reporting via meta['line']. Adjust meta to take this
    # into account.

    doc = docstr.generate(text=comment.spelling, fmt=fmt,
                          name=name, ttype=ttype, args=args, transform=compat)

    doc = docstr.nest(doc, nest)

    meta = {'line': comment.extent.start.line}
    if cursor:
        meta['cursor.kind'] = cursor.kind
        meta['cursor.displayname'] = cursor.displayname
        meta['cursor.spelling'] = cursor.spelling

    return [(doc, meta)]
# Return None for simple macros, a potentially empty list of arguments for
# function-like macros
def _get_macro_args(cursor):
    if cursor.kind != CursorKind.MACRO_DEFINITION:
        return None

    tokens = cursor.get_tokens()

    # Use the first two tokens to make sure this starts with 'IDENTIFIER('
    one = next(tokens)
    two = next(tokens, None)
    if two is None or one.extent.end != two.extent.start or two.spelling != '(':
        return None

    # Naïve parsing of macro arguments
    # FIXME: This doesn't handle GCC named vararg extension FOO(vararg...)
    args = []
    for token in tokens:
        if token.spelling == ')':
            return args
        elif token.spelling == ',':
            continue
        elif token.kind == TokenKind.IDENTIFIER:
            args.append(token.spelling)
        elif token.spelling == '...':
            args.append(token.spelling)
        else:
            break

    return None
def _recursive_parse(comments, cursor, nest, compat):
    comment = comments[cursor.hash]
    name = cursor.spelling
    ttype = cursor.type.spelling

    if cursor.kind == CursorKind.MACRO_DEFINITION:
        # FIXME: check args against comment
        args = _get_macro_args(cursor)
        fmt = docstr.Type.MACRO if args is None else docstr.Type.MACRO_FUNC

        return _result(comment, cursor=cursor, fmt=fmt,
                       nest=nest, name=name, args=args, compat=compat)

    elif cursor.kind in [CursorKind.VAR_DECL, CursorKind.FIELD_DECL]:
        if cursor.kind == CursorKind.VAR_DECL:
            fmt = docstr.Type.VAR
        else:
            fmt = docstr.Type.MEMBER

        # If this is an array, the dimensions should be applied to the name,
        # not the type.
        dims = ttype.rsplit(' ', 1)[-1]
        if dims.startswith('[') and dims.endswith(']'):
            ttype = ttype.rsplit(' ', 1)[0]
            name = name + dims

        # If this is a function pointer, or an array of function pointers, the
        # name should be within the parenthesis as in (*name) or (*name[N]).
        fptr_type = re.sub(r'\((\*+)(\[[^]]*\])?\)', r'(\1{}\2)'.format(name),
                           ttype, count=1)
        if fptr_type != ttype:
            name = fptr_type
            ttype = ''

        return _result(comment, cursor=cursor, fmt=fmt,
                       nest=nest, name=name, ttype=ttype, compat=compat)

    elif cursor.kind == CursorKind.TYPEDEF_DECL:
        # FIXME: function pointers typedefs.
        fmt = docstr.Type.TYPE

        return _result(comment, cursor=cursor, fmt=fmt,
                       nest=nest, name=ttype, compat=compat)

    elif cursor.kind in [CursorKind.STRUCT_DECL, CursorKind.UNION_DECL,
                         CursorKind.ENUM_DECL]:

        # FIXME:
        # Handle cases where variables are instantiated on type declaration,
        # including anonymous cases. Idea is that if there is a variable
        # instantiation, the documentation should be applied to the variable if
        # the structure is anonymous or to the type otherwise.
        #
        # Due to the new recursiveness of the parser, fixing this here _should_
        # handle all cases (struct, union, enum).

        # FIXME: Handle anonymous enumerators.

        fmts = {CursorKind.STRUCT_DECL: docstr.Type.STRUCT,
                CursorKind.UNION_DECL: docstr.Type.UNION,
                CursorKind.ENUM_DECL: docstr.Type.ENUM}

        fmt = fmts[cursor.kind]

        # name may be empty for typedefs
        result = _result(comment, cursor=cursor, fmt=fmt,
                         nest=nest, name=name if name else ttype,
                         compat=compat)

        nest += 1
        for c in cursor.get_children():
            if c.hash in comments:
                result.extend(_recursive_parse(comments, c, nest, compat))

        return result

    elif cursor.kind == CursorKind.ENUM_CONSTANT_DECL:
        fmt = docstr.Type.ENUM_VAL

        return _result(comment, cursor=cursor, fmt=fmt,
                       nest=nest, name=name, compat=compat)

    elif cursor.kind == CursorKind.FUNCTION_DECL:
        # FIXME: check args against comment
        # FIXME: children may contain extra stuff if the return type is a
        # typedef, for example
        args = []

        # Only fully prototyped functions will have argument lists to process.
        if cursor.type.kind == TypeKind.FUNCTIONPROTO:
            for c in cursor.get_children():
                if c.kind == CursorKind.PARM_DECL:
                    args.append('{ttype} {arg}'.format(ttype=c.type.spelling,
                                                       arg=c.spelling))

            if cursor.type.is_function_variadic():
                args.append('...')

        fmt = docstr.Type.FUNC
        ttype = cursor.result_type.spelling

        return _result(comment, cursor=cursor, fmt=fmt, nest=nest,
                       name=name, ttype=ttype, args=args, compat=compat)

    # FIXME: If we reach here, nothing matched. This is a warning or even an
    # error and it should be logged, but it should also return an empty list
    # so that it doesn't break. I.e. the parser needs to pass warnings and
    # errors to the Sphinx extension instead of polluting the generated
    # output.
    fmt = docstr.Type.TEXT
    text = 'warning: unhandled cursor {kind} {name}\n'.format(
        kind=str(cursor.kind),
        name=cursor.spelling)

    doc = docstr.generate(text=text, fmt=fmt)

    meta = {
        'line': comment.extent.start.line,
        'cursor.kind': cursor.kind,
        'cursor.displayname': cursor.displayname,
        'cursor.spelling': cursor.spelling
    }

    return [(doc, meta)]
def clang_diagnostics(errors, diagnostics):
    sev = {0: ErrorLevel.DEBUG,
           1: ErrorLevel.DEBUG,
           2: ErrorLevel.WARNING,
           3: ErrorLevel.ERROR,
           4: ErrorLevel.ERROR}

    for diag in diagnostics:
        filename = diag.location.file.name if diag.location.file else None
        errors.extend([(sev[diag.severity], filename,
                        diag.location.line, diag.spelling)])
# return a list of (comment, metadata) tuples
# options - dictionary with directive options
def parse(filename, **options):
    errors = []
    args = options.get('clang')
    if args is not None:
        args = [s.strip() for s in args.split(',') if len(s.strip()) > 0]
        if len(args) == 0:
            args = None

    index = Index.create()

    tu = index.parse(filename, args=args, options=
                     TranslationUnit.PARSE_DETAILED_PROCESSING_RECORD |
                     TranslationUnit.PARSE_SKIP_FUNCTION_BODIES)

    clang_diagnostics(errors, tu.diagnostics)

    top_level_comments, comments = comment_extract(tu)

    result = []
    compat = lambda x: doccompat.convert(x, options.get('compat'))

    for comment in top_level_comments:
        result.extend(_result(comment, compat=compat))

    for cursor in tu.cursor.get_children():
        if cursor.hash in comments:
            result.extend(_recursive_parse(comments, cursor, 0, compat))

    # Sort all elements by order of appearance.
    result.sort(key=lambda r: r[1]['line'])

    return result, errors
# Copyright (c) 2016-2017, Jani Nikula <jani@nikula.org>
# Licensed under the terms of BSD 2-Clause, see LICENSE for details.
"""
Alternative docstring syntax
============================

This module abstracts different compatibility options converting different
syntaxes into 'native' reST ones.
"""

import re

# Basic Javadoc/Doxygen/kernel-doc import
#
# FIXME: One of the design goals of Hawkmoth is to keep things simple. There's
# a fine balance between sticking to that goal and adding compat code to
# facilitate any kind of migration to Hawkmoth. The compat code could be
# turned into a fairly simple plugin architecture, with some basic compat
# builtins, and the users could still extend the compat features to fit their
# specific needs.
def convert(comment, mode):
    """Convert documentation from a supported syntax into reST."""
    # FIXME: try to preserve whitespace better

    if mode == 'javadoc-basic' or mode == 'javadoc-liberal':
        # @param
        comment = re.sub(r"(?m)^([ \t]*)@param([ \t]+)([a-zA-Z0-9_]+|\.\.\.)([ \t]+)",
                         "\n\\1:param\\2\\3:\\4", comment)
        # @param[direction]
        comment = re.sub(r"(?m)^([ \t]*)@param\[([^]]*)\]([ \t]+)([a-zA-Z0-9_]+|\.\.\.)([ \t]+)",
                         "\n\\1:param\\3\\4: *(\\2)* \\5", comment)
        # @return
        comment = re.sub(r"(?m)^([ \t]*)@returns?([ \t]+|$)",
                         "\n\\1:return:\\2", comment)
        # @code/@endcode blocks. Works if the code is indented.
        comment = re.sub(r"(?m)^([ \t]*)@code([ \t]+|$)",
                         "\n::\n", comment)
        comment = re.sub(r"(?m)^([ \t]*)@endcode([ \t]+|$)",
                         "\n", comment)
        # Ignore @brief.
        comment = re.sub(r"(?m)^([ \t]*)@brief[ \t]+", "\n\\1", comment)
        # Ignore groups
        comment = re.sub(r"(?m)^([ \t]*)@(defgroup|addtogroup)[ \t]+[a-zA-Z0-9_]+[ \t]*",
                         "\n\\1", comment)
        comment = re.sub(r"(?m)^([ \t]*)@(ingroup|{|}).*", "\n", comment)

    if mode == 'javadoc-liberal':
        # Liberal conversion of any @tags, will fail for @code etc. but don't
        # care.
        comment = re.sub(r"(?m)^([ \t]*)@([a-zA-Z0-9_]+)([ \t]+)",
                         "\n\\1:\\2:\\3", comment)

    if mode == 'kernel-doc':
        # Basic kernel-doc convert, will document struct members as params, etc.
        comment = re.sub(r"(?m)^([ \t]*)@(returns?|RETURNS?):([ \t]+|$)",
                         "\n\\1:return:\\3", comment)
        comment = re.sub(r"(?m)^([ \t]*)@([a-zA-Z0-9_]+|\.\.\.):([ \t]+)",
                         "\n\\1:param \\2:\\3", comment)

    return comment
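As a standalone sketch of what the first substitution above does (the regex is copied from `convert()`, the helper name is illustrative): a Javadoc `@param` tag becomes a reST `:param:` field, with the captured indentation and spacing preserved.

```python
import re

# Mirrors the first convert() rule above: turn a Javadoc '@param name ...'
# line into a reST ':param name: ...' field.
_javadoc_param = re.compile(r"(?m)^([ \t]*)@param([ \t]+)([a-zA-Z0-9_]+|\.\.\.)([ \t]+)")

def convert_param(comment):
    # The capture groups keep the original whitespace intact.
    return _javadoc_param.sub("\n\\1:param\\2\\3:\\4", comment)

print(convert_param("@param foo the foo value"))
```

Note how the alternation `[a-zA-Z0-9_]+|\.\.\.` also accepts the variadic `...` as a parameter name.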
# Copyright (c) 2016-2020 Jani Nikula <jani@nikula.org>
# Copyright (c) 2018-2020 Bruno Santos <brunomanuelsantos@tecnico.ulisboa.pt>
# Licensed under the terms of BSD 2-Clause, see LICENSE for details.
"""
Documentation strings manipulation library
==========================================
This module allows a generic way of generating reST documentation for each C
construct.
"""
import re
from enum import Enum, auto
class Type(Enum):
    """Enumeration of supported formats."""
    TEXT = auto()
    VAR = auto()
    TYPE = auto()
    STRUCT = auto()
    UNION = auto()
    ENUM = auto()
    ENUM_VAL = auto()
    MEMBER = auto()
    MACRO = auto()
    MACRO_FUNC = auto()
    FUNC = auto()
# Dictionary of tuples (text indentation level, format string).
#
# Text indentation is required for indenting the documentation body relative to
# directive lines.
_doc_fmt = {
    Type.TEXT: (0, '\n{text}\n'),
    Type.VAR: (1, '\n.. c:var:: {ttype} {name}\n\n{text}\n'),
    Type.TYPE: (1, '\n.. c:type:: {name}\n\n{text}\n'),
    Type.STRUCT: (1, '\n.. c:struct:: {name}\n\n{text}\n'),
    Type.UNION: (1, '\n.. c:union:: {name}\n\n{text}\n'),
    Type.ENUM: (1, '\n.. c:enum:: {name}\n\n{text}\n'),
    Type.ENUM_VAL: (1, '\n.. c:enumerator:: {name}\n\n{text}\n'),
    Type.MEMBER: (1, '\n.. c:member:: {ttype} {name}\n\n{text}\n'),
    Type.MACRO: (1, '\n.. c:macro:: {name}\n\n{text}\n'),
    Type.MACRO_FUNC: (1, '\n.. c:macro:: {name}({args})\n\n{text}\n'),
    Type.FUNC: (1, '\n.. c:function:: {ttype} {name}({args})\n\n{text}\n')
}
def _strip(comment):
    """Strip comment from comment markers."""
    comment = re.sub(r'^/\*\*[ \t]?', '', comment)
    comment = re.sub(r'\*/$', '', comment)
    # Could look at first line of comment, and remove the leading stuff there
    # from the rest.
    comment = re.sub(r'(?m)^[ \t]*\*?[ \t]?', '', comment)
    # Strip leading blank lines.
    comment = re.sub(r'^[\n]*', '', comment)
    return comment.strip()
def is_doc(comment):
    """Test if comment is a C documentation comment."""
    return comment.startswith('/**') and comment != '/**/'
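A standalone illustration of the predicate above (logic copied so the snippet runs on its own; the name is illustrative): only comments opening with `/**` count as documentation, and the degenerate empty comment `/**/` is excluded.

```python
def looks_like_doc(comment):
    # Same test as is_doc() above: doc comments start with '/**',
    # but the empty comment '/**/' is not documentation.
    return comment.startswith('/**') and comment != '/**/'

print(looks_like_doc('/** A doc comment. */'))    # True
print(looks_like_doc('/* not a doc comment */'))  # False
print(looks_like_doc('/**/'))                     # False
```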
def nest(text, nest):
    """
    Indent documentation block for nesting.

    Args:
        text (str): Documentation body.
        nest (int): Nesting level. For each level, the final block is indented
            one level. Useful for (e.g.) declaring structure members.

    Returns:
        str: Indented reST documentation string.
    """
    return re.sub('(?m)^(?!$)', '   ' * nest, text)
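A quick standalone sketch of the regex used above (the function name here is illustrative): `^(?!$)` matches the start of every non-empty line, so each such line gains one three-space indent per level while blank lines stay blank.

```python
import re

def indent_block(text, level):
    # Same pattern as nest() above: indent non-empty lines only.
    return re.sub('(?m)^(?!$)', '   ' * level, text)

print(indent_block('member\n\ndetails', 1))
```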
def generate(text, fmt=Type.TEXT, name=None,
             ttype=None, args=None, transform=None):
    """
    Generate reST documentation string.

    Args:
        text (str): Documentation body.
        fmt (enum :py:class:`Type`): Format type to use. Different formats
            require different arguments and ignore the others if given.
        name (str): Name of the documented token.
        ttype (str): Type of the documented token.
        args (list): List of arguments (str).
        transform (func): Transformation function to be applied to the
            documentation body. This is useful (e.g.) to extend the generator
            with different syntaxes by converting them to reST. It is applied
            to the documentation body after removing comment markers.

    Returns:
        str: reST documentation string.
    """
    text = _strip(text)

    if transform:
        text = transform(text)

    if args is not None:
        args = ', '.join(args)

    (text_indent, fmt) = _doc_fmt[fmt]
    text = nest(text, text_indent)

    return fmt.format(text=text, name=name, ttype=ttype, args=args)
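To make the machinery above concrete, here is a standalone rendering of the `Type.FUNC` template (format string copied from `_doc_fmt`; the function name and arguments are made up for illustration):

```python
# The Type.FUNC entry of _doc_fmt: text indentation level 1, then this template.
fmt = '\n.. c:function:: {ttype} {name}({args})\n\n{text}\n'

# The body is indented one level, as nest() would have done.
rst = fmt.format(ttype='int', name='frobnicate',
                 args='int a, int b', text='   Frobnicate a with b.')
print(rst)
```

The result is exactly the kind of `c:function` directive that Sphinx's C domain consumes.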
# Copyright (c) 2021, Jani Nikula <jani@nikula.org>
# Licensed under the terms of BSD 2-Clause, see LICENSE for details.
"""
Read the Docs Helpers
=====================
Helpers for setting up and using Hawkmoth on Read the Docs.
"""
import os
import subprocess
def clang_setup(force=False):
    """Try to find and configure libclang location on RTD."""
    if 'READTHEDOCS' in os.environ or force:
        try:
            result = subprocess.run(['llvm-config', '--libdir'],
                                    check=True, capture_output=True,
                                    encoding='utf-8')
            libdir = result.stdout.strip()
            # For some reason there is no plain libclang.so symlink on RTD.
            from clang.cindex import Config
            Config.set_library_file(os.path.join(libdir, 'libclang.so.1'))
        except Exception as e:
            print(e)
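In a Sphinx ``conf.py``, this helper would be invoked roughly like so (a sketch assuming the module is importable as ``hawkmoth.util.readthedocs``; adjust to your actual package layout):

```python
# conf.py (sketch): point libclang at the RTD-provided library
# before the hawkmoth extension loads.
from hawkmoth.util import readthedocs

readthedocs.clang_setup()

extensions = ['hawkmoth']
```

Outside Read the Docs the call is a no-op unless `force=True` is passed.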
.. _how_to_build:

How To Build
============

If you just want to write MicroPython code for card10, you probably **won't**
need to build the firmware yourself. This page is for people **who want to work
on the underlying firmware itself**.

Dependencies
------------
* **gcc**, **binutils** & **newlib** for ``arm-none-eabi``: The packages have
  slightly different names on different distros.

  - Ubuntu / Debian:

    .. code-block:: shell-session

       apt install gcc-arm-none-eabi binutils-arm-none-eabi libnewlib-arm-none-eabi

  - Arch:

    .. code-block:: shell-session

       pacman -S arm-none-eabi-gcc arm-none-eabi-binutils arm-none-eabi-newlib

  - Fedora:

    .. code-block:: shell-session

       dnf install arm-none-eabi-gcc arm-none-eabi-binutils arm-none-eabi-newlib

  - macOS (note: the card10 firmware team has used Linux so far; the macOS
    recommendations here are experimental):

    You can use `Homebrew`_ to install the required tools. The version of the
    ARM cross-compiler toolchain is quite important; with the wrong version,
    e.g. ``strip`` and/or ``ld`` might throw strange errors.

    .. code-block:: shell-session

       brew tap px4/px4
       brew install px4/px4/gcc-arm-none-eabi-63
       brew install coreutils

    .. _Homebrew: https://brew.sh/

  - Alternative: Download `ARM's GNU toolchain`_. **TODO**
* **python3**: For meson and various scripts needed for building.
* **meson** (>0.43.0) & **ninja**: Unfortunately most distros only have very old versions
  of meson in their repositories. Instead, you'll probably save yourself a lot
  of headaches by installing meson from *pip*.

  - Ubuntu / Debian:

    .. code-block:: shell-session

       apt install ninja-build
       pip3 install --user meson

  - Arch (has latest *meson* in the repos):

    .. code-block:: shell-session

       pacman -S meson

  - macOS:

    .. code-block:: shell-session

       brew install ninja
       pip3 install --user meson # see https://mesonbuild.com/Getting-meson.html - you will have to add ~/.local/bin to your PATH.
* One of two CRC packages is required. Pick one:

  - Ubuntu / Debian / macOS:

    .. code-block:: shell-session

       pip3 install --user crc16

    or

    .. code-block:: shell-session

       apt install python3-crcmod

    or

    .. code-block:: shell-session

       pip3 install --user crcmod

  - Arch:

    .. code-block:: shell-session

       pacman -S python-crc16

* **python3-pillow**: Python Imaging Library

  .. code-block:: shell-session

     pip3 install --user pillow

  - Arch:

    .. code-block:: shell-session

       pacman -S python-pillow

.. _ARM's GNU toolchain: https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm/downloads
Cloning
-------

Clone the ``master`` branch of the firmware repository:

.. code-block:: shell-session

   $ git clone https://git.card10.badge.events.ccc.de/card10/firmware.git
Build Configuration
-------------------

Initialize the build-system using

.. code-block:: shell-session

   $ ./bootstrap.sh
Additional arguments to ``bootstrap.sh`` will be passed to *meson*. You can
use this, for example, to enable one or more of the following optional
firmware features:

- ``-Ddebug_prints=true``: Print more verbose debugging log messages.
- ``-Dble_trace=true``: Enable BLE tracing. This will output lots of status
  info related to BLE.
- ``-Ddebug_core1=true``: Enable the core 1 SWD lines which are exposed on the
  SAO connector. Only use this if you have a debugger which is modified for
  core 1.
.. warning::

   Our build-system contains a few workarounds for shortcomings in meson.
   These workarounds might break on some setups which we did not yet test. If
   this is the case for you, please open an issue in our `issue tracker`_!

.. _issue tracker: https://git.card10.badge.events.ccc.de/card10/firmware/issues
Building
--------

Build using *ninja*:

.. code-block:: shell-session

   $ ninja -C build/
If ninja succeeds, the resulting binaries are in ``build/``. They are
available in two formats: as an ``.elf`` which can be flashed using a debugger,
and as a ``.bin`` which can be loaded using the provided bootloader. Here is a
list of the binaries:

- ``build/bootloader/bootloader.elf``: Our bootloader. It should already be on
  your card10. The bootloader can only be flashed using a debugger.
- ``build/pycardium/pycardium_epicardium.bin``: The entire firmware in one ``.bin``.
- ``build/epicardium/epicardium.elf``: The core 0 part of the firmware, called Epicardium.
- ``build/pycardium/pycardium.elf``: Our MicroPython port, the core 1 part of the firmware.
In order to do a rebuild, you can issue a clean command to ninja via

.. code-block:: shell-session

   $ ninja -C build/ -t clean

Otherwise, rerunning ``./bootstrap.sh`` will also clean the build-directory.
.. note::

   **macOS**: If ``strip`` fails to work on the freshly compiled ``mpy-cross``
   ("strip: object: (...)/lib/micropython/micropython/mpy-cross/mpy-cross
   malformed object (unknown load command 9)"), you are likely not using the
   ``strip`` that matches your ``clang``. Do ``which strip && which clang``,
   and if the paths don't match, clean up your PATHs, or as a quick hack,
   create a symlink for strip.
.. note::

   If you try to flash ``pycardium_epicardium.bin`` (renamed to ``card10.bin``)
   and the bootloader does not finish updating, the file might be too large.
   ~700kB is the normal size, but problems were reported where the file size
   was >1MB. This was caused by the ``tr`` tool in the build process
   (it's supposed to create a large file with 0xff in it) - this requires the
   LC_ALL environment variable to be not set, or set to "C"
   (but not UTF8 or similar).
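The ``tr`` behaviour this note refers to can be checked in isolation; with ``LC_ALL=C``, ``tr`` maps every NUL byte to ``0xff`` (the command below is only an illustration, not part of the build):

```shell
head -c 4 /dev/zero | LC_ALL=C tr '\000' '\377' | od -An -tx1
```

If the padding in your build looks wrong, check that ``LC_ALL`` is unset or set to ``C`` as described above.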
Docker
======

Alternatively, clone the ``master`` branch of the firmware repository and enter it:

.. code-block:: shell-session

   $ git clone https://git.card10.badge.events.ccc.de/card10/firmware.git
   $ cd firmware

Afterwards, build a Docker image which will build the firmware via:

.. code-block:: shell-session

   $ docker build -f docker/Dockerfile_fwbuild -t card10-firmware-builder .

Now you can start a container with the firmware directory mounted, which will
build the firmware into the ``firmware/build`` directory:

.. code-block:: shell-session

   $ docker run -v $(pwd):/firmware card10-firmware-builder
How To Flash
============

Depending on whether you have a debugger or not, you have to use a different
method of flashing:
Flash Without Debugger
----------------------

If you do not have a debugger, you have to update the firmware using our
bootloader by going into :ref:`usb_file_transfer`.

After you get your badge into :ref:`usb_file_transfer`, you can drop the firmware's
``.bin`` (from ``build/pycardium/pycardium_epicardium.bin`` in most cases) into
this flash-storage. You **must** call the file ``card10.bin`` for the
bootloader to use it.

The bootloader will then display ``Writing.`` in red while it is actually
writing the file to external flash. Please wait until it displays ``Ready.``
again before resetting card10 by pressing the power button.
Flash Using Debugger
--------------------

First, setup everything as explained on the :ref:`debugger` page. Following
that and after connecting to card10, you can flash your binary using the
``load`` command. After loading, you need to use ``reset`` to reboot card10
using your new firmware.
.. warning::

   With the current version of the bootloader, before attempting to flash using
   the debugger, make sure there is no ``card10.bin`` stored on the device.
   If there is, the bootloader will overwrite whatever you just flashed after
   every reboot.

.. code-block:: text

   (gdb) load
   Loading section .text, size 0x12514 lma 0x10010000
   Loading section .ARM.exidx, size 0x8 lma 0x10022514
   Loading section .data, size 0x8d8 lma 0x1002251c
   Start address 0x10012160, load size 77300
   Transfer rate: 19 KB/sec, 11042 bytes/write.
   (gdb)
.. note::

   If OpenOCD was able to connect, but GDB gives you an

   .. code-block:: text

      Error erasing flash with vFlashErase packet

   error, issue a ``reset`` command, quickly followed by a ``load`` command.

   Reason: The Epicardium puts parts of the CPU to sleep and the debugging
   interface is part of that. After a reset the bootloader starts up
   and lets OpenOCD/GDB take control again.
card10 firmware docs
====================

**Dear traveller,**

these transcripts describe how you can write code for your card10. This
includes the Python modules that are available but also documentation of the
lower level firmware components.

If you want to write Python code for card10, you will want to take a look at
the :ref:`Pycardium <pycardium_overview>` docs. If you are interested in writing applications
in other languages, you'll probably want to interface with the
:ref:`Epicardium API <epicardium_api_overview>` directly.

Last but not least, if you want to start hacking the lower-level firmware, the
:ref:`Firmware <firmware_overview>` section of these docs is a good starting place.
.. toctree::
   :maxdepth: 1
   :caption: Pycardium

   pycardium/overview
   pycardium/stdlib
   pycardium/bhi160
   pycardium/ble_hid
   pycardium/bme680
   pycardium/max30001
   pycardium/max86150
   pycardium/buttons
   pycardium/color
   pycardium/config
   pycardium/display
   pycardium/gpio
   pycardium/leds
   pycardium/light-sensor
   pycardium/os
   pycardium/personal_state
   pycardium/png
   pycardium/power
   pycardium/pride
   pycardium/simple_menu
   pycardium/utime
   pycardium/vibra
   pycardium/ws2812

.. toctree::
   :maxdepth: 1
   :caption: Firmware

   overview
   card10-cfg
   usb-file-transfer
   how-to-build
   how-to-flash
   debugger
   pycardium-guide
   memorymap
   epicardium/mutex
   epicardium/sensor-streams

.. toctree::
   :maxdepth: 1
   :caption: Epicardium API

   epicardium/overview
   epicardium/api
   epicardium-guide
   epicardium/internal

.. toctree::
   :maxdepth: 1
   :caption: Bluetooth

   bluetooth/overview
   bluetooth/ess
   bluetooth/file-transfer
   bluetooth/card10
   bluetooth/ecg
   bluetooth/nimble
Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`