Basics
======
Introduction
^^^^^^^^^^^^
Once you feel familiar with the tools from the previous section, you're ready to
advance to the next chapter: writing full-fledged applications that can draw
graphics on the screen, respond to input and play sound!
The :ref:`st3m` framework is the main Python codebase you'll be writing against.
Instead of using standard Micropython libraries like ``machine`` or low level
display drivers, the :ref:`st3m` framework provides custom modules made for
applications cooperating peacefully within the operating system. For applications,
we'll be using the ``Application`` class from the ``st3m.application`` module.
You can find the documentation for this and other relevant modules in the `App API`
sections.
flow3r applications are very straightforward: Almost everything is handled in a
single loop that just runs over and over for as long as your application is active.
This loop calls the ``.think()`` method, which is the central almost-everything
processor of flow3r.
The one thing that isn't handled by ``.think()`` is drawing to the display. This
is taken care of by the ``.draw()`` method; the reason for this split is that the
framerate may be much slower than you want to react to inputs; for a musical
instrument application that plays notes, ``.think()`` can typically operate with
less than 10ms reaction time, while the display is often only drawn every 30ms or even less frequently.
The operating system also needs to do its thing, so all in all our central loop
in its simplest form looks like this:
.. code-block:: python

    # this is very simplified, object/method names are largely made up
    # ins, delta_ms and ctx are conjured out of thin air for now
    while True:
        os.do_things()
        application.think(ins, delta_ms)
        if graphics_backend.can_receive_data_for_next_frame():
            application.draw(ctx)
Now that we have a rough idea of how things are called, let's examine the two
applications that we have introduced in the `Blinky` section:
Input Processing: CaptouchBlinky explained
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Let's start with ``CaptouchBlinky``. There are just 4 methods:
**__init__**
.. code-block:: python

    def __init__(self, app_ctx):
        super().__init__(app_ctx)
        self.colors = (
            (1,1,0),
            (0,1,1),
            (1,0,1),
            (0,1,0),
            (0,0,1),
        )
        self.active_color = self.colors[0]
This method is called only once per boot when the application is opened
for the first time (as opposed to ``on_enter()`` as explained further down).
It sets up some data for our application: ``colors`` is a tuple of 5 RGB
values, and ``.active_color`` is a reference to whichever color is active at
the moment. Simple enough, but there's one more thing:
The ``super().__init__(app_ctx)`` call is easily forgotten, but
it is important to set up the environment for the application properly.
This setup is implemented in the ``Application`` parent class, which we
access via ``super()``. This is a recurring pattern across application
methods: We merely *enhance* them while still retaining the original
("super") parent class behavior.
This may sound complicated if you're not familiar with Python, but in
practice it can be quite simple: Look up the documentation of
``st3m.application.Application`` for each method you are implementing,
it will tell you what to do. As a rule of thumb, calling the ``super()``
variant at the start of your implementation rarely hurts.
**think**
.. code-block:: python

    def think(self, ins, delta_ms):
        super().think(ins, delta_ms)
        for x in range(0, 10, 2):
            if self.input.captouch.petals[x].whole.pressed:
                self.active_color = self.colors[x//2]
        leds.set_all_rgb(*self.active_color)
        leds.update()
This is the platonic ideal of ``.think()``: We are processing some input,
mangling it a little and then updating a driver accordingly. ``self.input``
provides us with edge detection (see ``st3m.input.InputController``): ``.pressed``
returns True only in the think cycle in which the user has just started pressing
a petal.
Again, there's a ``super()`` call. Try removing it, and you'll notice that
your captouch input just broke; the edge detection runs in ``super().think()``,
and nothing else executes it.
Needless to say, care should be taken not to let ``.think()`` run for very long
in most cases: If each ``.think()`` call takes, say, 50ms, your UI will
react with noticeable delay or even drop shoulder button inputs. This isn't always
possible or even important: Say you're loading a big file for something and
just display a loading screen without reading any inputs; you don't need to
care about think rate in that situation. As soon as it is fully loaded though
you probably want it to go back up into the 5-20ms range of course.
**draw**
.. code-block:: python

    def draw(self, ctx):
        ctx.rgb(*self.active_color).rectangle(-120, -120, 240, 240).fill()
flow3r is powered by the `ctx <https://ctx.graphics/>`_ graphics engine. The
``ctx`` object passed here allows you to generate a drawlist that is then sent
off to the rendering engine when the ``draw`` method is complete. In this case,
we fill the entire screen with one color. For details on ctx, look up its
documentation in the API section or `here <https://ctx.graphics/uctx/>`_.
Note that this method does not block until the render is complete, but rather
just prepares the render instructions, while the image is rendered by a different
task that runs independently of micropython. The draw method is only called again
when that render task is ready to receive new data.
Note the absence of a ``super()`` call this time; there's simply nothing to do in
the default case. We can call it anyway, it won't hurt, but it also doesn't do
anything.
One last detail: We draw every single frame here, even if the image hasn't changed.
We don't have to do that; in fact it is recommended to consider redrawing your screen
only partially, but that topic deserves its own section.
**get_help**
.. code-block:: python

    def get_help(self):
        context_sensitive_help = (
            "This app changes color of all LEDs and the display "
            "when touching a top petal. "
        )
        context_sensitive_help += f"The current RGB values are {self.active_color}."
        return context_sensitive_help
flow3r has a built-in help reader that allows application programmers to provide
a user manual. This allows you to build deep UIs without having to explain it all
on-screen. Even for the most trivial application it is good practice to add a small
text here to explain what to expect from it; even simple features can be missed if
users aren't sure what they are supposed to be looking for.
For more involved applications, note that we can change the text output of this
method depending on state; it is called again each time the user opens the help
menu, so you can provide context sensitive help to protect the user from
having to scroll through a huge wall of text.
Once again, no ``super()`` call is needed here; we leave it as an exercise to the reader
to determine why ;).
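For illustration, a hypothetical recorder app might switch its help text depending on its
state (the ``self.recording`` flag below is made up for this sketch):

.. code-block:: python

    def get_help(self):
        # self.recording is a hypothetical state flag of this sketch
        if self.recording:
            return "Recording. Press the same petal again to stop."
        return "Press a petal to start recording a loop."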
Writing an application for the menu system
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To add the application to the menu we are missing one more thing: a `flow3r.toml`
file which describes the application so flow3r knows where to put it in the menu system.
Together with the Python code this file forms a so-called bundle
(see also :py:class:`BundleMetadata`).
::

    [app]
    name = "CaptouchBlinky"
    category = "Apps"

    [entry]
    class = "CaptouchBlinky"

    [metadata]
    author = "an identifying and/or empty string"
    license = "pick one, LGPL/MIT/CC0 maybe?"
    url = "https://git.flow3r.garden/you/mydemo"
Save this as `flow3r.toml` together with the Python code as `__init__.py` in a
folder (name doesn't matter) and put that folder into one of the possible
application directories (see below) using `Disk Mode`_. Restart the flow3r and
it should pick up your new application.
+--------+----------------------+---------------------+---------------------------------------+
| Medium | Path in Disk Mode | Path on Badge | Notes |
+========+======================+=====================+=======================================+
| Flash | ``sys/apps`` | ``/flash/sys/apps`` | “Default” apps. |
+--------+----------------------+---------------------+---------------------------------------+
| Flash | ``apps`` | ``/flash/apps`` | Doesn't exist by default. Split |
| | | | from ``sys`` to allow for cleaner |
| | | | updates. |
+--------+----------------------+---------------------+---------------------------------------+
| SD | ``apps`` | ``/sd/apps`` | Doesn't exist by default. Will be |
| | | | retained even across badge reflashes. |
+--------+----------------------+---------------------+---------------------------------------+
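For example, a finished bundle in the SD card apps directory could look like this (a sketch;
the folder name ``captouch_blinky`` is arbitrary):

::

    /sd/apps/captouch_blinky/
        flow3r.toml
        __init__.py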
Note that if we now start this app from the flow3r menu instead of ``mpremote``, then exit
and re-enter it, the last selected color is still active. This is because the application object
was never destroyed, so the ``.active_color`` attribute has not been reset.
OS Integration: AutoBlinky explained
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Now that we know how ``CaptouchBlinky`` works, ``AutoBlinky`` provides little excitement,
but it makes for an interesting example of OS integration. Before going into that, let's
rush through the singular interesting bit:
.. code-block:: python

    def think(self, ins, delta_ms):
        super().think(ins, delta_ms)
        self.timer_ms += delta_ms
        self.timer_ms %= self.blink_time_ms * len(self.colors)
        index = self.timer_ms // self.blink_time_ms
        self.active_color = self.colors[index]
        leds.set_all_rgb(*self.active_color)
        leds.update()
We're using the ``delta_ms`` argument of ``.think()`` this time to make the LEDs change
color at a somewhat constant time interval. It is only *somewhat* constant because
``.think()`` doesn't react to the event directly; it just checks whenever a think cycle
starts whether enough time has passed, which may be more than intended. Note that the code is structured
in such a way that this error does not accumulate.
Depending on your programming background you might wonder how to install an ISR timer or the
like to trigger such processes at a fixed interval; this is not quite trivial in micropython,
which generally does not encourage this use case. At this point in time flow3r has no official
concurrency support in micropython execution.
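If you need several independent intervals, a small helper class driven by ``delta_ms`` keeps
the pattern tidy. This is a minimal sketch; the class is made up for illustration and not part
of st3m:

.. code-block:: python

    class IntervalTimer:
        """Counts how many fixed intervals have elapsed, fed from .think()."""

        def __init__(self, interval_ms):
            self.interval_ms = interval_ms
            self._acc = 0

        def tick(self, delta_ms):
            # returns the number of intervals that elapsed since the last call
            self._acc += delta_ms
            fired = self._acc // self.interval_ms
            self._acc %= self.interval_ms
            return fired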
Let's make this code a tiny bit more interesting and modify it to have a few more colors to
step through:
.. code-block:: python

    import random

    class MegaBlinky(AutoBlinky):
        def __init__(self, app_ctx):
            super().__init__(app_ctx)
            self.timer_ms = 0
            self.colors = tuple(
                (
                    random.random(),
                    random.random(),
                    random.random(),
                )
                for x in range(10000)
            )
            self.active_color = self.colors[0]
            self.blink_time_ms = 500
Let's see, 10000 colors, 3 floats each, 32bit per float, say 100% micropython overhead - that
is at least 240kB of data. Application objects are never destroyed, so this data shall live in
RAM forever and all eternity or until the next power cycle, whichever happens earlier. Let's be
nice and get rid of it when we exit the app:
.. code-block:: python

    class PoliteMegaBlinky(AutoBlinky):
        def __init__(self, app_ctx):
            super().__init__(app_ctx)
            self.timer_ms = 0
            self.blink_time_ms = 500

        def on_enter(self, vm):
            super().on_enter(vm)
            self.colors = tuple(
                (
                    random.random(),
                    random.random(),
                    random.random(),
                )
                for x in range(10000)
            )
            self.active_color = self.colors[0]

        def on_exit(self):
            super().on_exit()
            self.colors = None
The ``.on_enter()`` method gets called every time the app is launched from the menu (unlike
``.__init__()``, which is only called the first time the app is launched during that boot cycle).
``.on_exit()`` is called on each exit. These are called before the first and after the last
``.think()`` call of the "opened phase" respectively. *Note that the latter doesn't hold true
for ``draw()``: It may be called after ``on_exit()`` has executed.*
These two methods once again require their respective ``super()`` calls at the beginning.
Since this deinitializes resources, it is easy to accidentally make apps crash on exit
and/or re-entry if you access them afterwards. You cannot test this with ``mpremote run``,
as it doesn't let you use the normal OS utilities to exit and re-enter the app, so the final
testing phase of each app should include installing it on flow3r manually as described in
the previous section.
This example may seem a bit silly, but there's a few resource types you really want to avoid
hogging, such as open files or bl00mbox channels.
Another use of these functions is setting up hardware; when you exit an application, many
hardware settings are overridden by the operating system and not restored upon re-entry. Let's
say we want to have smooth color transitions in the example above: this would also be a job
for ``.on_enter()`` instead of ``__init__()``, to make sure it is applied on the second start
as well:
.. code-block:: python

    class SmoothPoliteMegaBlinky(PoliteMegaBlinky):
        def on_enter(self, vm):
            super().on_enter(vm)
            self.colors = tuple(
                (
                    random.random(),
                    random.random(),
                    random.random(),
                )
                for x in range(10000)
            )
            self.active_color = self.colors[0]
            leds.set_slew_rate(min(leds.get_slew_rate(), 100))
Handling assets
^^^^^^^^^^^^^^^
Using `Application` also gives you access to the `ApplicationContext` as ``self.app_ctx``,
which for example gives you a way to find out the base path of your app in ``app_ctx.bundle_path``
or its bundle metadata in ``app_ctx.bundle_metadata``. It's very important not to hardcode
paths to your assets; use ``bundle_path`` instead, because applications can be installed
in various places depending on the installation method and can be moved between internal
flash and SD card.
A simple app that does nothing but draws a file named ``image.png`` from its directory
could look like this:
.. code-block:: python

    from st3m.application import Application
    from ctx import Context
    import st3m.run

    class MyDemo(Application):
        def draw(self, ctx: Context) -> None:
            # Draw an image file
            ctx.image(f"{self.app_ctx.bundle_path}/image.png", -120, -120, 240, 240)

    if __name__ == '__main__':
        # Continue to make runnable via mpremote run.
        st3m.run.run_app(MyDemo, "/flash/apps/MyDemo")
Note the default path provided in the ``st3m.run.run_app`` call - this is the path
that's going to be used when running standalone via `mpremote`, so set it to
the path where you're putting your app files on your badge during development.
Best practices
^^^^^^^^^^^^^^
Before you submit your application to the app store, here's a checklist for some common
pitfalls and the like:
**General applications**
- **Do not use sys_* API:** The sys_* modules (like ``sys_audio``) are not intended to be used by
applications, they are for the operating system only. If you have a legitimate use case to expose
some sys_* features to applications we'd kindly ask you to open an issue and we'll see how we can do
it safely. Some limitations however are necessary and/or intentional.
- **Check for crashes on re-entry:** Does your application work well when you exit and re-enter? Does it
do so in any state?
- **Don't hog resources:** Does your application free all significant resources (open files,
bl00mbox channels, etc.) when it exits?
- **Avoid stale data on re-entry:** Does your application still use data (for example IMU orientation)
from its previous run after exiting and re-entering? Should it do that?
- **Apply hardware config on every entry:** Most hardware configs need to be applied every time you
  enter the application. If you do this in ``.__init__()`` instead, it might work on the first entry,
  but not on later re-entries.
- **Account for either button orientation:** Does your application still make sense if you swap App
and OS button in the global configuration?
- **Provide a nice help text:** Do you think you provide enough information so that people can figure
out your application?
- **If overriding OS button, provide proper exit paths:** If you press down the app button enough times,
every app should either eventually exit or make it clear to the user how it's intended to be exited.
This is usually handled automatically, but in case you use the override this is your responsibility.
- **Do not load from/save to flash:** Saving to flash force-stops the music pipeline momentarily, which
is very bad in a session. This is a hardware limitation and cannot be fixed, therefore apps should
never save to flash.
- **Adhere to savefile best practices:** See the Savefiles documentation page for how to integrate them
nicely within flow3r's operating system.
**Music applications**
- **Do not override volume control:** We are reserving this "UI namespace" for an upcoming feature. Do
not override the volume controls for music applications. It's also just rude if stuff starts playing
loudly and you can't quickly turn it down.
- **Adhere to bl00mbox best practices:** They are not yet compiled into a convenient list but exist as prose
  in their documentation section; we'll provide a more compact version soon, but for now please do
  carefully read the sections for all features that you intend to use.
(These lists are an incomplete work in progress)
Distributing applications
^^^^^^^^^^^^^^^^^^^^^^^^^
We have an "App Store" where you can submit your applications: https://flow3r.garden/apps/
To add your application, follow the guide in this repository: https://git.flow3r.garden/flow3r/flow3r-apps
.. _bl00mbox:
bl00mbox
========
bl00mbox is an audio synthesis and processing engine. Its primary documentation for this version is hosted
`here <https://moon2embeddedaudio.gitlab.io/bl00mbox/flow3r_dev>`_ and covers general use cases. This page does not duplicate that
documentation, but rather focuses on bl00mbox in the context of flow3r. Feel free to read these documents in any order,
we will introduce basic bl00mbox concepts here briefly as needed.
Environment
-----------
flow3r is intended as a music toy where multiple applications can make sound together at the same time. Since only one
user interface can be active at a given time, there is a distinction between applications making sound in the foreground
and in the background. These multiple sources are managed by the **system mixer**. To get a feeling for it, open a music
application, hold the OS button down for a second and select the mixer page: You will see that a corresponding *named*
channel has appeared in the mixer and that you can control its volume separately (from ``media``, for example). If you exit
the application, the channel probably has disappeared.
That is, unless you have picked one of the apps that may continue playing in the background, such as *gay drums*. Try
opening it, create a beat and exit while it continues to play - you can now enter another music application and see the
two coexisting channels in the system mixer. This is not only useful for relative volume control; if you mute one of these
channels in the mixer it stops rendering and thereby using CPU entirely. If you mute a *gay drums* beat for a tiny bit,
you will notice that its sequence has not progressed when you unmute it.
These channels map directly to bl00mbox ``Channel`` objects, which are the central interfaces to create, connect and manage
plugins. In almost all cases each music application uses a single channel whose name is identical to the application name.
If you find a good reason to create multiple channels in a single application, their names should make it clear which
application they belong to (and of course also fit in the system mixer channel box).
If you have been around since the early days of flow3r, you may remember that music applications had a tendency to
sometimes produce unwanted sound or to idly consume CPU/RAM indefinitely after having been closed. For a user who wishes
to use background channels as a session, for example, without rebooting flow3r all the time this is inconvenient, so we
have tried our best to clean things up automatically, but it doesn't go all the way (RAM, for example). We could enforce
this from the OS side by cleaning up resources more aggressively, with the tradeoff that misbehaving apps crash more often,
but we'd prefer to trust application developers to carefully manage resources.
**Please follow the resource management guidelines presented here so that flow3r can grow into a flexible and reliable
multitrack music toy!**
Make sound
----------
Let's make a simple application that uses bl00mbox in its most basic form. The block we have seen in the mixer earlier
is a ``Channel`` object. It is initialized with a descriptive name (typically the application name) so that the user
knows what is what in the mixer. A channel can create plugins which create or modify sound. It also provides an audio
signal from the line input, as well as a line output that routes to the system mixer.
.. code-block:: python

    import bl00mbox
    from st3m.application import Application

    class Beeper(Application):
        def __init__(self, app_ctx):
            super().__init__(app_ctx)
            # empty slot for the channel
            self.blm = None

        def on_enter(self, vm):
            super().on_enter(vm)
            # this method creates a synthesizer and stores it in self.blm
            self.build_synth()

        def on_exit(self):
            super().on_exit()
            # important resource management: delete the channel when exiting.
            # this also deletes all plugins.
            self.blm.delete()
            # good practice: also allow the now-"hollow" channel object to be
            # garbage collected to save memory.
            self.blm = None

        def build_synth(self):
            # create an empty channel
            self.blm = bl00mbox.Channel("Beeper")
            # create an internal mixer with 10 inputs. note: this is not the system mixer.
            self.mixer = self.blm.new(bl00mbox.plugins.mixer, 10)
            # connect the output of the mixer plugin to the line output of the channel
            self.blm.signals.line_out << self.mixer.signals.output
            for x in range(10):
                # mute the x-th mixer input
                self.mixer.signals.input_gain[x].mult = 0
                # create an oscillator
                beep = self.blm.new(bl00mbox.plugins.osc)
                # give it a unique pitch
                beep.signals.pitch.tone = x
                # connect it to the x-th mixer input
                beep.signals.output >> self.mixer.signals.input[x]
                # note: the oscillator may go out of scope here without being garbage
                # collected. any plugin that is connected (directly or via other plugins)
                # to the line out is considered reachable by bl00mbox.

        def think(self, ins, delta_ms):
            super().think(ins, delta_ms)
            for x in range(10):
                # unmute the x-th mixer input if corresponding petal
                # is pressed to play a sound.
                volume = 1 if ins.captouch.petals[x].pressed else 0
                self.mixer.signals.input_gain[x].mult = volume
            # note: if it wasn't for deleting the channel in on_exit() this would just
            # continue playing sound if exited while a petal is held.
This app frees all resources that it doesn't need anymore, simply by calling ``.delete()`` on the channel.
Further attempts to interact with that channel and its plugins will result in a ``bl00mbox.ReferenceError``, so
a new one must be created when re-entering. This is okay; the OS recognizes the name of the channel and applies
all the previous mixer settings again. A name should therefore not only be **descriptive**, but also **unique**.
But not to worry, you don't need to check every app ever: if your application name is unique in the app store
and you use it for the channel, you have done due diligence.
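As a quick illustration of what ``.delete()`` means for later access (a sketch, not part of the app above):

.. code-block:: python

    blm = bl00mbox.Channel("Beeper")
    blm.delete()
    # any further interaction with the channel or its plugins now raises
    # bl00mbox.ReferenceError:
    try:
        blm.gain_dB += 3
    except bl00mbox.ReferenceError:
        pass  # expected: the channel is gone, create a new one instead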
This application is almost well behaved and ready to ship, but there's one more thing we should do first to make
users happy:
Normalize volume
----------------
It is desirable that all music applications default to a similar volume level. You might say, why not just the
maximum volume without clipping?, but there is this nasty little thing called crest ratio: The maximum peak of
an audio signal is very poorly correlated to its volume. The square wave we generated above is very very loud
compared to its maximum peak, but a more delicate sound such as an acoustic instrument sample may hopelessly
disappear next to it even if it fills all the range. A good default should be allow for a fair amount of wiggle
room for all these cases, so we've made an arbitrary decision:
**flow3r instruments should aim for a typical volume of -15dB rms**.
This volume adjustment must be done manually, but worry not, we provide utilities that make this fast and easy.
The most universal approach is to tell a channel to keep track of its line out volume and print it to the REPL.
This should of course only be **temporary during development**; measuring volume takes CPU away from the audio
engine which it could otherwise use to render other channels, for example, and printing it reduces your think rate.
It's just 2 lines, so it's not a big deal to comment them out.
.. code-block:: python

    class Beeper(Application):
        def build_synth(self):
            # (same body as before)
            # activate volume measurement
            self.blm.compute_rms = True

        def think(self, ins, delta_ms):
            # (same body as before)
            # print current volume in decibels
            print(self.blm.rms_dB)
The print rate may be very high; you can always add a temporary sleep or counter, or close the connection on the
host side. We can see that the volume changes with the number of petals we're pressing. This begs the next question:
How many petals do we normalize to? The answer is very unsatisfying: Whatever is typical for that application.
In this case you probably would play 1 or 2 notes at the same time normally; if it's more, it's a special case
and allowed to be louder. That's just personal intuition and other answers may be justifiable too, but it's a fairly
reasonable guess. If users don't like it they may fine tune in the mixer after all, you're just providing some
general purpose default setting.
Let's measure then! With 1 petal pressed we're getting -34dB, with 2 petals it's about -31dB, so to reach our
target of -15dB we need to increase volume by roughly 17.5dB (splitting the difference between the two readings).
Conveniently the channel has a volume control just for that (separately from the mixer volume control, which this
object has no access to). Unfortunately it defaults to -12dB, and its maximum level is 0dB, so we can't increase
it enough. Why is our volume so low in the first place? Mixer
plugins are initialized so that all inputs can be processed without clipping, which means the output gain of the
mixer is set to a multiplier of 0.1, or -20dB. We can get the missing 5.5dB from the mixer plugin (this comes at
the cost of clipping when more than 5 voices are playing. We'll discuss that issue in the Performance section):
.. code-block:: python

    class Beeper(Application):
        def build_synth(self):
            # (same body as before)
            # done with this, remove
            # self.blm.compute_rms = True
            # apply as much of the volume difference as we can here
            self.blm.gain_dB += 12
            # put the rest in the mixer plugin
            self.mixer.signals.gain.dB += 5.5

        def think(self, ins, delta_ms):
            # (same body as before)
            # done with this, remove
            # print(self.blm.rms_dB)
This isn't all that hard, but there is an even easier way! You might have noticed a peculiar quality: When we go into
the system mixer, we actually do not exit the application, it is just suspended! This means if you enter the mixer
*while holding a petal* the sound continues to play indefinitely - is that bad behavior? Should we squash it? Nay, au
contraire, it is desirable! Say the user wants to readjust volume, it would be awfully useful to hear your adjustment
while in the mixer, right? Let's keep it! But we can also use it for development: Try it, and you will notice that
the mixer activates volume measurement and displays it in the channel. If you look closely, there is a little notch next
to it too: This is our normalization notch that you should aim for.
While this method doesn't give you an absolute value, it is much better at displaying the dynamic behavior; our little
Beeper here is fairly static, but if there's more movement in the volume it might be hard to follow by just reading the
printout. In such a dynamic case, you should normalize so that the loud bits linger mostly around the notch. Going above
a little bit for a quick peak is okay. It's somewhat hard to make a static set of rules for this; when in doubt, compare
to similar stock applications, and don't start a loudness war :D!
Run in background
-----------------
The above example is designed to free all resources when exited, but didn't we say earlier these could run in the background?
Guess what - more rules and best practices first :P! It's actually pretty simple:
Firstly, you should give users the option to **not** have your channel run in the background after exiting. Ideally this
option should be obvious and the default. An example: gay drums destroys its channel if the sequencer is not running, i.e.
the drumbeat is not playing (or the track is empty, so that it is kind of playing but actually not). We can directly adapt
this approach to our Beeper; if we hold a petal while exiting, we may continue playing. Didn't we say earlier that this
was bad?
Well, only if it is unintentional - and it only is unintentional if it's not in the **help text**! This one can be accessed
right next to the mixer and should ideally contain all there is to know about your application (it needn't all be in the
same string, remember that ``.get_help()`` may change its output depending on application state). Let's add this to our next
iteration, and we're golden!
Secondly, what if a user just wants to be done with that background channel without navigating to and through your app, or
if some application has a bug and cannot *not* play in the background by accident? Remember that ``blm.delete()`` method
from earlier - the mixer can call it too. Not on the currently active foreground channel, so if your app doesn't do
backgrounding you don't have to worry about it, but if it does, it needs to check after re-entry whether the channel still
exists, or else it might crash with a ``bl00mbox.ReferenceError``.
One last thing before we write some code: What's that currently active foreground channel? Well, simply put, only one
channel is in the foreground at any given time. Most interactions with a channel or its plugins set it as the foreground
channel automatically. Exiting an application clears the foreground channel too. If we want to have a channel rendered
that is not currently foregrounded, we must explicitly set the ``.background_mute_override`` attribute. As a general rule
of thumb, if a channel does not have this attribute set it should be deleted when exiting the application in order to not
waste RAM. The OS does not do this automatically. **For now** :P.
.. code-block:: python

    class BackgroundBeeper(Beeper):
        def __init__(self, app_ctx):
            super().__init__(app_ctx)
            self.blm = None
            self.any_playing = False

        def on_enter(self, vm):
            super().on_enter(vm)
            if self.blm is not None:
                try:
                    self.blm.foreground = True
                except bl00mbox.ReferenceError:
                    self.blm = None
            if self.blm is None:
                self.build_synth()

        def on_exit(self):
            super().on_exit()
            if self.any_playing:
                self.blm.background_mute_override = True
            else:
                self.blm.delete()
                self.blm = None

        def think(self, ins, delta_ms):
            super().think(ins, delta_ms)
            self.any_playing = False
            for x in range(10):
                if ins.captouch.petals[x].pressed:
                    self.mixer.signals.input_gain[x].mult = 1
                    self.any_playing = True
                else:
                    self.mixer.signals.input_gain[x].mult = 0

        def get_help(self):
            ret = ("Simple synthesizer, each petal plays a different note. "
                   "If you exit while holding a petal that note continues "
                   "playing in the background to allow for drones.")
            return ret
But wait, there's more! The above approach allows us to do anything in the background that the standalone audio engine
can do; we could modulate volume with a low frequency sine wave easily if we wanted to for example, but that's not really
convenient or appropriate for many things. To make things more flexible, we can also attach a micropython callable to the
channel, which gets called by the OS regularly as long as the channel is rendered (at the end of each main loop to be exact,
see ``st3m.application``). If the channel is muted it is not called. bl00mbox itself doesn't specify the arguments, but for
flow3r purposes we call it think-like with ``ins`` and ``delta_ms`` as positional arguments.
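A minimal sketch of attaching such a callback from within ``build_synth()`` (the slow fade is an
arbitrary example; the full wobbler-style example follows below):

.. code-block:: python

    def fade_callback(ins, delta_ms):
        # inner function: captures "self" from the enclosing method.
        # slowly fade the channel out, whether it runs in the fore- or background.
        self.blm.gain_dB = max(self.blm.gain_dB - 0.001 * delta_ms, -40)

    self.blm.callback = fade_callback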
This callback can do anything ``.think()`` can do and obviously can be used very irresponsibly. Please avoid
doing so. Here's a couple of rules:
Wouldn't it be cool if one app set the LEDs in the background and another did something in the foreground with captouch
and the display and all? Yes, but you cannot make sure at this point that the foreground app isn't accessing the LEDs as
well, resulting in some middle ground that is unsatisfying in the best case and epilepsy inducing in the worst. Let's not
do this.
We are actively planning to add more background callback options in a future release, which would allow for proper resource
locks and an adequate user interface to control these background tasks. Before that, please be patient, restrain yourself
and **do not use bl00mbox callbacks to change anything except for the corresponding channel**. Attempts to hijack these
callbacks for any other purpose are considered malicious.
Well, that means we can still use ``ins`` to read out captouch and just make our thing playable along with other instruments,
right? Nope, normally no. If you do subtle indirect changes to a modulation, yes, that can make sense, so we're still passing
the parameter and don't just downright block it - but consider: If menuing or navigating sound-unrelated apps just plays
notes like a keyboard, that would be pretty annoying. **Don't make your application annoying**. The infrastructure is not quite
ready yet (just like the mixer actually can't call ``.delete()`` yet, we were lying), but at some point users will be able
to permanently block channels from running in the background. Ideally, avoid getting on that list. Avoid being the person
who motivates us to release this feature sooner rather than later.
Now that we've set the ground rules let's do something cool: Let's add a filter hooked up to the accelerometer that
updates when you tilt the badge. This interacts with some tilt-based applications but it's not as obnoxious as retaining
captouch behavior, we'd expect it to be cool with many users. And yes: The stock *wobbler* application is derived from
this example.
.. code-block:: python

    import bl00mbox
    from st3m.ui import widgets
    from st3m.application import Application

    class WobblingBackgroundBeeper(Application):
        def __init__(self, app_ctx):
            super().__init__(app_ctx)
            self.blm = None
            self.any_playing = False

        def on_enter(self, vm):
            super().on_enter(vm)
            if self.blm is not None:
                try:
                    self.blm.foreground = True
                except bl00mbox.ReferenceError:
                    self.blm = None
            if self.blm is None:
                self.build_synth()

        def on_exit(self):
            super().on_exit()
            if self.any_playing:
                self.blm.background_mute_override = True
            else:
                self.blm.delete()
                self.blm = None

        def build_synth(self):
            self.blm = bl00mbox.Channel("Beeper")
            self.blm.gain_dB += 18.5
            self.mixer = self.blm.new(bl00mbox.plugins.mixer, 10)
            # let's add a global filter
            self.filter = self.blm.new(bl00mbox.plugins.filter)
            self.filter.signals.input << self.mixer.signals.output
            self.filter.signals.reso.value = 25000
            self.blm.signals.line_out << self.filter.signals.output
            for x in range(10):
                self.mixer.signals.input_gain[x].mult = 0
                beep = self.blm.new(bl00mbox.plugins.osc)
                # and make it a bit lower this time
                beep.signals.pitch.tone = x - 36
                beep.signals.output >> self.mixer.signals.input[x]
            self.tilt = widgets.Inclinometer(buffer_len = 8)
            self.tilt.on_enter()

            def synth_callback(ins, delta_ms):
                # note: in python an inner function like this inherits the outer
                # scope so that we can still access "self" in here
                self.tilt.think(ins, delta_ms)
                # note: tilt.pitch describes the aviation angle here, not frequency
                # it's a bit silly ^w^.
                pitch = self.tilt.pitch
                # note: we're not accessing self.tilt.pitch again because it isn't
                # cached internally and we should pay attention to making background
                # callbacks as fast as possible.
                if pitch is not None:
                    self.filter.signals.cutoff.tone = -pitch * 10 + 10

            self.blm.callback = synth_callback

        def think(self, ins, delta_ms):
            super().think(ins, delta_ms)
            self.any_playing = False
            for x in range(10):
                if ins.captouch.petals[x].pressed:
                    self.mixer.signals.input_gain[x].mult = 1
                    self.any_playing = True
                else:
                    self.mixer.signals.input_gain[x].mult = 0

        def get_help(self):
            ret = ("Simple synthesizer, each petal plays a different note. "
                   "Tilt to change filter cutoff frequency. "
                   "If you exit while holding a petal that note continues "
                   "playing in the background to allow for drones.")
            return ret
Performance
-----------
The CPU can only do so much, and this is a hard real time environment: If an audio frame is rendered too slowly,
there will be audible glitches. While different channels can be rendered in parallel on different cores (they're
not right now but hopefully soon, optionally - this could starve other tasks though and is not a magic cure-all),
there are no plans to make any single channel render its plugins in parallel. This gives us a hard upper limit
of using 100% of one core.
We can easily find out how much our application is using in a given state: Go into the System->Settings menu and
activate ftop. This prints a CPU load report on the serial console every 5 seconds (while blocking all micropython
execution for a noticeable time, which is why it's ideally turned off outside of debugging). If your app is running
with no other channel playing in the background, the audio task CPU load directly corresponds to your channel's
performance. Note that this is averaged over the entire 5 second interval, so depending on how much is going on a
CPU load of say 70% may already start "crackling" by producing a few dropouts here and there.
Optimizing CPU load of a given sound includes a lot of trial and error: Different plugins of course cause different
CPU load, but also the same type in a different configuration will perform differently. If you have a single audio
chain and it's too heavy, there's often little choice but to simplify it.
One common issue is excessive polyphony, or how many notes are playable at the same time: Above, we have naively
just tied an oscillator to each petal. For the sake of a simple example this was good enough, but we should ask
ourselves: Do we really want to play all of them at the same time? Isn't it more valuable to have more CPU available
for each note you play, without choking the entire engine including background channels if a user changes hand
position to hold a shoulder button, for example? We can easily implement a more intelligent system in python that
limits the number of voices, or, alternatively, use the ``poly_squeeze`` plugin, or a mix of both - the Violin
app for example reduces its output to a single voice, which also helps with switching between notes. Furthermore,
playing a lot of notes at the same time demands a high headroom and may result in clipping. It really is for the
best to consider a hard limit there.
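As a rough sketch of such a python-side limit (building on the Beeper-style mixer setup from above;
the limit of 3 voices is arbitrary):

.. code-block:: python

    def think(self, ins, delta_ms):
        super().think(ins, delta_ms)
        max_voices = 3  # arbitrary limit for this sketch
        active = 0
        for x in range(10):
            # unmute at most max_voices pressed petals, mute everything else
            if ins.captouch.petals[x].pressed and active < max_voices:
                self.mixer.signals.input_gain[x].mult = 1
                active += 1
            else:
                self.mixer.signals.input_gain[x].mult = 0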
Another technique we can apply is paraphony, or sharing common elements between voices. Our wobbler above is
actually paraphonic: It has 10 oscillators, but they all go through a shared filter. Say our filter cutoff was
controlled by each captouch position instead, so that it would make sense to have 10 filters - the CPU load
of our humble application might very well double (guesstimate)! In that case we could ask ourselves, maybe
a shared filter with the maximum of all cutoffs can fake the job well enough? Or two in parallel, one with
max, one with min?
Another consideration is render tree topology. See - whenever we set the input_gain of a mixer plugin to 0,
the connected oscillator is no longer being rendered to not waste CPU cycles (this can be overridden with
the ``.always_render`` attribute). Plugins that are not reachable from the line output are not rendered at
all. However, this system is not fully automatic: For example, the filter above is always rendered, even
if all inputs of the mixer before it are muted. If this chain was very long we could run into high idle loads,
which creates problems if we have multiple heavy plugin chains and want to only render one at a time: We must
be careful that rendering is properly suppressed for everything we don't need. bl00mbox will probably at some
point provide better automation there, but for the time being it is a good idea to carefully verify that idle
voices are in fact largely not being rendered.
Common issues
-------------
**Bugs in bl00mbox**
bl00mbox has a fair number of known bugs. If something doesn't work as you'd expect, it isn't necessarily your
fault. It might be worthwhile to check the latest documentation of that feature.
**Writing to flash causes audio glitches**
This sadly cannot be avoided due to the bus architecture of the ESP32S3 for external RAM, in which bl00mbox
plugins generally live. Music applications should only save data on the SD card.
**Channel won't get loud enough or distorts**
bl00mbox audio buffers use 16bit data. This is generally a good amount of resolution for a normalized signal,
but we're not playing a CD back here: Synthesis can be sometimes a bit unpredictable in terms of headroom and
volume levels. If you only change the volume at the channel output, you might end up distorting earlier in the
chain, either by clipping or by being too quiet. It is important to keep the intermediate gain levels in mind
as well; you can optimize them by plugging an intermediate output directly into the line out for testing purposes.
Excessive polyphony can make this harder: 10 full scale voices playing without clipping at the same time means
that each may only use 1/10th of the headroom, resulting in a -20dB gain reduction compared to a single voice.
This means 10-voice polyphony normalized to our -15dB target requires compression to avoid clipping.
**How do I just get a plain piano sound?**
bl00mbox is not a soundfont player. It can sort of kind of be squeezed into that role, but it is not its primary
focus for the time being. At this stage it is best to look up (analog) synthesis techniques for the timbre you're
looking for and find out what translates well by trial and error. A wavetable synthesizer as used by the *Violin*
app may help to cut that process a bit shorter for very "clean" sounding instruments, but many have noisy or
disharmonic textures which are best emulated by experimentally determined types of modulation; it is often wise
to look up other people's work.
**All that background synthesis is nice and well, but what if I just wanna record and loop?**
Well, technically there is the sampler plugin, which currently only supports a fixed buffer size, so it's a
tradeoff between max sample length and RAM hogging, which isn't great. That doesn't mean we don't consider this
feature important, but rather that we're taking our time getting it right: There's little point in having each
music application implement its own version of this, but rather we can override the volume up/down buttons to
implement a global looper that all applications may use. This is also why we recommend against using
``st3m.application.override_os_button_volume`` for music applications. We hope to finish this feature soon,
but there are a lot of details to get right, so it will take a little longer to be ready for the public!
**In the example above, what if I want sound to stop when entering the mixer?**
Thing is, right now you can't really do that. If you are familiar with ``st3m.ui.View`` you might ask, why not
just call ``.on_exit()`` when opening the system menu, but unfortunately these methods are typically used for
opening/closing applications that do not use views. It's a regrettable situation, and we will rectify it soon
when we have a clear path on how to resolve the general lack of separation between applications and views; it's
probably just gonna be some extra methods, but it is gonna take some careful planning to unravel this cleanly.
**I don't like this, can't I just use <other micropython audio engine> instead?**
The backend allows for easily adding extra engines and we're happy to take a look if you have a concrete
proposition. It's best to start by opening an issue in the firmware repository so that we can have a look
before anybody sinks any potentially futile work into hooking it up. If you just wanna do it for yourself
regardless, the backend is simple enough to hook extra engines into.
Blinky
======
Let's start cold with two examples to get a feel for the code. The upcoming `Environment` section
will show you how to run them, the `Basics` section after that will explain how the code works.
Auto Blinky
-----------
.. code-block:: python

    from st3m.application import Application
    import leds

    class AutoBlinky(Application):
        def get_help(self):
            return "This app blinks all LEDs and the display automatically."

        def __init__(self, app_ctx):
            super().__init__(app_ctx)
            self.timer_ms = 0
            self.colors = (
                (1,1,0),
                (0,1,1),
                (1,0,1),
            )
            self.active_color = self.colors[0]
            self.blink_time_ms = 500

        def think(self, ins, delta_ms):
            super().think(ins, delta_ms)
            self.timer_ms += delta_ms
            self.timer_ms %= self.blink_time_ms * len(self.colors)
            index = self.timer_ms // self.blink_time_ms
            self.active_color = self.colors[index]
            leds.set_all_rgb(*self.active_color)
            leds.update()

        def draw(self, ctx):
            ctx.rgb(*self.active_color).rectangle(-120, -120, 240, 240).fill()
Note how the active color is not reset when exiting and entering the application: The object does not
get destroyed when the user exits (see ``Application.on_enter()`` in the ``st3m.application`` module).
Captouch Blinky
---------------
.. code-block:: python

    from st3m.application import Application
    import leds

    class CaptouchBlinky(Application):
        def get_help(self):
            context_sensitive_help = (
                "This app changes color of all LEDs and the display "
                "when touching a top petal. "
            )
            context_sensitive_help += f"The current RGB values are {self.active_color}."
            return context_sensitive_help

        def __init__(self, app_ctx):
            super().__init__(app_ctx)
            self.colors = (
                (1,1,0),
                (0,1,1),
                (1,0,1),
                (0,1,0),
                (0,0,1),
            )
            self.active_color = self.colors[0]

        def think(self, ins, delta_ms):
            super().think(ins, delta_ms)
            for x in range(0, 10, 2):
                if self.input.captouch.petals[x].whole.pressed:
                    self.active_color = self.colors[x//2]
            leds.set_all_rgb(*self.active_color)
            leds.update()

        def draw(self, ctx):
            ctx.rgb(*self.active_color).rectangle(-120, -120, 240, 240).fill()
The ``self.input`` object does edge detection here (documented in the ``st3m.input`` module).
Environment
===========
The main programming interface and language for flow3r is Python, or
rather `Micropython <https://micropython.org/>`_, which is a fairly sizeable
subset of Python that can run on microcontrollers.
Good news: if you've ever used Micropython on an ESP32, then you probably
already have all the tools required to get started. However, while the tools to
program the badge might be the same as for stock Micropython on ESP32, our APIs
are quite different.
If you haven't used Micropython, there are plenty of code examples and tutorials
on the internet. Since most rules of Python apply, the vast resources available
for it often apply just as well for many basic operations.
The best way to learn is of course trial and error, so let's learn first how to
try out code on flow3r!
Running applications
--------------------
You don't need to install much to develop applications on flow3r; in fact it
can be done with only a text editor and a file manager by copying your prototypes
into the filesystem. However, we highly recommend a convenience tool that allows
you to run application code directly from your computer. These are the tools we
have tested and know to work:
+---------------+-----------------------+
| Tool | Platforms |
+===============+=======================+
| mpremote_ | Linux, macOS, Windows |
+---------------+-----------------------+
| `Micro REPL`_ | Android |
+---------------+-----------------------+
.. _mpremote: https://docs.micropython.org/en/latest/reference/mpremote.html
.. _`Micro REPL`: https://github.com/Ma7moud3ly/micro-repl
In the rest of these docs we'll use mpremote.
When the badge runs (for example, when you see the main menu), you can connect
it to a PC and it should appear as a serial device. On Linux systems, this
device will usually be called ``/dev/ttyACM0`` (sometimes ``/dev/ttyACM1``).
If it is not that default port, you may need to specify it manually.
For example, if you are on Linux and your flow3r came up as ``/dev/ttyACM1``,
add an ``a1`` after ``mpremote`` for any command, as shown below.
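For example (assuming the device really is ``/dev/ttyACM1``), opening the serial shell
on that port looks like this:

::

    $ mpremote a1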
After connecting your badge and making sure it runs, you should see this output:
::

    $ mpremote
    Connected to MicroPython at /dev/ttyACM0
    Use Ctrl-] or Ctrl-x to exit this shell
    [... logs here... ]
Congratulations, your toolchain setup is now complete! To run the examples from
the `Blinky` section, you simply need to do the following:
1) Save the example as a local file, let's say ``AutoBlinky`` as `example.py`.
2) Add some lines to the end of the file that tell mpremote which class to run.
We always use ``st3m.run.run_app()`` with the class of the application (``AutoBlinky``
in this case). There's other optional arguments that we'll go into later, but for
many basic cases this is sufficient.
.. code-block:: python

    if __name__ == "__main__":
        import st3m.run
        st3m.run.run_app(AutoBlinky)
3) Boot flow3r and let it fully start up into the menu screen.
4) Run ``mpremote run example.py``. The application should now run.
You can now edit `example.py` however you like, exit mpremote with Ctrl-C and run
``mpremote run example.py`` again to immediately try out your changes. This is the
primary development loop on flow3r. There's still a few tools and techniques that
might make your life easier, but this is the central one, and you can feel free to
take a break from reading here to play around with it :D.
*Note: Like all great hardware, flow3r sometimes needs a hard reboot. If mpremote
run stops working, just power cycle it, let it fully boot and try again.*
.. warning::

    **Your flow3r is not showing up under Linux?**

    To let ``mpremote`` work properly, your user needs to have access rights to ttyACM.

    Quick fix: ``sudo chmod a+rw /dev/ttyACM[Your Device Id here]``

    More sustainable fix: Set up a udev rule to automatically allow the logged-in user to access ttyACM:

    1. To use this, add the following to /etc/udev/rules.d/60-extra-acl.rules: ``KERNEL=="ttyACM[0-9]*", TAG+="udev-acl", TAG+="uaccess"``
    2. Reload: ``udevadm control --reload-rules && udevadm trigger``
Using the REPL
--------------
Micropython features a REPL that we can connect to from a computer. This allows
us to type in commands and check their output (and interaction with the hardware)
in realtime. Especially if you are a beginner this is a very easy way to test
snippets in a quick and interactive way.
You can then use any terminal emulator program (like picocom, GNU screen, etc) or
just mpremote to access the badge's runtime logs. Now, if you press Ctrl-C, you will
interrupt the firmware and break into a Python REPL (read-eval-print-loop) prompt:
::

    Traceback (most recent call last):
      File "/flash/sys/main.py", line 254, in <module>
    [... snip ...]
    KeyboardInterrupt:
    MicroPython c48f94151-dirty on 1980-01-01; badge23 with ESP32S3
    Type "help()" for more information.
    >>>
The badge's display will now switch to 'In REPL' to indicate that software
execution has been interrupted and that the badge is waiting for a command over
REPL.
Congratulations! You can now use your badge as a calculator:
::

    >>> 5 + 5
    10
But that's not super interesting. Let's try to turn on some LEDs:
::

    >>> import leds
    >>> leds.set_all_rgb(0.5, 0, 0.5)
    >>> leds.update()
The LEDs should now light up purple - maybe! Depending on which state flow3r was in
before, this change might be very slow. It is very slow, for example, if you call it
directly after booting in the main menu - look again, mayhaps they have changed
by now ;)?
This is because the low level state of flow3r is not reset automatically when you
enter the REPL. During boot the operating system sets the LED driver update speed
to "very very slow" for a gradual fade-in and just leaves it at that until the
user enters an application.
This might seem annoying at first, but it allows for an important debug channel.
If you're not sure if your drivers are set up properly, you can always interrupt
your application to check them. Imagine one petal input isn't working properly:
we could check whether it's misconfigured by interrupting the application while
the bug is present and checking:
::

    >>> import captouch
    >>> conf = captouch.Config.current()
    >>> conf.petals[5].logging
    False
If we do want to set up the default app configuration in the REPL, we can do it
manually like this:
::

    >>> from st3m import application
    >>> application.setup_for_app()
Note that some APIs might not work properly when used raw in the REPL, as
mentioned in their respective documentation.
Transferring files over REPL
----------------------------
You can access the filesystem both from mpremote and of course from micropython itself.
There are further mpremote commands to copy, delete, and otherwise manage files.
::

    $ mpremote
    MicroPython c48f94151-dirty on 1980-01-01; flow3r with ESP32S3
    Type "help()" for more information.
    >>> import os
    >>> os.listdir('/')
    ['flash']
    >>> os.listdir('/flash/sys')
    ['main.py', 'st3m', '.sys-installed']
    >>>

    $ mpremote ls :/flash/sys
    ls :/flash/sys
    0 main.py
    0 st3m
    0 .sys-installed
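For example, copying a local app folder onto the badge could look something like this
(``mydemo`` is just a placeholder folder name for this sketch):

::

    $ mpremote cp -r mydemo :/flash/apps/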
.. _disk mode:
Disk Mode
---------
For larger file transfers (eg. images, sound samples, etc.) you can put the
badge into Disk Mode by selecting ``Settings -> Disk Mode`` in the badge's menu.
You can then select whether to mount the 10MiB internal flash or SD card (if
present) as a pendrive. The selected device will then appear as a pendrive on
your system, and will stay until it is ejected. The serial connection will
disconnect for the duration of the badge being in disk mode.
Disk Mode can also be enabled when the badge is in :ref:`Recovery mode`.
Using the simulator
-------------------
The flow3r badge firmware repository comes with a Python-based simulator which
allows you to run the Python part of :ref:`st3m` on your local computer, using
Python, Pygame and wasmer.
Currently the simulator supports the display, LEDs, the buttons, accelerometer
(in 2D) and some static input values from the gyroscope, temperature sensor and
pressure sensor.
It does **not** support most of the audio APIs. It also does not support
positional captouch APIs.
To set the simulator up, clone the repository and prepare a Python virtual
environment with the required packages:
::

    $ git clone https://git.flow3r.garden/flow3r/flow3r-firmware
    $ cd flow3r-firmware
    $ python3 -m venv venv
    $ venv/bin/pip install pygame requests pymad
    $ venv/bin/pip install wasmer wasmer-compiler-cranelift
.. warning::

    The wasmer python module from PyPI `doesn't work with Python versions 3.10 or 3.11
    <https://github.com/wasmerio/wasmer-python/issues/539>`_. You will get
    ``ImportError: Wasmer is not available on this system`` when trying to run
    the simulator.

    Instead, install our `rebuilt wasmer wheels <https://flow3r.garden/tmp/wasmer-py311/>`_ using
    ::

        venv/bin/pip install https://flow3r.garden/tmp/wasmer-py311/wasmer_compiler_cranelift-1.2.0-cp311-cp311-manylinux_2_34_x86_64.whl
        venv/bin/pip install https://flow3r.garden/tmp/wasmer-py311/wasmer-1.2.0-cp311-cp311-manylinux_2_34_x86_64.whl
You can then run the simulator:
::
$ venv/bin/python sim/run.py
Grey areas near the petals and buttons can be pressed.
The 3-way switches can be controlled with keyboard keys and have a default
mapping of ``1``, ``2``, ``3`` for the left and ``8``, ``9``, ``0`` for the
right switch. This mapping can be changed by copying ``sim/config.py.default``
to ``sim/config.py`` and adjusting it to personal preference.
The simulator's apps live in ``python_payload/apps``; copy your app folder in there
and it will appear in the simulator's menu system.
If you want to start an app directly, simply give its name (the ``[app] -> name`` in
``flow3r.toml``) as an argument:
::
$ venv/bin/python sim/run.py Worms
.. _Fonts:
Fonts
=====
The current selection of fonts is baked into the firmware for use with :ref:`Context<ctx API>`.
Available Fonts
---------------
The following fonts are currently available (previews in size 20):
.. |font0| image:: assets/0.png
.. |font1| image:: assets/1.png
.. |font2| image:: assets/2.png
.. |font3| image:: assets/3.png
.. |font4| image:: assets/4.png
.. |font5| image:: assets/5.png
.. |font6| image:: assets/6.png
.. |font8| image:: assets/8.png
+-------------+----------------------+---------+
| Font Number | Font Name | Preview |
+=============+======================+=========+
| 0 | Arimo Regular | |font0| |
+-------------+----------------------+---------+
| 1 | Arimo Bold | |font1| |
+-------------+----------------------+---------+
| 2 | Arimo Italic | |font2| |
+-------------+----------------------+---------+
| 3 | Arimo Bold Italic | |font3| |
+-------------+----------------------+---------+
| 4 | Camp Font 1 | |font4| |
+-------------+----------------------+---------+
| 5 | Camp Font 2 | |font5| |
+-------------+----------------------+---------+
| 6 | Camp Font 3 | |font6| |
+-------------+----------------------+---------+
| 7 | Material Icons | |
+-------------+----------------------+---------+
| 8 | Comic Mono | |font8| |
+-------------+----------------------+---------+
The Camp Fonts are based on Beon Regular, Saira Stencil One and Questrial Regular.
Material Icons contains Glyphs in the range of U+E000 - U+F23B.
See header files in ``components/ctx/fonts/`` for details.
Basic Usage
-----------
To switch fonts, simply set ``ctx.font`` and refer to the font by full name:
.. code-block:: python
ctx.rgb(255, 255, 255)
ctx.move_to(0, 0)
ctx.font = "Camp Font 1"
ctx.text("flow3r")
To insert one or more icons, use Python ``\u`` escape sequences.
You can look up the code points for icons on `https://fonts.google.com/icons <https://fonts.google.com/icons>`_.
.. code-block:: python
ctx.save()
ctx.rgb(255, 255, 255)
ctx.move_to(0, 0)
ctx.font = "Material Icons"
ctx.text("\ue147 \ue1c2 \ue24e")
ctx.restore()
Adding New Fonts
----------------
To add a new font to the firmware, it must first be converted into the ctx binary format.
ctx provides the tools for this and you can set them up with the following commands:
.. code-block:: bash
git clone https://ctx.graphics/.git/
cd ctx.graphics
./configure.sh
make tools/ctx-fontgen
Now you can use ``ctx-fontgen`` to convert a font:
.. code-block:: bash
./tools/ctx-fontgen /path/to/ComicMono.ttf Comic_Mono latin1 > Comic-Mono.h
Note that the font name is read from the source file directly and not specified by any of the arguments.
You can find it at the end of the header file. In this case: ``#define ctx_font_Comic_Mono_name "Comic Mono"``.
The next step is to copy the header file over into ``components/ctx/fonts`` (and add license headers, etc.).
Once the file is in the fonts directory, it can be added to ``components/ctx/ctx_config.h``.
In there it needs an include and define with a new font number.
The new font is now available in the next firmware build, but the simulator needs a rebuild of the Wasm bundle.
See ``/sim/README.md`` for details.
I2C / Qwiic Expansions
====================================
Introduction
--------------
The flow3r has two footprints for a JST-SH 4-pin connector for adding electronics.
This connector follows the pinout for the I2C Qwiic / Stemma QT standard.
Qwiic was developed by Sparkfun as a simple standard pinout for I2C devices.
**Both Qwiic and Stemma QT are compatible with each other** and follow the same pinout and connector standard. The difference is mostly about level shifting when using 5V I2C hosts.
Both Sparkfun and Adafruit offer a huge amount of little breakout boards that make it very easy to connect new hardware to the flow3r.
You will also find a lot of third parties offering hardware for this standard as it has established itself as a standard I2C breakout.
It is the best way to extend your flow3r with sensors and other hardware modifications.
**The pins available can also be used for GPIO or any other bus** you desire as the ESP32-S3 has a full switch matrix and any peripheral (except Analog) can be routed to any pin.
Pinout and locations
--------------------
The footprint on the inside is populated for you with a connector and is ready to be used.
The footprint on the backside is not populated and you can solder a connector there if you want to use it or solder some wires to it directly.
Keep in mind that both footprints connect to the exact same signals. It's just for convenience to offer access in multiple places.
The pinout is as follows:
::
1. GND
2. 3V3
3. SDA (GPIO 17)
4. SCL (GPIO 45)
See the ESP32-S3 datasheet for more information about what else you can do with these pins. GPIO17 is also Analog capable.
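The pins can also be driven as plain GPIOs from MicroPython. A minimal sketch (note that
this reconfigures the SDA pin, so don't mix it with simultaneous I2C use):

.. code-block:: python

    from machine import Pin

    # GPIO17 is the Qwiic SDA pin; here we simply drive it as a digital output
    pin = Pin(17, Pin.OUT)
    pin.on()
    pin.off()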
Here is a picture of the unpopulated backside footprint with the pinout overlayed:
.. image:: assets/qwiic_pinout_overlay.jpg
:alt: The backside with the pinout overlayed. From left to right: SCL, SDA, 3V3, GND
And here is the excerpt from the schematic. Note that there are no fixed pull-ups soldered by default, so you can use it with other buses without issues.
.. image:: assets/qwiic_schematic.png
And on the inside you can find the populated connector above the ESP32-S3.
You can modify the 3D printed spacer with a bit of sandpaper by either making the battery cable slot larger or by adding a new slot where desired.
Or modify the source files and print your own spacer that accommodates your needs.
.. image:: assets/qwiic_inside.jpg
Software
--------
You can directly access the I2C bus from Micropython as ``I2C(1)``. You need firmware 1.3 or newer for this (older firmware had the wrong pins assigned).
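A quick bus scan from the REPL is an easy way to check that a connected breakout is detected (a minimal sketch):

.. code-block:: python

    from machine import I2C

    i2c = I2C(1)  # Qwiic bus, pre-configured on firmware 1.3 and newer
    print([hex(addr) for addr in i2c.scan()])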
For a simple example see the I2C scanner app that comes with flow3r firmware 1.3.
For how to use I2C in Micropython see the Micropython docs: https://docs.micropython.org/en/latest/library/machine.I2C.html
I2C Scanner app: https://git.flow3r.garden/flow3r/flow3r-firmware/-/blob/main/python_payload/apps/i2c_scanner/__init__.py
There is also an example of how to adapt an existing CircuitPython library to work with the flow3r with the CO2 monitor app available on the app store.
The source code for that can be found here: https://git.flow3r.garden/timonsku/co2-monitor-scd4x
Where to get hardware
---------------------
You can get cables, connectors and hardware breakouts from Sparkfun, Adafruit and many other vendors.
https://www.sparkfun.com/qwiic
https://www.adafruit.com/category/620
You can also easily buy cables and connectors from your favorite online retailer (Ebay, Amazon, Aliexpress etc) for cheap.
https://www.ebay.de/sch/i.html?_nkw=jst-sh+4pin+1.0mm
https://www.aliexpress.com/w/wholesale-jst-sh-4-pin-1.0mm.html
https://www.amazon.de/s?k=%22jst-sh%22+4+pin+1.0mm
**A good search term is "JST-SH 4-pin 1.0mm".** Many online shops will have cheap kits that have both cables and connectors for 5-8€ for a pack of 20-50.
You can also get cables that adapt to breadboard-friendly jumper connectors if you want to hack something together on a breadboard.
The original part number for the SMT PCB connector is **SM04B-SRSS-TB(LF)(SN)** (LCSC C160404) but you also get cheaper clones on LCSC.
There are also vertical versions but they are not fully compatible with the footprint on the flow3r as the mechanical pads are not in the same place.
.. image:: assets/qwiic-cables.jpg
Savefiles
=========
Best practices
^^^^^^^^^^^^^^
Sometimes you want an application to save its state. Following a few simple rules can make your user's life easier:
- **Save to SD card:** Saving to flash not only wears it down, but also produces nasty glitches in the audio because
it blocks hardware bus access to the synth engine's memory. This is not fixable. Because of this, we strongly
discourage applications from ever saving to flash by asking nicely: Don't save to flash please. Unfortunately, many
flow3rs have shipped with bad SD cards that may make your application crash, see the `Troubleshooting` section.
Fortunately, we have a hunch that we can recognize them - the ``st3m.utils.sd_reliable()`` function returns our
best guess.
- **Use standard path for savefile directory:** Your save directory should be located on the SD card in the ``app_data``
directory. The name of your save directory should be your app store account user name and the name of your app repo,
simplified to slug representation and connected with a dash (-). For example, if your username is "Dashie" and your
app repo "Cool App" is hosted at ``https://git.flow3r.garden/dashie/cool-app/``, the standard directory path would be
``app_data/Dashie-cool-app`` (note how the user name didn't get converted to lowercase).
*We will provide a nicer API for this by the next release, bear with us for now and hardcode it manually please (see the sketch after this list)!*
- **Files in the application directory are deleted with the app:** If your savefiles are not that big it's okay to
just keep them. However, if you keep large downloads or similar around they should be saved in the application
directory provided by ``app_ctx`` in ``Application.__init__``.
- **Account for file corruption when parsing files:** You never know when flow3r's power switch is turned off. This
may in rare cases lead to file corruption. Wrapping file opening and parsing in ``try``/``except`` will do the trick
with low effort.
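Here is the sketch mentioned above: a hypothetical helper (not part of st3m yet) that builds the
standard directory path from example values; a real slug routine may need to handle more special
characters:

.. code-block:: python

    def save_dir(app_store_user: str, repo_name: str) -> str:
        # only the repo name is slugified; the user name keeps its capitalization
        slug = repo_name.lower().replace(" ", "-")
        return "/sd/app_data/" + app_store_user + "-" + slug

    print(save_dir("Dashie", "Cool App"))  # -> /sd/app_data/Dashie-cool-app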
Most stock apps we ship don't follow these guidelines yet due to time constraints, but we will migrate them shortly.
We're leading with bad examples. Let's show a good one instead:
Example
^^^^^^^
If you're not familiar with python, this might seem like a lot to juggle, but it can be quite easy. Let's expand the
``CaptouchBlinky`` example from the `Blinky` section to save its last state. The ``json`` module together with python
standard dicts make it easy to create pretty config files:
.. code-block:: python
import json
from st3m.utils import save_file_if_changed, mkdir_recursive
def check_color(col):
# length of col is guaranteed to be 3, no need to check
for val in col:
# type checking: raises ValueError if conversion doesn't work
val = float(val)
if val > 1 or val < 0:
# we could clamp them here instead too if we felt like it
raise ValueError
return col
class PersistentBlinky(CaptouchBlinky):
def __init__(self, app_ctx):
super().__init__(app_ctx)
# repo name: "Persistent Blinky", app store user name: "BlinkyFan"
# because we're saving on the SD card we must prefix it with "/sd/"
self.dirpath = "/sd/app_data/BlinkyFan-persistent-blinky"
self.savefile = "config.json"
self.savepath = self.dirpath + "/" + self.savefile
def on_exit(self):
super().on_exit()
# let's create a dict with rgb values!
save_data = {}
# (this could also just be a list tbh but bear with us)
save_data["red"] = self.active_color[0]
save_data["green"] = self.active_color[1]
save_data["blue"] = self.active_color[2]
try:
# if the save directory doesn't exist we must create it first.
# will raise OSError if SD card is not available.
mkdir_recursive(self.dirpath)
# this creates a nicely formatted string from our dict
save_file_content = json.dumps(save_data, indent = 4)
# it's good practice to not waste write cycles. this
# function checks first if the string is different from
# what is already written to the file and only writes
# if necessary.
save_file_if_changed(self.savepath, save_file_content)
except OSError:
print("Saving savefile failed")
def on_enter(self, vm):
super().on_enter(vm)
try:
# open() raises OSError if the file doesn't exist; json.load
# raises ValueError if decoding fails (desktop python raises
# a different exception type)
with open(self.savepath) as f:
    save_data = json.load(f)
# save_data access will raise KeyError if the dict contains
# no such field
col = (
save_data["red"],
save_data["green"],
save_data["blue"],
)
# check if field value is ok, else raise ValueError
# if things won't crash from corrupted values or maybe
# even types you can simplify this, don't worry :D
self.active_color = check_color(col)
except (OSError, ValueError, KeyError):
# it's nice to see on the REPL if something didn't work
print("Loading savefile failed")
.. _application_programming:
Application Programming
=======================
.. note::

   This section shows higher level application programming concepts. Please first
   consult the :ref:`programming` section on how to get started.
Basics
------
Implementing a responsive user interface on a resource constrained device which
at the same time should also output glitch free audio is not the easiest task in
the world. The flow3r application programming environment tries to make it a bit
easier for you.
There are two major components to running an app on the flow3r: the
:py:class:`Reactor` and one or more :py:class:`Responder` instances.
The Reactor is a component which comes with the flow3r and takes care of all
the heavy lifting for you. It decides when it is time to draw something on the
display and it also gathers the data from a whole bunch of inputs like captouch
or the buttons for you to work with.
A responder is a software component which gets called by the Reactor and is
responsible for reacting to input data and, when asked, drawing something to the screen.
Example 1a: Display something
-------------------------------
Let's have a look at a very simple example involving a responder:
.. code-block:: python
from st3m.reactor import Responder
import st3m.run
class Example(Responder):
def __init__(self) -> None:
pass
def draw(self, ctx: Context) -> None:
# Paint the background black
ctx.rgb(0, 0, 0).rectangle(-120, -120, 240, 240).fill()
# Paint a red square in the middle of the display
ctx.rgb(255, 0, 0).rectangle(-20, -20, 40, 40).fill()
def think(self, ins: InputState, delta_ms: int) -> None:
pass
st3m.run.run_responder(Example())
You can save this example as a Python file (e.g. example.py) and run it using
``mpremote run example.py``. It should display a red square in the middle of
the display and do nothing else.
You might already be able to guess the meaning of the three things that a responder
has to implement:
+---------------+------------------------------------------------------------+
| Function | Meaning |
+===============+============================================================+
| `__init__()` | Called once before any of the other methods is run. |
+---------------+------------------------------------------------------------+
| `draw()` | Called each time the display should be drawn. |
+---------------+------------------------------------------------------------+
| `think()` | Called regularly with the latest input and sensor readings |
+---------------+------------------------------------------------------------+
It's important to note that none of these methods is allowed to take a significant
amount of time if you want the user interface of the flow3r to feel snappy. You
also need to make sure that each time `draw()` is called, everything you want
to show is drawn again. Otherwise you will experience strange flickering or other
artifacts on the screen.
Example 1b: React to input
--------------------------
If we want to react to the user, we can use the :py:class:`InputState` which got
handed to us. In this example we look at the state of the app (by default left)
shoulder button. The values for buttons contained in the input state are one of
``InputButtonState.PRESSED_LEFT``, ``PRESSED_RIGHT``, ``PRESSED_DOWN``,
``NOT_PRESSED`` - same values as in the low-level
:py:mod:`sys_buttons`.
.. code-block:: python
from st3m.reactor import Responder
import st3m.run
class Example(Responder):
def __init__(self) -> None:
self._x = -20
def draw(self, ctx: Context) -> None:
# Paint the background black
ctx.rgb(0, 0, 0).rectangle(-120, -120, 240, 240).fill()
# Paint a red square in the middle of the display
ctx.rgb(255, 0, 0).rectangle(self._x, -20, 40, 40).fill()
def think(self, ins: InputState, delta_ms: int) -> None:
direction = ins.buttons.app
if direction == ins.buttons.PRESSED_LEFT:
self._x -= 1
elif direction == ins.buttons.PRESSED_RIGHT:
self._x += 1
st3m.run.run_responder(Example())
Try it: when you run this code, you can move the red square using the app (by
default left) shoulder button.
Example 1c: Taking time into consideration
------------------------------------------
The previous example moved the square around, but could you tell how fast it moved across
the screen? What if you wanted it to move at an exact speed, say 20 pixels per second
to the left and 40 pixels per second to the right?
The `think()` method has an additional parameter we can use for this: `delta_ms`. It
represents the time which has passed since the last call to `think()`.
.. code-block:: python
from st3m.reactor import Responder
import st3m.run
class Example(Responder):
def __init__(self) -> None:
self._x = -20.
def draw(self, ctx: Context) -> None:
# Paint the background black
ctx.rgb(0, 0, 0).rectangle(-120, -120, 240, 240).fill()
# Paint a red square in the middle of the display
ctx.rgb(255, 0, 0).rectangle(self._x, -20, 40, 40).fill()
def think(self, ins: InputState, delta_ms: int) -> None:
direction = ins.buttons.app # -1 (left), 1 (right), 2 (down) or 0 (not pressed)
if direction == ins.buttons.PRESSED_LEFT:
self._x -= 20 * delta_ms / 1000
elif direction == ins.buttons.PRESSED_RIGHT:
self._x += 40 * delta_ms / 1000
st3m.run.run_responder(Example())
This becomes important if you need exact timings in your application,
as the Reactor makes no explicit guarantee about how often `think()` will
be called. Currently we are shooting for once every 20 milliseconds, but if something in the system
takes a bit longer to process, this number can change from one call to the next.
Example 1d: Automatic input processing
--------------------------------------
Working on the bare state of the buttons and the captouch petals can be cumbersome and error prone.
The flow3r application framework gives you a bit of help in the form of the :py:class:`InputController`,
which processes an input state and gives you higher level information about what is happening.
The `InputController` contains multiple :py:class:`Pressable` sub-objects, for
example the app/OS buttons are available as following attributes on the
`InputController`:
+-----------------------------------+--------------------------+
| Attribute on ``InputController``  | Meaning                  |
+===================================+==========================+
| ``.buttons.app.left`` | App button, pushed left |
+-----------------------------------+--------------------------+
| ``.buttons.app.middle`` | App button, pushed down |
+-----------------------------------+--------------------------+
| ``.buttons.app.right`` | App button, pushed right |
+-----------------------------------+--------------------------+
| ``.buttons.os.left`` | OS button, pushed left |
+-----------------------------------+--------------------------+
| ``.buttons.os.middle`` | OS button, pushed down |
+-----------------------------------+--------------------------+
| ``.buttons.os.right`` | OS button, pushed right |
+-----------------------------------+--------------------------+
And each `Pressable` in turn contains the following attributes, all of which are
valid within the context of a single `think()` call:
+----------------------------+--------------------------------------------------------------------+
| Attribute on ``Pressable`` | Meaning |
+============================+====================================================================+
| ``.pressed`` | Button has just started being pressed, ie. it's a Half Press down. |
+----------------------------+--------------------------------------------------------------------+
| ``.down`` | Button is being held down. |
+----------------------------+--------------------------------------------------------------------+
| ``.released`` | Button has just stopped being pressed, ie. it's a Half Press up. |
+----------------------------+--------------------------------------------------------------------+
| ``.up`` | Button is not being held down. |
+----------------------------+--------------------------------------------------------------------+
The following example shows how to properly react to single button presses without having to
think about what happens if the user presses the button for a long time. It uses the `InputController`
to detect single button presses and switches between showing a circle (by drawing a 360 deg arc) and
a square.
.. code-block:: python
from st3m.reactor import Responder
from st3m.input import InputController
from st3m.utils import tau
import st3m.run
class Example(Responder):
def __init__(self) -> None:
self.input = InputController()
self._x = -20.
self._draw_rectangle = True
def draw(self, ctx: Context) -> None:
# Paint the background black
ctx.rgb(0, 0, 0).rectangle(-120, -120, 240, 240).fill()
# Paint a red square or circle in the middle of the display
if self._draw_rectangle:
ctx.rgb(255, 0, 0).rectangle(self._x, -20, 40, 40).fill()
else:
ctx.rgb(255, 0, 0).arc(self._x, -20, 40, 0, tau, 0).fill()
def think(self, ins: InputState, delta_ms: int) -> None:
self.input.think(ins, delta_ms) # let the input controller do its magic
if self.input.buttons.app.middle.pressed:
self._draw_rectangle = not self._draw_rectangle
if self.input.buttons.app.left.pressed:
self._x -= 20 * delta_ms / 1000
elif self.input.buttons.app.right.pressed:
self._x += 40 * delta_ms / 1000
st3m.run.run_responder(Example())
Managing multiple views
----------------------------------------
If you want to write a more advanced application you probably also want to display more than
one screen (or view as we call them).
With just the Responder class this can become a bit tricky as it never knows when it is visible and
when it is not. It also doesn't directly allow you to launch a new screen.
To help you with that you can use a :py:class:`View` instead. It can tell you when
it becomes visible, when it is about to become inactive (invisible) and you can
also use it to bring a new screen or widget into the foreground or remove it
again from the screen.
Example 2a: Managing two views
--------------------------------
In this example we use a basic `View` to switch between two different screens using a button. One screen
shows a red square, the other one a green square. You can of course put any kind of complex processing
into the two different views. We make use of an `InputController` again to handle the button presses.
.. code-block:: python
from st3m.input import InputController
from st3m.ui.view import View
import st3m.run
class SecondScreen(View):
def __init__(self) -> None:
self.input = InputController()
self._vm = None
def on_enter(self, vm: Optional[ViewManager]) -> None:
self._vm = vm
# Ignore the button which brought us here until it is released
self.input._ignore_pressed()
def draw(self, ctx: Context) -> None:
# Paint the background black
ctx.rgb(0, 0, 0).rectangle(-120, -120, 240, 240).fill()
# Green square
ctx.rgb(0, 255, 0).rectangle(-20, -20, 40, 40).fill()
def think(self, ins: InputState, delta_ms: int) -> None:
self.input.think(ins, delta_ms) # let the input controller do its magic
# No need to handle returning back to Example on button press - the
# flow3r's ViewManager takes care of that automatically.
class Example(View):
def __init__(self) -> None:
self.input = InputController()
self._vm = None
def draw(self, ctx: Context) -> None:
# Paint the background black
ctx.rgb(0, 0, 0).rectangle(-120, -120, 240, 240).fill()
# Red square
ctx.rgb(255, 0, 0).rectangle(-20, -20, 40, 40).fill()
def on_enter(self, vm: Optional[ViewManager]) -> None:
self._vm = vm
self.input._ignore_pressed()
def think(self, ins: InputState, delta_ms: int) -> None:
self.input.think(ins, delta_ms) # let the input controller do its magic
if self.input.buttons.app.middle.pressed:
self._vm.push(SecondScreen())
st3m.run.run_view(Example())
Try it using `mpremote`. Pressing the app button down switches between the two views. To prevent the
still pressed button from immediately closing `SecondScreen` we make use of a special method of the
`InputController` which hides the pressed button from the view until it is released again.
Example 2b: Easier view management
----------------------------------
The above code is so universal that we provide a special view which takes care
of this boilerplate: :py:class:`BaseView`. It integrates a local
`InputController` on ``self.input`` and keeps a copy of the :py:class:`ViewManager`
which caused the View to be entered on ``self.vm``.
Here is our previous example rewritten to make use of `BaseView`:
.. code-block:: python
from st3m.ui.view import BaseView
import st3m.run
class SecondScreen(BaseView):
def __init__(self) -> None:
# Remember to call super().__init__() if you implement your own
# constructor!
super().__init__()
def on_enter(self, vm: Optional[ViewManager]) -> None:
# Remember to call super().on_enter() if you implement your own
# on_enter!
super().on_enter(vm)
def draw(self, ctx: Context) -> None:
# Paint the background black
ctx.rgb(0, 0, 0).rectangle(-120, -120, 240, 240).fill()
# Green square
ctx.rgb(0, 255, 0).rectangle(-20, -20, 40, 40).fill()
class Example(BaseView):
def draw(self, ctx: Context) -> None:
# Paint the background black
ctx.rgb(0, 0, 0).rectangle(-120, -120, 240, 240).fill()
# Red square
ctx.rgb(255, 0, 0).rectangle(-20, -20, 40, 40).fill()
def think(self, ins: InputState, delta_ms: int) -> None:
super().think(ins, delta_ms) # Let BaseView do its thing
if self.input.buttons.app.middle.pressed:
self.vm.push(SecondScreen())
st3m.run.run_view(Example())
Writing an application for the menu system
------------------------------------------
All fine and good, you were able to write an application that you can run with `mpremote`,
but certainly you also want to run it from flow3r's menu system.
Let's introduce the final class you should actually be using for application development:
:py:class:`Application`. It builds upon `BaseView` (so you still have access to
`self.input` and `self.vm`) but additionally is made aware of an
:py:class:`ApplicationContext` on startup and can be registered into a menu.
Here is our previous code changed to use `Application` for the base of its main view:
.. code-block:: python
from st3m.application import Application, ApplicationContext
from st3m.ui.view import BaseView, ViewManager
from st3m.input import InputState
from ctx import Context
import st3m.run
from typing import Optional
class SecondScreen(BaseView):
def draw(self, ctx: Context) -> None:
# Paint the background black
ctx.rgb(0, 0, 0).rectangle(-120, -120, 240, 240).fill()
# Green square
ctx.rgb(0, 255, 0).rectangle(-20, -20, 40, 40).fill()
class MyDemo(Application):
def __init__(self, app_ctx: ApplicationContext) -> None:
super().__init__(app_ctx)
# Ignore the app_ctx for now.
def draw(self, ctx: Context) -> None:
# Paint the background black
ctx.rgb(0, 0, 0).rectangle(-120, -120, 240, 240).fill()
# Red square
ctx.rgb(255, 0, 0).rectangle(-20, -20, 40, 40).fill()
def think(self, ins: InputState, delta_ms: int) -> None:
super().think(ins, delta_ms) # Let Application do its thing
if self.input.buttons.app.middle.pressed:
self.vm.push(SecondScreen())
if __name__ == '__main__':
# Continue to make runnable via mpremote run.
st3m.run.run_view(MyDemo(ApplicationContext()))
To add the application to the menu we are missing one more thing: a `flow3r.toml`
file which describes the application so flow3r knows where to put it in the menu system.
Together with the Python code this file forms a so called bundle
(see also :py:class:`BundleMetadata`).
::
[app]
name = "My Demo"
menu = "Apps"
[entry]
class = "MyDemo"
[metadata]
author = "You :)"
license = "pick one, LGPL/MIT maybe?"
url = "https://git.flow3r.garden/you/mydemo"
Save this as `flow3r.toml` together with the Python code as `__init__.py` in a folder (name doesn't matter)
and put that folder into the `apps` folder on your flow3r (if there is no `apps` folder visible,
there might be an `apps` folder in the `sys` folder). Restart the flow3r and it should pick up your
new application.
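If you prefer the REPL over Disk Mode, such a transfer could look like this (assuming the
``apps`` folder lives directly under ``/flash``; adjust the path if it is under ``/flash/sys``
on your badge):

::

    $ mpremote mkdir :/flash/apps/mydemo
    $ mpremote cp flow3r.toml :/flash/apps/mydemo/flow3r.toml
    $ mpremote cp __init__.py :/flash/apps/mydemo/__init__.py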
Distributing applications
-------------------------
*TODO*
Using the simulator
-------------------
The flow3r badge firmware repository comes with a Python-based simulator which
allows you to run the Python part of :ref:`st3m` on your local computer, using
Python, Pygame and wasmer.
Currently the simulator supports the display, LEDs, the buttons and some static
input values from the accelerometer, gyroscope, temperature sensor and pressure
sensor.
It does **not** support any audio API, and in fact currently doesn't even stub
out the relevant API methods, so it will crash when attempting to run any Music
app. It also does not support positional captouch APIs.
To set the simulator up, clone the repository and prepare a Python virtual
environment with the required packages:
::
$ git clone https://git.flow3r.garden/flow3r/flow3r-firmware
$ cd flow3r-firmware
$ python3 -m venv venv
$ venv/bin/pip install pygame wasmer wasmer-compiler-cranelift
*TODO: set up a pyproject/poetry/... file?*
You can then run the simulator:
::
$ venv/bin/python sim/run.py
Grey areas near the petals and buttons can be pressed.
The simulator's apps live in `python_payload/apps`; copy your app folder in there
and it will appear in the simulator's menu system.
*TODO: make simulator directly run a bundle on startup when requested*
.. _assembly:
Assembly
========
We have a `video guide showing how to assemble the badge <https://media.ccc.de/v/camp2023-101-the-flow3r-badge-assembly-i>`_. For those who prefer a text version, keep reading.
Check what you've got
---------------------
Your flow3r badge should've come in a brown paper bag, in which you will find:
1. The top and bottom PCBs connected together
2. A black plastic spacer
3. A round LCD display
4. A blue battery
5. A metal battery cover
6. Two speakers
7. A bag containing: 6 M3 screws, 6 plastic feet, an adhesive sheet, an Allen wrench (Inbus), sandpaper
8. A lanyard
9. An instruction booklet
10. A 3.5 mm aux cable (to be used for badgelink/badgenet)
Prepare the top PCB
-------------------
Make sure the power switch of the badge is turned off (switch towards the badge center). Be careful with the switch, it is not very sturdy and breaks easily.
Disconnect the top (pink, smaller) and bottom (white, larger) PCBs. Put the
bottom PCB aside for now.
Take the screen, and remove its protective film if you want. Then, install the
screen onto the top PCB - first by connecting the flex cable of the screen to
the corresponding connector on the board, then by seating the screen into the
hole in the middle of the top PCB.
If you want, you can use a ring of adhesive from the adhesive sheet to glue the
display to the top PCB, but this isn't required.
Take the spacer and look at it closely. You'll see one side has little notches
into which the LEDs on the top PCB go. The other side has 'speakers' written on
it. Now try to mate the spacer with the top PCB. It will only go in
correctly in one of the five possible orientations. Check the distance between
the five metal standoffs on the top PCB and the spacer - it should be even and
everything should align correctly.
Take the two speakers, and for each of them take the white adhesive protector
off and glue them into the two spots on the spacer labeled 'speakers'. They should stick
onto the PCB, with the cables facing each other in the middle of the badge.
Once the speakers are glued in, take off the spacer and route the speaker wires
to the two connectors on the PCB. The cables should go around the display. Put
the spacer back on top and make sure it mates correctly with the top PCB, and
that it doesn't catch the speaker wires.
Prepare the bottom PCB
----------------------
Put the top PCB aside and take the bottom PCB. It has two sides: the one with
components is the side that will mate with the top PCB, 'through' the spacer.
Take the battery and connect it to the battery connector on the bottom PCB.
Route the cable through the notch on the PCB to the back of the board, where the
battery will live.
Option A Direct Attachment:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Take a round piece of adhesive from the adhesive sheet and use it to glue the
battery to the back. Use the metal battery cover to make sure the battery is in
the right spot on the back of the PCB.
Option B Replaceable Attachment:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Take a round piece of adhesive from the adhesive sheet and use it to glue the
battery to the metal battery cover. Make sure the battery is oriented correctly
for the assembly of the cover.
Mating the PCBs
---------------
Now it's time to try to assemble the two PCBs together with the spacer. This can
be a bit tricky, and here are some general tips we've found useful in doing
this:
1. Look through the holes on the back PCB, making sure you see the threads of
the standoffs of the top PCB.
2. Pay extra attention to the speaker and battery cables. Especially the
battery cable! It has a little channel to route through in the spacer, make
sure it doesn't get caught in between the spacer and the bottom PCB.
3. Ensure the two board-to-board connectors are aligned. You can stare through
the side of the PCB sandwich to check if things look okay.
Take five screws and the Allen wrench and screw the two PCBs together, along
with the metal battery cover on the bottom. Be *very* gentle with the torque -
overtightening the screws can rip off the standoffs on the top PCB!
Of course, if you're not using the badge with a battery, feel free to skip the
battery cover.
If you need help with routing the battery cable, you can try using the little
piece of sandpaper we included in the small bag.
Finishing Touches
-----------------
Thread the lanyard end bits through the top two holes in the bottom PCB
(closest to the USB port, near the edge of the board).
Take the little rubber feet from the small bag and glue them to the bottom PCB
on the petals that do not have a screw head in them.
.. _bl00mbox:
bl00mbox
==========
bl00mbox is a modular audio engine designed for the flow3r badge. It is
suitable for live coding and is best explored in a REPL for the time being.
Upcoming features
-----------------
(in no specific order)
1) Expose hardware such as captouch and IMU as pseudo-signals that plugins can subscribe to. This frees the repl for parameter manipulation while the backend takes care of the playing surface, neat for live coding.
2) Stepped value naming
Patches
-------------
In bl00mbox all sound sources live in a channel. This allows for easy
application and patch management. Ideally the OS should provide each application
with a channel instance, but you can also spawn one directly:
.. code-block:: pycon
>>> import bl00mbox
# get a channel
>>> blm = bl00mbox.Channel()
# set channel volume
>>> blm.volume = 5000
The easiest way to get sound is to use patches. These are "macroscopic" units
and can often be used without much thinking:
.. code-block:: pycon
# no enter here, press tab instead for autocomplete to see patches
>>> bl00mbox.patches.
# create a patch instance
>>> tiny = blm.new(bl00mbox.patches.tinysynth_fm)
# connect sound output to mixer of the channel
>>> tiny.signals.output = blm.mixer
# play it!
>>> tiny.signals.trigger.start()
# try autocomplete here too!
>>> tiny.
# patches come with very individual parameters!
>>> tiny.signals.waveform = 0
>>> tiny.signals.trigger.start()
Plugins
----------
We can inspect the patch we created earlier:
.. code-block:: pycon
>>> tiny
[patch] tinysynth_fm
[plugin 32] osc_fm
output [output]: 0 => input in [plugin 34] ampliverter
pitch [input/pitch]: 18367 / 0.0 semitones / 440.0Hz
waveform [input]: -1
lin_fm [input]: 0 <= output in [plugin 35] osc_fm
[plugin 33] env_adsr
output [output]: 0 => gain in [plugin 34] ampliverter
phase [output]: 0
input [input]: 32767
trigger [input/trigger]: 0
attack [ms] [input]: 20
decay [ms] [input]: 1000
sustain [ms] [input]: 0
release [input]: 100
gate [input]: 0
[plugin 34] ampliverter
output [output]: 0 ==> [channel mixer]
input [input]: 0 <= output in [plugin 32] osc_fm
gain [input]: 0 <= output in [plugin 33] env_adsr
bias [input]: 0
[plugin 35] osc_fm
output [output]: 0 => lin_fm in [plugin 32] osc_fm
pitch [input/pitch]: 21539 / 15.86 semitones / 1099.801Hz
waveform [input]: 1
lin_fm [input]: 0
The patch is actually composed of plugins and connections! Plugins are atomic signal processing
units. Each plugin has signals that can be connected to other signals. Signals can have different
properties that are listed behind their name in square brackets. For starters, each signal is
either an input or output. Connections always happen between an input and an output. Outputs
can fan out to multiple inputs, but inputs can only receive data from a single output. If no
output is connected to an input, it has a static value.
.. note::
A special case is the channel mixer (an [input] signal) which only fakes
being a bl00mbox signal and can accept multiple outputs.
Let's play around with that a bit more and create some fresh unbothered plugins:
.. code-block:: pycon
# use autocomplete to see plugins
>>> bl00mbox.plugins.
# print details about specific plugin
>>> bl00mbox.plugins.ampliverter
# create a new plugin
>>> osc = blm.new(bl00mbox.plugins.osc_fm)
>>> env = blm.new(bl00mbox.plugins.env_adsr)
You can inspect properties of the new plugins just as with a patch - in fact, many patches simply print
all their contained plugins and maybe some extra info (but that doesn't have to be the case and is up
to the patch designer).
.. note::
As of now patch designers can hide plugins within the internal structure however they like and
you kind of have to know where to find stuff. We'll come up with a better solution soon!
.. code-block:: pycon
# print general info about plugin
>>> osc
[plugin 36] osc_fm
output [output]: 0
pitch [input/pitch]: 18367 / 0.0 semitones / 440.0Hz
waveform [input]: -16000
lin_fm [input]: 0
# print info about a specific plugin signal
>>> env.signals.trigger
trigger [input/trigger]: 0
We can connect signals by using the "=" operator. The channel provides its own [input] signal for routing
audio to the audio outputs. Let's connect the oscillator to it:
.. code-block:: pycon
# assign an output to an input...
>>> env.signals.input = osc.signals.output
# ...or an input to an output!
>>> env.signals.output = blm.mixer
Earlier we saw that env.signals.trigger is of type [input/trigger]. The [trigger] type comes with a special
function to start an event:
.. code-block:: pycon
# you should hear something when calling this!
>>> env.signals.trigger.start()
If a signal is an input you can directly assign a value to it. Some signal types come with special setter
functions, for example [pitch] types support multiple abstract input concepts:
.. code-block:: pycon
# assign raw value to an input signal
>>> env.signals.sustain = 16000
# assign an abstract value to a [pitch] with signal type specific setters
>>> osc.signals.pitch.freq = 220
>>> osc.signals.pitch.tone = "Gb4"
Raw signal values generally range from -32767..32767. Since sustain is nonzero now, the tone doesn't
automatically stop after calling .start():
.. code-block:: pycon
# plays forever...
>>> env.signals.trigger.start()
# ...until you call this!
>>> env.signals.trigger.stop()
Channels
--------
As mentioned earlier all plugins live inside of a channel. It is up to bl00mbox to decide
which channels to skip and which ones to render. In this instance bl00mbox has 32 channels,
and we can get them individually:
.. code-block:: pycon
# returns specific channel
>>> chan_one = bl00mbox.Channel(1)
>>> chan_one
[channel 1: shoegaze] (foreground)
volume: 3000
plugins: 18
[channel mixer] (1 connections)
output in [plugin 1] lowpass
We have accidentally grabbed the channel used by the shoegaze application! Each application
should have its own channel(s), so in order to get a free one we'll request one from the
backend by leaving out the number. We can also provide a name for the new channel instead.
.. note::
Do not use .Channel(<int>) in application code, it's for REPL purposes only. Each
application manages its own channel(s), so another application might clear out your
plugins, drag down your performance or cause other kinds of nasty interference. In fact,
only .Channel(<string>) is allowed in the current CI of flow3r to force
applications to name their channels.
.. code-block:: pycon
# returns free or garbage channel
>>> chan_free = bl00mbox.Channel("hewwo")
>>> chan_free
[channel 3: hewwo] (foreground)
volume: 3000
plugins: 0
[channel mixer] (0 connections)
In case there's no free channel left you get channel 31, the garbage channel. It behaves like
any other channel but has a high chance of being cleared by other applications; more on that later.
Channels accept volume values from 0-32767. This can be used to mix different sounds together; however,
there is also an auto-foregrounding mechanism that we need to be aware of before doing that. When we requested
a free channel, bl00mbox automatically moved it to the foreground. Let's look at channel 1 again:
.. code-block:: pycon
>>> chan_one
[channel 1: shoegaze]
...
Note that the (foreground) marker has disappeared. This means no audio from channel 1 is rendered at
the moment, but it is still in memory and ready to be used at any time. We have several methods of
doing so:
.. code-block:: pycon
# mark channel as foregrounded manually
>>> chan_one.foreground = True
>>> chan_one
[channel 1: shoegaze] (foreground)
...
>>> chan_free
[channel 3: hewwo]
...
# override the background mute for a channel;
# chan_free is always rendered now
>>> chan_free.background_mute_override = True
>>> chan_one
[channel 1: shoegaze] (foreground)
...
>>> chan_free
[channel 3]
# interact with channel to automatically pull it
# into foreground
>>> chan_free.new(bl00mbox.plugins.osc_fm)
>>> chan_one
[channel 1: shoegaze]
...
>>> chan_free
[channel 3: hewwo] (foreground)
What constitutes a channel interaction for auto channel foregrounding is a bit in flux at this point
and generally unreliable. For applications it is best to mark the channel manually when using it. When
exiting, an application should free the channel, which automatically clears all plugins. A channel should
no longer be used after freeing:
.. code-block:: pycon
# this clears all plugins and sets the internal "free" marker to zero
>>> chan_one.free = True
# good practice to not accidentially use a free channel
>>> chan_one = None
Some other miscellaneous channel operations, mostly for live coding:
.. code-block:: pycon
# drop all plugins
>>> chan_free.clear()
# show all non-free channels
>>> bl00mbox.Channels.print_overview()
[channel 3: hewwo] (foreground)
volume: 3000
plugins: 0
[channel mixer] (0 connections)
Radspa signal types
------------------------
Radspa is a C plugin format that all bl00mbox plugins are written in. Its main feature is that all signals
are expressed in the same manner, so that every input->output connection is valid. This means that some
decoding is sometimes necessary to interpret real world values. While the REPL informs you of these
quanta, and helper functions such as .start() or .dB() help with first contact, some understanding is
helpful when interfacing these signals with other signals. The data is represented as an int16_t stream.
[pitch] provides a logarithmic frequency input. A value of 18367 represents A440, going up by 1 represents
0.5 cent or 1/2400 octaves or a factor of 2^(1/2400). Special methods: .tone can be set to (float) semitones
distance from A440 or a note name such as "F#4", .freq can be set to a value in Hz
[gain] provides a linear volume input. A value of 4096 represents unity gain. Special methods: .mult is linear
and represents unity gain as 1, .dB is 20*log_10(x) and represents unity gain as 0.
[trigger] provides an input for note start/stop events. A start event with a given velocity (midi term, think
loudness, 1..32767) from a stopped state is encoded by a permanent signal change to the velocity value. A
restart from this "running" state is encoded as permanently flipping the signal sign, i.e to [-1..-32676] and
back to [1..32767] on the next restart. A change from nonzero to zero encodes a signal stop. Note: This API is
still subject to change.
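As a worked example of the [pitch] quantization described above, here is a hypothetical helper (not part
of the bl00mbox API) that converts between Hz and raw signal values:

.. code-block:: python

    import math

    A440_RAW = 18367          # raw value representing A440
    STEPS_PER_OCTAVE = 2400   # one raw step = 1/2400 octave = 0.5 cent

    def freq_to_raw(freq_hz: float) -> int:
        return round(A440_RAW + STEPS_PER_OCTAVE * math.log2(freq_hz / 440.0))

    def raw_to_freq(raw: int) -> float:
        return 440.0 * 2 ** ((raw - A440_RAW) / STEPS_PER_OCTAVE)

    print(freq_to_raw(220))    # one octave below A440 -> 15967
    print(raw_to_freq(21539))  # ~1099.8 Hz, matching the osc_fm printout above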
Example 1: Auto bassline
------------------------
.. code-block:: pycon
>>> import bl00mbox
>>> blm = bl00mbox.Channel()
>>> blm.volume = 10000
>>> osc1 = blm.new(bl00mbox.plugins.osc_fm)
>>> env1 = blm.new(bl00mbox.plugins.env_adsr)
>>> env1.signals.output = blm.mixer
>>> env1.signals.input = osc1.signals.output
>>> osc2 = blm.new(bl00mbox.plugins.osc_fm)
>>> env2 = blm.new(bl00mbox.plugins.env_adsr)
>>> env2.signals.input = osc2.signals.output
>>> env2.signals.output = osc1.signals.lin_fm
>>> env1.signals.sustain = 0
>>> env2.signals.sustain = 0
>>> env1.signals.attack = 10
>>> env2.signals.attack = 100
>>> env1.signals.decay = 800
>>> env2.signals.decay = 800
>>> osc1.signals.pitch.tone = -12
>>> osc2.signals.pitch.tone = -24
>>> osc3 = blm.new(bl00mbox.plugins.osc_fm)
>>> osc3.signals.waveform = 0
>>> osc3.signals.pitch.tone = -100
>>> osc3.signals.output = env1.signals.trigger
>>> osc3.signals.output = env2.signals.trigger
>>> osc4 = blm.new(bl00mbox.plugins.osc_fm)
>>> osc4.signals.waveform = 32767
>>> osc4.signals.pitch.tone = -124
>>> amp1 = blm.new(bl00mbox.plugins.ampliverter)
>>> amp1.signals.input = osc4.signals.output
>>> amp1.signals.bias = 18367 - 2400
>>> amp1.signals.gain = 300
>>> amp1.signals.output = osc1.signals.pitch
>>> amp2 = blm.new(bl00mbox.plugins.ampliverter)
>>> amp2.signals.input = amp1.signals.output
>>> amp2.signals.bias = - 2400
>>> amp2.signals.gain = 31000
>>> amp2.signals.output = osc2.signals.pitch
>>> osc2.signals.output = blm.mixer
.. include:: <isonum.txt>
Configuration
=============
System
------
Settings
^^^^^^^^
Menu for setting various system parameters.
- ``WiFi``: Enter SSID and password here to connect to WiFi. Note: WiFi consumes a lot of system resources, if you feel like an application is struggling try turning off WiFi. Petal 0 toggles WiFi, a bottom bar shows you the status. If WiFi is active and networks have been found, they are shown in a list. To connect, select one with the app button. The keyboard works similar to T9, with multiple presses on the top petals selecting characters from their displayed list with a timeout and the bottom petals performing additional state switching and text operations. Confirm by pressing the app button down. If the connection is saved, it is shown in yellow, while connecting in blue, and when connected in green.
- ``Enable WiFi on Boot``: Will attempt to connect to known WiFi networks at boot time.
- ``Show Icons``: Displays battery voltage and USB connection status overlay in menu screens.
- ``Swap buttons``: Use right button as app button and left button as os button instead of the other way around.
- ``Touch OS``: Activate captouch navigation in the system menus. Open help in a system menu while active for navigation details.
- ``Show FPS``: Displays FPS overlay.
- ``Debug: ftop``: Prints a task cpu load and memory report every 5 seconds on the USB serial port.
- ``Touch Overlay``: If a petal is pressed the positional output is displayed in an overlay.
- ``Restore Defaults``: Restores default settings.
A settings file with more options including headphone and speaker max/min volume and volume
adjust step is on the flash filesystem at ``/settings.json``.
Graphics Mode
^^^^^^^^^^^^^
Various graphics settings. If ``lock`` is enabled applications can not override these,
else they can set it to their individual preferences at runtime.
Get Apps
^^^^^^^^
Enter the app store. Requires WiFi connection.
Disk Mode (Flash)
^^^^^^^^^^^^^^^^^
Make the flash filesystem accessible as a block device via USB. Reboots on exit.
Disk Mode (SD)
^^^^^^^^^^^^^^^^^
Make the SD card filesystem accessible as a block device via USB. Reboots on exit.
Yeet local changes
^^^^^^^^^^^^^^^^^^
Restores the python payload to the state of the last firmware update and reboots. This excludes
settings and files not present in the original payload.
Reboot
^^^^^^
Reboot flow3r.
Setting nick and pronouns
-------------------------
You can navigate to Badge |rarr| Nick to display your nick and pronouns. If
your nick is ``flow3r``, and you have no pronouns, congratulations! You're
ready to go. Otherwise, you'll have to connect your badge to a computer and
edit a file to change your nick and pronouns.
From the main menu, navigate to System |rarr| Disk Mode (Flash). Connect your
badge to a computer, and it will appear as a mass storage device (a.k.a.
pendrive). Open the file ``nick.json`` in a text editor and change your nick,
pronouns, font sizes for nick and pronouns, and whatever else you wish. Please
note that ``pronouns`` is a list, and should be formatted as such, for example:
``"pronouns": ["aa/bb", "cc/dd"],``
For the ``nick.json`` file to appear, you must have started the Nick app at
least once.
Use ``"color": "0xffffff",`` to color your name and pronouns.
Use ``"mode": "1",`` to use a different animation mode rotating your nick based on badge orientation.
When you're done editing, unmount/eject the badge from your computer
(``umount`` on Linux is enough) and press the OS shoulder button (right shoulder unless swapped in
settings) to exit Disk Mode. Then, go to Badge |rarr| Nick to see your changes!
If the ``nick.json`` file is unparseable or otherwise gets corrupted, it will be
overwritten with the default contents on next nick app startup.
Getting started
===============
Hold your flow3r with the pink part facing towards you, and the USB port facing
upwards.
.. image:: overview.svg
:width: 700px
Powering your flow3r
^^^^^^^^^^^^^^^^^^^^
The flow3r needs electricity to run - either from a battery or over its USB port.
Once it has power available, you can turn it on by moving the right-hand side
power switch (next to the 'flow3r' label on the front of the badge) towards the
right.
You should then see the badge come to life and display 'Starting...' on the screen.
First boot calibration
^^^^^^^^^^^^^^^^^^^^^^
At the first boot flow3r needs to calibrate its captouch driver. The system will
guide you through this process. While calibration is happening, do not touch the
petals: The calibrating routine only cares about the baseline level, not the
response to touch and therefore expects no interaction.
This calibration can be repeated anytime at `System -> Settings -> Captouch Calibrator`.
Navigating the menu
^^^^^^^^^^^^^^^^^^^
The app shoulder button (left shoulder unless swapped in settings) is used to
navigate the menus of the badge. Pressing it left and right selects an option
in the menu. Pressing it down selects a menu option.
The OS shoulder button (right shoulder unless swapped in settings) can be
pressed down to quickly return 'back', either in a menu or an app.
Set volume
^^^^^^^^^^
flow3r has two built-in speakers. Their loudness can always be adjusted by
using the OS shoulder button (right shoulder unless swapped in settings), left
for lowering the volume and right for making it louder.
You can plug in a pair of headphones to the 3.5mm jack on the bottom-left petal.
The built-in speakers will then turn off and audio will go out through the
headphones. You can adjust their volume in the same way.
Use the context menus
^^^^^^^^^^^^^^^^^^^^^
flow3r has a context menu that is available at all times. Access it by holding
down the OS button for 1s. This menu provides 3 features for now:
- **help:** Applications may provide help texts to tell you how to use
them. This is a relatively recent feature so it's not widely implemented yet,
but we hope to change this soon!
- **mixer:** Some music apps can keep playing in the background after
exiting them. The mixer allows to set volume levels for those different
sources.
- **exit app:** Most apps can be exited by simply going back enough times
by tapping the OS button a bunch. Some applications may not implement this
properly though, in that case, you can always use this backup option to exit.
If you are not in an application this option is called **go home** and returns
you to the starting menu.
There is a secondary context menu which exists only when hovering over an
application in the system menus. You can access it by holding down the app
button for 1s. This menu provides 3 features:
- **fav:** Favorite apps are sorted to the top of the menu so that you can
quickly access them. They are marked with a little "<3" next to the name.
- **boot:** This app is launched automatically at startup. Only one app
can have this property. Autostart can be prevented by holding down any
shoulder button.
- **delete:** Delete the application.
Set the handedness
^^^^^^^^^^^^^^^^^^
Many applications are intended to be used while holding flow3r in one hand
and touching the petals with the other. Ideally, your holding hand should be
able to comfortably reach the app button in this mode. For operating it
with the right index finger for example we personally find it most comfortable
to have the app button on the right side.
You can change this setting depending on your preferences in
`System -> Settings -> Swap Buttons`.
Making applications that work well for either handedness requires some degree
of cooperation from application developers so app store experience may vary.
Use apps
^^^^^^^^
The main menu shows a few different app categories, plus the `System` submenu.
Those are:
- **Badge:** These apps are intended to be used as a name tag for events. They
usually need you to configure your personal display data as described in the
`Configuration` section.
- **Music:** These apps are musical instruments and typically produce sound in
one way or another.
- **Media:** These apps replay media.
- **Games:** Games of all sorts. No stock application is in this category as of
yet so it is hidden until you download one.
- **Apps:** Catchall for apps that don't really fit in any other categories.
If it's unclear how an application is supposed to work, try the **help**
option in the context menu.
To exit an application, press the OS button down (maybe repeatedly), or fall
back to the context menu **exit app** option if this fails.
Music applications may continue playing in the background after they have
been exited; there is typically a means to avoid that which should be listed
in the help menu, but if not you can use the context menu **mixer** to mute
them.
Download new apps
^^^^^^^^^^^^^^^^^
You can find many community created apps in `System -> Get Apps`. You need an
active WiFi connection to access the app store. The WiFi menu will open
automatically if none is found.
Not all apps in the app store may be fully functional, as the flow3r firmware has
evolved a lot since they were first released.