It is a microcomputer that renders data, in this case mainly telemetry data for aeromodellers.
I think such devices will gain increasing importance for RC equipment. Here I describe my development, an extra box (not really a box but a shrunken microcontroller device) for installation in a Multiplex® ROYALpro RC transmitter, to increase its functionality and to try out new ideas. A first version with still limited functionality is ready now and shall be tested during the coming season. It can read all standard telemetry data and one format of custom telemetry data from the MLink® module, can execute a few evaluations and some application code, and can generate output data. It renders data using speech, vario and Shepard tones and, of course, my haptic output.
The state of the art in this field includes audible output (speech and sounds) and displays mounted in RC-transmitter cases or a vibration alarm in a stick.
Earlier I had implemented a haptic output by mounting small servos in the transmitter case, driving levers which can be felt with the ring fingers - I use one of them mainly for the variometer information and do not want to miss it any more. This machinery works, but it is inflexible, so I am reimplementing it and thereby improving it substantially by enhancing its...
- Functionality - more, and more adequate, render techniques than are in use today: I think the amount of data acquired and transmitted with aeromodellers' telemetry will increase as soon as it is seen that an aeromodel's potential is better utilized when the pilot has more information about the flight state of his or her model. More data brings the danger of overloading the model pilot, so adequate rendering will become even more important than it is today.
- Flexibility - allow the user to add or change render methods and to determine what to render: Most glider pilots use the variometer tone and most of them like it. Others, perhaps those who suffer from tinnitus, may become nervous from the vario tone and would prefer a visual rendering of their glider's climb - this is only a very simple example. Diversity in rendering methods could bring telemetry and its benefits closer to individual persons. Flying for fun can also differ from flying in a competition, and the demands for data and their representation will differ from case to case even for the same person. Of course, different models may need different choices of data to be rendered.
- Intelligence potential - allow adding "applications" to process telemetry (and other) data to support the model pilot in ways which are not used or may not even be known today. An example is pattern flying (GPS triangle, model OLC gliding): The challenge is to guide the pilot through the flight path(s) with limited IO resources (or to keep waiting and waiting for usable data glasses) - I would really be interested in the most usable and useful OLC application running on the render engine... A far less spectacular but also very useful idea would be to implement more elaborate mechanisms to decide →when to render data, e.g. to tell the pilot the altitude of his glider immediately after the high start - I think there are better ways than just "from time to time".
- Independence - allow the device to be used with RC equipment of several different brands. I think this is a trivial requirement, but it needs other people to contribute drivers.
The idea is to design a device that fits into the RC transmitter case, reads the telemetry data stream, analyses it, performs some processing and renders the results in several appropriate ways. There might be three types of persons enjoying the render engine: those who just use it, those who write applications on top of the "platform render engine", and of course those who bring it on its way and extend it - perhaps I'm only the first of these.

Basic requirements
Requirements are to be formulated for the model pilot, the experimenter (application programmer) and the others, including myself, who might come and adapt or extend the engine substantially. The following is a bit unordered:
The applications, i.e. the code to evaluate, analyse and process the data and to generate the data to be rendered, may range from "very simple" to "somewhat complex". Their usefulness may range from low (e.g. a flight log) up to indispensable for one model pilot or another, and their computing intensiveness may vary widely. Most applications may not really be time critical, but some are. It should be possible to use a real-time operating system on powerful hardware or just a main loop on small devices. Rendering devices may be very different, spanning from traditional audible signalisation and speech output, through haptic and visible output of different kinds, to graphic displays in the upcoming data glasses.
Besides reading the telemetry data stream, the engine shall be able to acquire data by itself: partially reference data (e.g. air pressure) or data from the user (e.g. switch settings or the "heading" of the transmitter case). Other data are semi-constant and shall be defined by the user as operational parameters, for instance the capacity of the battery in a specific aeromodel.
"Independence" and "..catch the telemetry
data stream" means that there must be a unique driver API so that different drivers
for different RC-transmitter types and brands can be used and consequently that there must be
a unified representation of telemetry data such that evaluation, analysis and rendering can
be done independently of RC brands.
"Unified representation" includes a definition of the semantics of aeromodelling telemetry data, I call it "vocabulary", and usage of appropriate data formats, rules, templates and services or auxiliary programs. I designed some or these things: a simple driver API, a vocabulary and a data structure for telemetry data items, a fixed point number format for those data which can even be used efficiently with today's low end processors (Cortex-M0) and a flexible mechanism (version 0.0.1 :-) for dealing with operational parameters. I shall describe them later.
As an application should not depend on specific render methods some abstraction and classification of render methods is needed.
To summarize all this - the render engine consists of
- some input machinery, including a link to the RC transmitter's telemetry data stream and a few sensors of its own, with drivers for them,
- in many if not most cases some external storage,
- resources for evaluation, analysis etc. and
- links to rendering devices (or rendering devices built in directly, like a sound generator), and
- some "platform" functionality, rules and patterns.
Here it is:
This is one of the usual block diagrams, showing the render engine and the important things around it. Yes, it currently does not contain a network link: For the next few years, possibly very few years, render engines will be alone and offline (and I do not consider a serial Bluetooth connection to a smart phone or tablet a network). To the left there is the input section and the external storage; the centre is dominated by the application. The three shades of grey stand for hardware abstraction, platform stuff and, last but not least, the application - the most interesting code, which will change very often and where most experiments will take place. To the right you can see some icons representing current or imaginable render devices, and others which still must be invented. All this is straightforward. I leave the question "RTOS or main loop?" open here; currently I use a main loop.
The input driver pushes telemetry data: It controls the main loop insofar as any incoming telemetry data item causes one run of the loop. Under an RTOS it would have to run in its own thread and push the telemetry data items into a queue. The drivers of the switches, controls and the engine's own sensors (e.g. a compass) just read and store their values regularly, and getter functions are available. The (future) file system is a standard one.
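As a sketch, the push-driven main loop could look like this; the names and the minimal item structure are invented for illustration, not the engine's real code:

```cpp
#include <cstdint>
#include <deque>

// Sketch only: a minimal telemetry item; the real item carries more fields.
struct TelemetryItem {
    uint32_t type;   // e.g. TDNAV_VEHICLE | TD_SPEED_CLIMB
    int32_t  value;  // q15.16 fixed point
};

// Queue filled by the input driver (on the MPU: from the receive interrupt).
static std::deque<TelemetryItem> inputQueue;

// Driver callback: push one received item.
void onTelemetryItem(const TelemetryItem& item) { inputQueue.push_back(item); }

// One pass of the main loop: each pending item causes one evaluation run.
// Returns the number of items processed.
int runMainLoopOnce() {
    int processed = 0;
    while (!inputQueue.empty()) {
        TelemetryItem item = inputQueue.front();
        inputQueue.pop_front();
        (void)item;  // ... evaluate, analyse, hand over to the plumbing ...
        ++processed;
    }
    return processed;
}
```

Under an RTOS, onTelemetryItem would run in the driver's thread and the queue would need locking; with a plain main loop the push happens at interrupt level and the loop drains the queue.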
The output drivers have physical and logical layers. Most physical devices can be used for different logical devices - a complex example is (low level) audio output, which can be controlled by different logical level drivers: A gauge can be implemented as a variometer tone (up to 2 octaves represent an interval of values), a clock display (see below) can be implemented as a Shepard tone, and a display is, of course, a speech output. Another example: A standard RC servo can be used to implement a gauge in 2 different ways: as a haptic output or as a gauge display with a quite large, easy-to-read dial. Which value is to be output on which device is controlled by the application code, but platform code is available to do this in a unified way, controlled by operational parameters; see "plumbing" below.
Currently the main classes for render devices could be
- "gauge": It is characterized by a limited range of values, an interval defined by its lower and upper end and, in many cases, a midpoint - renderable as a variometer tone, haptically or graphically, useful for variometers, inclinometers, voltage or current monitoring etc.,
- "clock": One clock display represents an interval of values, but more those intervals can be stacked - just like the hands of a (real) clock or the 100-feet-pointer of a traditional altimeter. It would be a funny API which would allow to stack (or cascade) several clocks, but currently this must be controlled by the application code. The different clocks of a complex display can be quite similar (as usual with clock displays) or they can be of very different nature - an extreme example: The "inner" clock of an altimeter is implemented as a shepard tone (with a resolution of 0.125 m, covering 8m altitude, see below) and the outer hand is a spoken altitude in meter.
- "Text", e.g. altitudes, including physical units, to be rendered as text in a display or to be spoken or... <to be invented>,
- "Indicators" (LEDs, could also be implemented additionally on the audio as an alarm sound, spoken words and so on).
The plumbing: This is simply a table containing the information which data item is to be rendered by which real (logical) device. I will explain it below in the parameters section.

mbed and the OM11043 board
Of course at first I thought about an embedded Linux system, it sounds "modern". There are some tiny designs (e.g. the →Arietta-G25 or the →40-Pin-DIL-GNUBLIN) and other "gumstick sized", quite powerful systems. If I don't choose embedded Linux I will be asked why. There are some convincing pros:
- Linux is well structured; writing drivers for new rendering devices is awful work, but there are quite well-defined rules for it.
- Writing applications for Linux is easy as long as no complex UIs have to be programmed. Installing applications is really easy for a system like the one sketched above; even compiling and deploying is mainly a matter of putting them on an SD card and MAKEing locally, not "cross".
- Applications would also be easy to write because they are separate processes, none disturbing the others.
- A very inviting advantage of Linux is that it may be a bit simpler than with other environments to set up a corresponding edition for the PC, such that applications can be developed for quite a while on a PC, with all its advantages, before going too deep into details. Debugging doesn't need, as far as I know, extra hardware besides a serial line for the adb.
- Linux is, beginning at a reasonable hardware performance, upward scaleable.
- Linux is state of the art, it is "in" and it "has future".
- As indicated a few lines above: Linux needs some minimum performance. The hardware which I want to be usable isn't large and fast enough for Linux. Linux is scaleable, but not far enough downward.
- Today's Linux systems need some energy. 500mA is a minimum; otherwise the system is really too slow or too small. The system I chose for my first steps draws (at full speed) 100mA without haptics, and even this is already too much.
- Setting up an embedded Linux system for a specific platform is fine for an experienced Linux nerd, but not for me - the learning curve would be steep and, as is often overlooked, would have to climb very high... I must be honest.
- There are some technical issues which would have to be solved: It takes a long time for a Linux system to start up and, more annoyingly, Linux must not simply be shut down by switching off the power - but this is how RC equipment is expected to work. Nobody wants to initiate a shutdown and wait until it has finished before he (...or she) may switch off. Technical measures against this are cumbersome, to say the least.
32-Bit-Arduino? mbed? Bare metal?
Simple 8-bit programming with standard boards is the domain of Arduino, and 32-bit Arduinos are also available; all 32-bit Arduinos are based on an ARM Cortex-Mx MPU, but I don't know if they are compatible with each other (I'm afraid they aren't). The →Teensy is, in my eyes, one of the most attractive designs. It is very easy to write <very simple> Arduino applications; it is much more cumbersome to write <not very simple> ones. The main problem with Arduino is, and this is not new, the absence of a debugger. Not only is no debugger foreseen, it is essentially impossible to attach one without changing the hardware - and afterwards it is no longer an Arduino. See what has to be done with the Teensy to use a debugger in →Prof. Erich Styger's blog. I think developing hardware abstraction or platform software for such an Arduino is not possible.
mbed is more than Arduino but shares some things: It is also possible to develop <very simple> application software for an mbed system when "serial-out debugging" will do for you: Let the online compiler do the work and store the resulting binary file in the hardware, which appears as a USB storage device. Then let it run and watch the printfs on a terminal application. Good luck! Writing more complex or complicated software this way is likewise not possible, but... and this is the point: if you want more, any mbed-compliant piece of hardware may be attached to a debugger, and you may use a locally installed development system from the vendor of your hardware.
The offering of mbed-compatible hardware is increasing, but nearly all boards are a combination of a debugger and a discovery/experimental/prototyping board; the debugger may be separable from the prototype board, but these boards never fulfill even modest expectations of any form factor - simply unusable as a render engine, though of course they were never made for that. You may get such boards at very low prices and they are a good starting point.
You may also design your own hardware which will, together with the mbed-debugger/downloader, be mbed compatible.
Directly usable mbed hardware is available in many sizes and flavours; see the →Platform list on the mbed pages. I decided to use the →OM11043 board, which has a broadened 40-pin DIL form factor and will fit into my transmitter case. Its heart is an LPC1768, a 72-MHz-clocked Cortex-M3 with 512kB flash, 32kB SRAM, a 10-bit DAC etc.; the board has a bit of external storage and a few blinking lights (oh, I like that :-). It shall be extended with an audio amplifier and a battery for the clock, and I'll use its I²C bus later for some "ground station" reference sensors (atmospheric pressure, compass) and for more advanced render devices.
The plan was: Use the mbed platform as far as possible and, when real problems arise, continue with the LPCXpresso tools for debugging. After installation this didn't work, I hate that, and I had neither the time nor the skills to get it running... But the other approach worked surprisingly well: Develop as much as ever possible on the PC using Eclipse Juno and the MinGW (32-bit) compiler and port it onto the ARM. This also forces me to use proper test frames for my modules. In fact even the audio output was developed nearly fully on the PC; only the generated samples went into a .wav file instead of the DAC. Of course the timing (interrupt servicing for the samples) cannot be simulated on a PC, but the corresponding problems were not really hard.
This encourages me to set up a simple platform on the PC which will allow developing application code for the render engine under Windows or (PC) Linux in the future.
The vocabulary: Independence of brands means, among other things, setting up a common language so that we know what we are talking about. Telemetry data types must be defined, and a telemetry data item must be tagged with its type definition. So I defined a list of the types of data items in aeromodel telemetry. It is written down as an XML file; currently I can generate C #defines from it, more is planned. I call this list the "vocabulary" and it maps technical data items onto numerical codes. The mapping is organized in 3 levels: groups, items and subitems or indexes.
- Groups: These are categories of data types which belong together in some sense and may be subdivided. An example of a group is "TDEL_SUPPLY", which means items describing the state of the electric supply (not the electric drive) in an aeromodel. There may be several groups which contain the same or very similar items, see below.
- An item is the information about a physical value, for example a voltage.
- Index or subitem: Many items may be measured at more than one place in an aeromodel, e.g. the voltages across different cells of a battery or the revolutions per minute at up to 4 positions. Therefore they must be distinguished by an index. Other items are unique for a model, for instance its air speed, but they can or must be detailed further - the air speed may be an IAS, EAS, CAS, TAS (true air speed) or even a ground speed measured using GPS equipment. This too is specified using the index.
Groups like TDEL_SUPPLY or TDEL_DRIVE can contain items such as TD_VOLTAGE, TD_CURRENT, TD_TEMPERATURE, TD_CONSUMED and others. For each item it is clearly defined what it is and in which physical units it is measured (let's use SI units instead of feet, PSI etc. as far as possible).
The items may be indexed using the lowest 4 bits; some of the possible index values are used for special instances, for example TD_MAXCURRENT, the maximum current measured since reset, as some sensors deliver it.
The groups-, item- and index- or subitem-codes are combined simply by or-ing them, e.g. TDEL_DRIVE | TD_TEMPERATURE.
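In the generated C headers such a combination might look like the following sketch; the concrete numeric values and field widths are assumptions for illustration, only the "index in the lowest 4 bits" rule is given above:

```cpp
#include <cstdint>

// Sketch of the generated C defines. The numeric values here are invented;
// only the index living in the lowest 4 bits follows the text above.
#define TDEL_SUPPLY     0x0100u   // group: electric supply
#define TDEL_DRIVE      0x0200u   // group: electric drive
#define TD_VOLTAGE      0x0010u   // item: a voltage
#define TD_TEMPERATURE  0x0030u   // item: a temperature
#define TD_INDEX_MASK   0x000Fu   // index in the lowest 4 bits

// Compose a full type code simply by or-ing group, item and index.
inline uint32_t tdType(uint32_t group, uint32_t item, uint32_t index) {
    return group | item | (index & TD_INDEX_MASK);
}
```

Decomposition goes the same way with the masks, so even a Cortex-M0 handles the vocabulary with single and/or instructions.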
TDNAV_VEHICLE groups navigational items concerning the aeromodel, TDNAV_REFERENCE the same items for a reference station, usually the RC-transmitter. Navigation items include:
TD_ALTUTUDE_MSL or TD_ALTUTUDE_REL: "..._REL" means "relative" and is defined as sharply as possible. Normally it refers to the point where the last reset of the sensor was performed, and it must be guaranteed that the value 0 for TDNAV_VEHICLE | TD_ALTUTUDE_REL and for TDNAV_REFERENCE | TD_ALTUTUDE_REL refers to the same point. If TDNAV_REFERENCE | TD_ALTUTUDE_REL is not available, it is assumed to be 0, so TDNAV_VEHICLE | TD_ALTUTUDE_REL minus TDNAV_REFERENCE | TD_ALTUTUDE_REL can always be taken as the "altitude above ground" (AGL), as precisely as possible. TDNAV_REFERENCE | TD_ALTUTUDE_REL is currently not acquired, but I plan to include a pressure sensor in a later version of my render engine implementation. There is also a type TDENV_GROUND | TD_PRESSURE_PSTAT, the static pressure at the RC transmitter; it will come out of the same sensor as TDNAV_REFERENCE | TD_ALTUTUDE_REL and may be used for further corrections of other telemetry data (e.g. IAS→TAS), for watching the weather during model flying, for measuring the temperature gradient and such stuff.
The fixed point data type q1516:
C++ allows classes to look very much like primitive data types, and I could not resist writing a class for a very usable data type. It was good finger-warming after such a long, long pause in C++ coding. I'm sure it is the (n+2)nd invention of this usable number format. OK, it's not only fun: Render engines may be implemented on really low end MPUs such as Cortex-M0 kernels, and a common, unified way to treat numerics is important. Although it is additional work to convert telemetry data from brand specific formats into the unified format, it is a major benefit for application programmers.
q1516 is a fixed point number with a sign bit, 15 bits before and 16 bits after the binary point; this allows "at least" 4 decimal digits before and "4½" decimal digits after the decimal point. It fits into one 32-bit word, is quite easy to use and does not bring even the smallest Cortex-M0 into trouble. Such numbers are sufficient to represent nearly every value which might occur in the field of aeromodel telemetry - an exception might be an absolute WGS84 coordinate, which should be represented by 2 float numbers; in that case usually more complex number crunching, including trigonometry, is needed anyway, and this should be done with M4 kernels. Besides overloading the usual operators and the conversion methods from/to signed integer and float, there is a method toString() to make such a number readable.
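A minimal sketch of the idea (the real q1516 class overloads more operators and conversions; the details here are illustrative):

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Sketch of a q15.16 fixed point class: sign bit, 15 integer bits,
// 16 fraction bits, all in one int32_t.
class q1516 {
public:
    q1516() : raw_(0) {}
    explicit q1516(float f) : raw_(static_cast<int32_t>(f * 65536.0f)) {}
    static q1516 fromRaw(int32_t r) { q1516 q; q.raw_ = r; return q; }

    q1516 operator+(q1516 o) const { return fromRaw(raw_ + o.raw_); }
    q1516 operator-(q1516 o) const { return fromRaw(raw_ - o.raw_); }
    // Multiplication needs a 64-bit intermediate to keep the precision:
    q1516 operator*(q1516 o) const {
        return fromRaw(static_cast<int32_t>(
            (static_cast<int64_t>(raw_) * o.raw_) >> 16));
    }
    float toFloat() const { return raw_ / 65536.0f; }
    std::string toString() const {
        char buf[16];
        std::snprintf(buf, sizeof buf, "%.4f", toFloat());
        return buf;
    }
private:
    int32_t raw_;
};
```

Addition and subtraction are single integer instructions; only multiplication and division need the 64-bit intermediate, which even a Cortex-M0 can do in software without trouble.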
The next piece is a first version of storing, looking up and managing operational parameters. Such a parameter consists of a key and its value; it is not to be altered by the render engine's software. Sets of parameters are combined into tables, and tables are organized in a parametry tree.
Managing such a tree is a bit tedious: keeping the information together, storing the tree at specific addresses (e.g. a fixed address in the flash memory) so that the tables can be loaded separately, and maintaining them outside the render engine hardware. For all this there are some class definitions, routines and an auxiliary program running on the PC.
A parametry tree is stored on the PC as an .xml file; its structure can be enforced using a corresponding .xsd schema definition, so that, for example, XML Notepad can be used to maintain parameter sets. A Windows program translates the XML-structured parameters and the vocabulary into several formats. The most important format is an unstructured "byte soup" which can be loaded into the mbed render engine. Inside the render engine the byte soup is interpreted as a tree of parameter tables. All this is not really beautiful C++ code, but it works, and the parameters are always present and can even be used during the construction of static objects. Currently tables may contain pairs of 32-bit key/32-bit value, 32-bit key/q1516 value or 16-bit key/16-bit value.
Here is an excerpt of a parametry file:
Application code can be controlled by operational parameters, for instance alarms to be emitted may be specified: My MiniMach will be close to its visibility limit when flown higher than 250 meters.
The "plumbing" is an essential part of the parameters: These tables map telemetry data types to the available logical and physical output devices. An example: After reception, evaluation, correction etc. the application code has computed a variable of the type TDNAV_VEHICLE | TD_SPEED_CLIMB_COMP and decides to render it. The plumbing mechanism allows rendering such a data type on a logical gauge object (among others), and the parametry contains the information to render it on the device "left_lever" (the haptic output on the left side). The physical devices available within a concrete render engine implementation are made available as names for the device specification. The mechanism is, of course, to be extended in the near future to allow controlling the behaviour of the render classes in more detail, e.g. controlling when to speak a text or specifying non-linear characteristics of gauges.
My first implementation of the render engine, for example, has the physical rendering devices left_lever, shepardTone, speech and varioTone, which belong to the logical classes gauge, clock, text and gauge respectively. It is an absolutely minimal configuration, but it will grow - work is in progress.
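A sketch of such a plumbing table, with an invented lookup API, mapping a telemetry data type code to the name of the device that renders it:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical sketch of the plumbing table. In the real engine this
// comes out of the parametry byte soup; here it is an in-memory map.
struct Plumbing {
    std::map<uint32_t, std::string> table;  // type code -> device name

    // Returns the device name, or an empty string when nothing is plumbed
    // for this type (then the application simply does not render it).
    std::string deviceFor(uint32_t tdType) const {
        auto it = table.find(tdType);
        return it == table.end() ? std::string() : it->second;
    }
};
```

The application code asks the plumbing for the device, hands the value to the corresponding logical object, and stays independent of the concrete hardware configuration.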
Audio output: As already mentioned, I could develop the audio output (sound generator and speech output) nearly fully on the PC. There are some sound generator classes for a vario tone, a Shepard tone, simple alarm beepers and, last but not least, silence. Tones are generated by scanning a sine table in steps according to the frequency needed. The vario tone generator delivers tones spanning 2 octaves, the upper one for climbing and the lower one for descending. There are tables controlling the generation of 64 different tones (such a fine division leads to adjacent tones which are hardly distinguishable). Assuming a resolution of 0.1 m/s for the climb signal, this allows signalling climb and descent up to ±6.3 m/s, which is regarded as sufficient. Alarm beeps are chosen out of one octave (the upper one of the vario tone octaves), and the beat of 2 narrowly separated frequencies generates some shrillness.
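The table-scanning tone generation can be sketched as a phase accumulator; the table size and sample rate below are assumptions for illustration:

```cpp
#include <cmath>

// Sketch of sine-table tone generation: the phase advances by a step
// proportional to the desired frequency, wrapping at the table end.
static const int    TABLE_SIZE  = 256;
static const float  SAMPLE_RATE = 11025.0f;
static const double PI          = 3.14159265358979323846;

struct ToneGen {
    float table[TABLE_SIZE];
    float phase = 0.0f;   // current position in the table
    float step  = 0.0f;   // table entries advanced per output sample

    ToneGen() {
        for (int i = 0; i < TABLE_SIZE; ++i)
            table[i] = static_cast<float>(std::sin(2.0 * PI * i / TABLE_SIZE));
    }
    void setFrequency(float hz) { step = hz * TABLE_SIZE / SAMPLE_RATE; }
    float nextSample() {
        float s = table[static_cast<int>(phase)];
        phase += step;
        if (phase >= TABLE_SIZE) phase -= TABLE_SIZE;
        return s;
    }
};
```

In the firmware the interrupt service routine would call nextSample() once per DAC sample; changing the vario value only changes step.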
Shepard tones are a bit more complicated. They are used to build an audible clock... A clock is an interval of values which can be continued at both ends - like the hours of a (half) day. Altitudes are subdivided into bands, and a clock can represent any of these bands. This allows rendering altitudes in high resolution, to replace the vario tones during tiny or irregular climbs and descents. The price is obvious: such a "clock" is ambiguous; it is useless for quick climbing or descending. An acoustic "clock" is defined by one octave: when the ascending tone reaches the upper end of the octave, it re-enters the octave at its lower end as a similar-sounding tone (half the frequency of the upper end). To remove the remaining acoustic discontinuity, 2 tones of 2 adjacent octaves are mixed in such a way that every tone sounds exactly like its counterpart one octave lower. →Roger Shepard mixed many sine components, not only 2, into complex tones forming virtually infinitely ascending or descending tone scales. Such a tone is ideal for tracing low or irregular climb or descent in weak thermals, better than classic vario tones. The variometer/altimeter device I use in the aeromodel can deliver the altitude with a resolution of 0.1 m or 0.125 m, and 64 different Shepard tones of one octave can cover 6.4 m or 8 m, which I consider an ideal width of an altitude band for the mentioned purpose.
Here is a .wav file which contains a tone sweep over 2½ octaves up and down: ⇒Shep32Tones.wav, it is about 200kB long.
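The two-component mixing can be sketched as a crossfade over the octave; the linear fade law below is an assumption, the real implementation may use another curve:

```cpp
// Sketch of the two-component Shepard mix: within the octave the upper
// component fades out while the lower-octave component fades in, so at
// the wrap the pair of tones is acoustically identical to the restart.
// 'pos' is the position within the octave, 0.0 .. 1.0.
void shepardAmplitudes(float pos, float& lowerAmp, float& upperAmp) {
    upperAmp = 1.0f - pos;  // the ascending tone fades out towards the top
    lowerAmp = pos;         // its lower octave fades in and takes over
}
```

At pos = 0 only the upper component sounds; at pos = 1 it has vanished and the lower component (at exactly the frequency where the next pass starts) carries the full amplitude, which makes the wrap inaudible.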
Speech output is governed by the following requirements/restrictions: The LPC1768 MPU has a DAC with a resolution of 10 bits and a flash ROM of ½ MB (100 kB of which are considered enough for the program code); the rest should suffice for the needed speech clips. The OM11043 board has a very limited 2 MB external store which I do not want to use for speech clips. About 25 different words, including the numbers 0..12, shall be used. I intentionally do not create clips like "twenty-four" (German "vierundzwanzig", even longer) for number output; I prefer the easier to understand "two four". At least 11025 samples per second must be used to achieve a minimum of intelligibility. I chose a compression method very similar to A-law (or μ-law) compression, adapted for 16→8→10 bits. Compression is done on the PC; decompression is a simple table lookup. Currently the compressed clips are part of the source code (:-), I will change this later. Speech quality is less than ideal, I still have to learn a lot. The low pass filter / audio amplifier is of the low quality, high noise type based on an LM386. I'm not proud of it.
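A sketch of such a companding pair, here in μ-law style with the standard μ = 255; my clips actually use a similar A-law-like scheme, and on the MPU the expansion is a single 256-entry table lookup delivering 10-bit DAC values:

```cpp
#include <cmath>
#include <cstdint>

// Sketch of logarithmic 16->8 bit companding (mu-law style, mu = 255).
const double MU = 255.0;

uint8_t compress16to8(int16_t s) {
    double x = s / 32768.0;   // normalise to -1..1
    double y = std::copysign(std::log1p(MU * std::fabs(x)) / std::log1p(MU), x);
    return static_cast<uint8_t>(std::lround((y + 1.0) * 127.5));
}

// Expansion: in the firmware this is precomputed once into a lookup table.
int16_t expand8to16(uint8_t c) {
    double y = c / 127.5 - 1.0;
    double x = std::copysign((std::pow(1.0 + MU, std::fabs(y)) - 1.0) / MU, y);
    return static_cast<int16_t>(std::lround(x * 32767.0));
}
```

The logarithmic curve spends the 8 bits where the ear needs them, on small amplitudes, which is why 8-bit clips remain intelligible at 11025 samples per second.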
The telemetry data item class and the input driver for the MLink system:
A first draft of a universal data structure for a complete telemetry data item has been designed. It contains the data item's type (e.g. TDNAV_VEHICLE | TD_SPEED_CLIMB), its value (e.g. the q1516 value 1.8; the unit m/s is defined by the type of the item), some flags (e.g. whether the MLink sensor attached an alarm bit to the value), and a time stamp. The MLink input driver consists of 2 layers: The low level driver reads the data stream from the RC equipment, and the high level driver converts the data into the telemetry data items described above. It is ready and works reliably.
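Such a telemetry data item might look like the following sketch; the field names, the flag value and the helper are illustrative assumptions, not the real class:

```cpp
#include <cstdint>

// Sketch of the universal telemetry data item described above.
struct TelemetryDataItem {
    uint32_t type;        // e.g. TDNAV_VEHICLE | TD_SPEED_CLIMB
    int32_t  value;       // q15.16 raw value; the unit is implied by the type
    uint16_t flags;       // e.g. an alarm bit the MLink sensor attached
    uint32_t timestampMs; // time of reception
};

const uint16_t FLAG_SENSOR_ALARM = 0x0001;  // assumed flag value

// Convenience: build an item from a float value, e.g. a climb of 1.8 m/s.
TelemetryDataItem makeItem(uint32_t type, float value, uint32_t nowMs) {
    TelemetryDataItem it;
    it.type        = type;
    it.value       = static_cast<int32_t>(value * 65536.0f);  // to q15.16
    it.flags       = 0;
    it.timestampMs = nowMs;
    return it;
}
```

The high level MLink driver would fill exactly such a structure from each decoded sensor frame, so everything downstream is brand independent.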
Surprise: The mbed platform has not implemented C++ exception handling.
No, you can't switch it off if you don't need it; you can't even switch it on if you need it; you simply can't have it. It may sound naive, but I didn't search for such traps when I looked for the right platform, and by then it was too late.
The mbed guys are surely excellent software engineers, and when they say that C++ exception handling is too expensive I have to accept this. But I don't like it, and I especially do not like such surprises - this information should be clearly visible in easy-to-find feature lists, not in a compiler diagnostic while porting the q1516 class. I have to use a very limited setjmp/longjmp substitute. The fact that no stack unwinding and no destructor calling is done is not really wonderful, but the solution is better than nothing.
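The setjmp/longjmp substitute follows the classic pattern; this sketch uses invented names and carries the caveat in a comment, because no destructors run between the "throw" and the "catch":

```cpp
#include <csetjmp>

// Sketch of the setjmp/longjmp fallback used instead of C++ exceptions.
// Caveat: no stack unwinding, no destructors - nothing owning resources
// may live between the setjmp and a possible longjmp.
static std::jmp_buf errorContext;

void mightFail(int value) {
    if (value < 0)
        std::longjmp(errorContext, 1);  // "throw": jump back to the setjmp site
}

int runGuarded(int value) {
    if (setjmp(errorContext) != 0)
        return -1;                      // "catch": a longjmp landed here
    mightFail(value);
    return 0;                           // normal completion
}
```

Every deeper routine can then bail out with a single longjmp instead of propagating error codes through the whole call chain.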
RTTI (run time type information) is also not implemented on mbed, but this is a minor problem and can, as far as I need it, easily be replaced.