The mobile radio tomography framework provides tools for performing mobile radio tomographic imaging using RF sensors mounted on unmanned vehicles such as rover cars or drones. This framework is the result of research projects and master's theses by Tim van der Meij (@timvandermeij) and Leon Helwerda (@lhelwerd). The research is performed in collaboration with Leiden University and CWI Amsterdam, both located in the Netherlands.
In order to use the framework, you must have the following software installed on your system. The framework has been developed for Linux, but can be made to work on Windows or any other operating system since all prerequisites are also available for those systems, perhaps with slightly different installation procedures.
- Git
- Binaries and development headers for the LIRC package for remote control support. Check whether and how your package manager provides these packages, otherwise you can retrieve them from the LIRC website itself.
- Python 2.7. At least version 2.7.7 is required. Note that Python 3 cannot be used at this moment.
- SIP and PyQt4. These may be available from package managers, but are not available through `pip`.
- `pip` for Python 2.7. `pip` is often not available on extremely old and bare systems. If it is not delivered by a package manager, one can also install it with get-pip.py. Ensure that you have the correct version of `pip` with `pip2 --version`. See the Python packages section below for installing the required packages using `pip`.
- ArduPilot for vehicle simulation. See the ArduPilot section below for more details.
For all commands in this file, replace `python2` with `python`, and `pip2` with `pip` if your operating system does not need to distinguish between different versions of Python, e.g., Python 2 and Python 3.

Use `pip2 install --user <package>` to install or upgrade each of the following packages, or `pip2 install -r requirements.txt` to install all of them in one go. The packages are sorted by purpose as follows:
- General packages:
  - matplotlib
  - NumPy
  - scipy
  - enum34
- Control panel:
  - PyQtGraph
  - markdown
  - py-gfm
- Physical sensor/communication interfaces:
  - pyserial
  - RPi.GPIO
  - wiringpi
  - xbee
  - pylirc2
  - pyudev
- Vehicle trajectory mission interfaces:
  - lxml
  - pexpect
  - pymavlink
  - mavproxy
  - dronekit
- Environment simulation:
  - PyOpenGL
  - simpleparse
  - PyVRML97 (you may need to use `pip2 install --user "PyVRML97==2.3.0b1"`)
  - PyDispatcher
  - pyglet
- Testing:
  - mock
  - coverage
  - pylint
Download the latest code using:

```
$ git clone https://github.com/diydrones/ardupilot.git
```

Then, add the following line to your `~/.bashrc`:

```
export PATH=$PATH:$HOME/ardupilot/Tools/autotest
```
In order to use the map display of ArduPilot, make sure that OpenCV and wxWidgets as well as their respective Python bindings are installed and available. If they are not, the following directions might help:

- OpenCV: This is sometimes provided by the package manager. It can also be installed from the official download using the appropriate documentation. Note that for Linux, you must change the install prefix for `cmake` if you do not have superuser rights. You can speed up the installation by passing `-j4` to the `make` command.
- wxWidgets: Again, if this is not provided by the package manager, see an explanation on how to install from source. This requires wxGTK as well as the wxWidgets library itself; these are combined within a download. You can install without superuser rights using `./configure --with-gtk --prefix=$HOME/.local`.
The first step is to clone the repository to obtain a local copy of the code. Open a terminal and run the following commands.
```
$ git clone https://github.com/timvandermeij/mobile-radio-tomography.git
$ cd mobile-radio-tomography
```
Now that we have a copy of the software, we can run the tools. Use `sudo` in front of commands if your user is not part of the `dialout` or `uucp` group.
The XBee configurator is used to quickly prepare all XBee sensors in the network. Launch the configurator with `python2 xbee_configurator.py` to get started. You might need to adjust the settings for the xbee_configurator component in `settings.json`, for example to set the right port if the default port is not correct (or use the command line options). After starting the tool, the instructions for configuring each sensor are displayed on the screen. The tool takes care of setting all required parameters.
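As an illustration, a port override could look like the following fragment of `settings.json`. The component and key names here are assumptions for the sake of the example; consult the shipped settings file for the actual structure.

```json
{
    "xbee_configurator": {
        "port": "/dev/ttyUSB0"
    }
}
```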
The trajectory mission sets up an unmanned aerial vehicle (UAV) and directs it to move and rotate within its environment. The script supports various mission types and simulation modes. You can run it using the ArduPilot simulator with the following command:

```
$ sim_vehicle.sh -v ArduCopter --map
```

One can also use different vehicle types, such as APMrover2 for a ground rover. Then start the mission script using the following command in another terminal:

```
$ python2 mission_basic.py --vehicle Dronekit_Vehicle
```
This starts the mission with default settings from `settings.json`. The ArduPilot simulator provides an overhead map showing the copter's position. The mission monitor keeps a map in memory that shows objects in the environment during simulation as well as points detected by a distance sensor. It also provides a 3D viewer of the simulated objects.

You can also start the mission monitor without ArduPilot simulation using `python2 mission_basic.py`. In this case, the vehicle is simulated by the framework itself, and no overhead map is provided other than the memory map. The command allows changing settings from their defaults using arguments. You can provide a VRML scene file to retrieve simulated objects from using the `--scenefile` option, change the geometry from a spherical coordinate system (`Geometry_Spherical`) to a flat meter-based coordinate system using `--geometry-class Geometry`, or set sensor positioning angles, for example `--sensors 0 90 -90`. Many other options are available for simulating various missions and sensor setups, and the command `python2 mission_basic.py --help` provides a list of them. The most important setting might be the mission class to use for calculating which trajectory to take. You can choose one of the classes in `trajectory/Mission.py` using `--mission-class <Mission_Name>`.
This tool allows you to work with all supported RF sensor classes. It is possible to start simulated RF sensors as well as physical RF sensors such as XBee devices or Texas Instruments devices. Start the tool with `python2 rf_sensor.py [class_name] [arguments]`. For example, to create a simulated sensor network, open three terminals and run the following commands:

- In terminal 1: `python2 rf_sensor.py RF_Sensor_Simulator --rf-sensor-id 0`
- In terminal 2: `python2 rf_sensor.py RF_Sensor_Simulator --rf-sensor-id 1`
- In terminal 3: `python2 rf_sensor.py RF_Sensor_Simulator --rf-sensor-id 2`
You should see packets being output in each terminal window. The simulation mode is especially useful for debugging and scheduling research, while the physical mode is primarily used for performing signal strength measurements.
We assume that you have set up a Raspberry Pi with Arch Linux ARM and that you have connected the HC-SR04 sensor. This tool must run on the Raspberry Pi. Start the tool with `python2 distance_sensor_physical.py` to receive continuous measurements from the distance sensor. Change the pin numbers for the trigger and echo pins in `settings.json` if you have used different pin numbers when connecting the HC-SR04 sensor to the Raspberry Pi.
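As background on what the tool measures: the HC-SR04 encodes distance in the duration of its echo pulse, and the conversion to centimeters is a one-liner. A minimal sketch, using a hypothetical helper name rather than the framework's own code:

```python
def pulse_to_distance(duration_s, speed_of_sound_cm_s=34300):
    """Convert an HC-SR04 echo pulse duration (in seconds) to a
    distance in centimeters. The pulse spans the round trip to the
    object and back, so the one-way distance is half of it."""
    return duration_s * speed_of_sound_cm_s / 2.0

# A 1 ms echo pulse corresponds to about 17 cm.
print(pulse_to_distance(0.001))
```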
We assume that you have set up a Raspberry Pi with Arch Linux ARM and that you have connected the TSOP38238 sensor. Make sure that LIRC is set up correctly on the device (refer to the `docs` folder for more information on this). This tool must be run on the Raspberry Pi. Start the tool with `python2 infrared_sensor.py` and use a Sony RM-SRB5 remote. Press the play and stop buttons on the remote and verify that the callback functions are triggered.
You can change the remote that you wish to use. To do so, create or download the `lircd.conf` file and place it in the `control/remotes` folder. Then create a `lircrc` file using the same remote name there to bind the buttons to the events. Finally, change the remote name in the settings file.
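A minimal `lircrc` entry binds one button to an event as in the following sketch; the `prog`, `button` and `config` values are placeholders, so use the program name expected by the framework and the button names from your `lircd.conf`:

```
begin
    prog = mobile-radio-tomography
    button = KEY_PLAY
    config = play
end
```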
You can use the planning problem to generate random sensor positions and optimize them according to certain objectives, such as intersections at each grid pixel in the sensor network, sensor distances and vehicle move distances. You can start the planner in a terminal with `python2 plan_reconstruct.py`, or use the planning view; see its section under the control panel below for more details. The terminal-based planner supports exporting the resulting positions in JSON format.
The control panel is a graphical user interface that can be run on the ground station to provide status information and interfaces to tools. Run `make`, `make control_panel` or `python2 control_panel.py` in a terminal to open the control panel.
The control panel consists of various views that provide different details and tools, but work in concert with each other. We list the various views below.
When the control panel starts, it shows a splash screen that is responsible for setting up components related to the RF sensors.
The loading view checks whether a physical RF sensor configured as a ground station sensor is connected through USB; if not, it waits for one to be inserted. If you do not have a physical RF sensor, use the button to switch to the simulated version.
The devices view displays status information about the RF sensors in the network. It displays their numerical identifier, their category type, their address identifier and their joined status. The number of sensors is determined by a setting; adjust this setting in the settings view if necessary. If not all sensors are detected, ensure that the vehicles are completely started and use the Refresh button to discover them.
The planning view is an interface to the planning problem algorithm runner. It makes it possible to generate random sensor positions and optimize them. The positions around the sensor network may be at continuous or grid-based discrete locations. The multiobjective optimization algorithm attempts to find feasible positioning solutions for which no other known solution is better in all objectives. You can tune the algorithm and problem parameters using the settings toolboxes.
It is possible to follow the progress of the Pareto front, statistics and individual solutions during the run, so that you can see whether the run is going to be useful. Afterward, you can select a solution, whose sensor positions are sorted and assigned to the vehicles in such a way as to decrease the total time needed for the mission.
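The criterion "no other known solution is better in all objectives" is Pareto dominance. A minimal sketch of the dominance test for minimization objectives, illustrative rather than the framework's own implementation:

```python
def dominates(a, b):
    """Return True if solution a Pareto-dominates solution b, i.e.,
    a is no worse than b in every (minimization) objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
        any(x < y for x, y in zip(a, b))

# (1, 2) dominates (2, 3); neither of (1, 3) and (3, 1) dominates the other.
print(dominates((1, 2), (2, 3)))  # True
print(dominates((1, 3), (3, 1)))  # False
```

A non-dominated solution is one that no other known solution dominates; the Pareto front shown during the run is the set of such solutions.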
The reconstruction view converts a dataset, dump or stream of signal strength measurements to input for the reconstructor, such as a weight matrix. The result of the reconstruction is a set of two-dimensional images. We provide multiple reconstruction algorithms:
- Singular value decomposition
- Truncated singular value decomposition
- Total variation minimization
- Maximum entropy minimization
The settings panels allow you to change reconstruction settings and start the reconstruction and visualization process. The raw data is shown in a graph and a table. The grid view indicates how well the measurements cover the grid cells. Streams can be recorded as a JSON dump for calibration or deferred analysis.
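As background, such reconstructions solve a linear system: the weight matrix maps per-pixel attenuations to the expected link measurements, and the reconstructor inverts this relation. A truncated SVD sketch with NumPy, illustrative only since the framework's reconstructor classes have their own interfaces:

```python
import numpy as np

def truncated_svd_reconstruct(A, b, rank):
    """Least-squares solve A x = b, keeping only the largest `rank`
    singular values of A to suppress measurement noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]
    return np.dot(Vt.T, s_inv * np.dot(U.T, b))

# Tiny example: with full rank and exact measurements, the original
# image (here a 2-pixel vector) is recovered.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
x_true = np.array([3.0, 4.0])
b = np.dot(A, x_true)
print(truncated_svd_reconstruct(A, b, rank=2))  # close to [3. 4.]
```

Choosing a rank smaller than the number of singular values trades detail for noise robustness, which is what distinguishes the truncated variant listed above from the plain singular value decomposition.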
The waypoints view makes it possible to define a mission when the vehicles are operating in the `Mission_RF_Sensor` mission. You can add waypoints in each table and optionally synchronize between vehicles at each waypoint. It is possible to import and export JSON waypoints for later usage. The waypoints are sent to the vehicles using custom packets.
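For illustration, an exported waypoints file is plain JSON; a hypothetical two-vehicle export with three waypoints per vehicle could look like the fragment below. The actual schema is an assumption here, so export a file from the view to see the real format.

```json
[
    [[0, 0], [0, 10], [10, 10]],
    [[10, 0], [10, 10], [0, 0]]
]
```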
The settings view is a human-friendly interface to the settings files. You can change all settings in this interface, sorted by component and with descriptions and a search filter. Validation checks ensure that the settings values are correct. The settings can be saved in override files on the ground station and also sent to the vehicles, selectable in the save dialog. If some vehicles are not selectable, return to the devices view to discover them.
The framework contains tests to ensure that all components behave the way we expect them to behave and therefore to reduce the risk of introducing regressions during development. The tests also include code coverage reports and other options for profiling and benchmarking. The tests have to be executed from the root folder using the following command:
```
$ make test
```
This command is executed automatically by Travis CI for each pull request or push to a branch.
Compatibility with the `pylint` code style checker is provided to allow testing whether the code follows a certain coding standard and contains no other errors. Some reports may be disabled in `.pylintrc` or through plugins. You can use `pylint mobile-radio-tomography` to scan all files, which is quite slow. Travis CI automatically runs pylint on the Python files that were changed in a commit range, an entire branch or a pull request.
During development, you can enable lint checks in your editor to receive code style help for the currently edited file on the go. For Vim, you can enable Syntastic or use an older pylint compiler script. See the pylint integration documentation for other editors.
The framework is licensed under the GPL v3 license. Refer to the `LICENSE` file for more information.