# Running OSRM
Instructions for compiling OSRM can be found [here](Building OSRM).
Instructions for compiling and running OSRM on an Amazon EC2 Micro Instance are [here](Building and Running OSRM on EC2).
Getting a city up and running in 4 steps:

Note: You must select a profile before running `osrm-extract`. You can select the car profile by running `ln -s ../profiles/car.lua profile.lua`.

```
wget http://download.geofabrik.de/europe/germany/bremen-latest.osm.pbf
./osrm-extract bremen-latest.osm.pbf
./osrm-prepare bremen-latest.osrm
./osrm-routed bremen-latest.osrm
```
Exported OSM data files can be obtained from providers such as Geofabrik. OSM data comes in a variety of formats, including XML and PBF, and contains a plethora of data. It includes information that is irrelevant to routing, such as the positions of public waste baskets, and it does not conform to a hard standard, so important information can be described in various ways. Thus it is necessary to extract the routing data into a normalized format. This is done by the OSRM tool named extractor. It parses the contents of the exported OSM file and writes out a file with the suffix `.osrm` containing the routing data, a file with the suffix `.osrm.restrictions` containing restrictions on making certain turns during navigation, and a file with the suffix `.osrm.names` containing the names of all the roads.
Profiles are used during this process to determine what can be routed along and what cannot (private roads, ...). In order to get a proper `profile.lua`, create a symbolic link from inside your build directory:

```
ln -s ../profiles/car.lua profile.lua
```
It might also be necessary to add a symlink to the profile-lib directory inside your build directory:

```
ln -s ../profiles/lib/
```
Suppose that you download an OSM data file with the name map.osm (an XML file with an `.osm` extension). You would extract this file using OSRM with the following command:

```
./osrm-extract map.osm
```
Extracting a file that contains data for the entire planet will take a few hours, depending mostly on the speed of the hard disks. On a Core i7 with 8 GB RAM and (slow) 5400 RPM Samsung SATA hard disks it took about 65 minutes from a PBF-formatted planet. Your mileage may vary: the faster your disks and processor, the faster it will be, and SSDs are certainly an advantage here. Most of the data is kept on disk, because RAM is scarce and extracting a planet file would otherwise take up dozens of gigabytes of RAM, currently about 35 GB or so. The tool handles bzip2-compressed files as well as PBF files, so both

```
./osrm-extract map.osm.bz2
./osrm-extract map.osm.pbf
```

work fine. PBF is generally the better choice. Note that preprocessing the planet, i.e. the next step, is much more resource-hungry.
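For reference, a hedged sketch of fetching and extracting the full planet in PBF format (the URL is the pattern OpenStreetMap publishes; check planet.openstreetmap.org for the authoritative location, and note this is a very large download):

```
# Make sure the target disk has plenty of room before starting
wget https://planet.openstreetmap.org/pbf/planet-latest.osm.pbf
./osrm-extract planet-latest.osm.pbf
```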
External memory accesses are handled by the STXXL library. Although you can run the code without any special configuration, you might see a warning similar to `[STXXL-ERRMSG] Warning: no config file found`. Given enough free disk space, you can happily ignore the warning, or you can create a config file called `.stxxl` in the same directory where the extractor tool sits. The following is taken from the STXXL manual:

You must define the disk configuration for an STXXL program in a file named `.stxxl` that must reside in the same directory where you execute the program. You can change the default file name for the configuration file by setting the environment variable `STXXLCFG`.
Each line of the configuration file describes a disk. A disk description uses the following format:

```
disk=full_disk_filename,capacity,access_method
```

- `full_disk_filename`: full disk filename. In order to access disks, STXXL uses file access methods. Each disk is represented as a file. If you have a disk that is mapped in Unix to the path /mnt/disk0/, then the correct value for `full_disk_filename` would be /mnt/disk0/some_file_name.
- `capacity`: maximum capacity of the disk in megabytes.
- `access_method`: STXXL has a number of different file access implementations; choose one of them:
  - `syscall`: uses read and write system calls which perform disk transfers directly on user memory pages without superfluous copying (currently the fastest method).
  - `mmap`: performs disk transfers using mmap and munmap system calls.
  - `simdisk`: simulates timings of the IBM IC35L080AVVA07 disk; `full_disk_filename` must point to a file on a RAM disk partition with sufficient space.

An example config file looks like this:

```
disk=/tmp/stxxl,25000,syscall
```
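As a minimal sketch (assuming a Unix-like shell and that /tmp has at least 25 GB free), you can create such a config file right next to the extractor binary, or point the `STXXLCFG` environment variable at one kept elsewhere:

```
# Create a .stxxl config in the current (build) directory:
# one 25,000 MB scratch file on /tmp, accessed via syscall
echo "disk=/tmp/stxxl,25000,syscall" > .stxxl

# Alternatively, keep the config elsewhere and point STXXLCFG at it
export STXXLCFG=$HOME/.stxxl
```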
It is generally a good idea to have the planet file on a separate partition since this avoids a large number of concurrent (read: slow, thanks Dennis S.) read/write disk accesses.
The so-called hierarchy is a large amount of precomputed data that enables the routing engine to find shortest paths quickly. It is created by the command line

```
./osrm-prepare map.osrm
```

where `map.osrm` is the extracted road network and `map.osrm.restrictions` is a set of turn restrictions. Both are generated by the previous step. A nearest-neighbor data structure and a node map are created alongside the hierarchy. Once computation has finished, there should be another four files: `map.osrm.hsgr` (the hierarchy), `map.osrm.nodes` (the node map), `map.osrm.ramIndex` (stage 1 index), and `map.osrm.fileIndex` (stage 2 index).
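A quick way to sanity-check the preprocessing output (a sketch assuming the `map` prefix used above; the exact file set varies by OSRM version, as the table further down shows):

```
# List everything the extractor and osrm-prepare produced
ls -lh map.osrm*
```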
See also the config files page for additional parameters to configure the extractor.
If you run the command with the prefix of all the `.osrm.*` files,

```
./osrm-routed map.osrm
```

it will pick up all dependent files, assuming they are in the same directory. Please note that the actual `.osrm` file is only an intermediate file and is not needed for running the server.

You can access the API on `localhost:5000`. See the server API for details on how to use it.
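As a hedged first query (the exact endpoint depends on your OSRM version; the `viaroute` service shown here belongs to the legacy 0.3.x HTTP API, and the coordinates are arbitrary points in Bremen chosen for illustration):

```
# Ask the HTTP API for a route between two lat,lon pairs
curl "http://localhost:5000/viaroute?loc=53.08,8.80&loc=53.10,8.85"
```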
If you run

```
./osrm-routed --help
```

it will display all command-line options to manually specify any files.
We just released OSRM v0.3.7 with huge improvements for running OSRM in a high-availability production environment. With all these changes, you can load all the data directly into RAM as shared memory. It's as easy as:

```
./osrm-datastore /path/to/data.osrm
```
If there is insufficient available RAM (or not enough space configured), you will receive the following warning when loading data with `osrm-datastore`:

```
[warning] could not lock shared memory to RAM
```
In this case, data will be swapped to a cache on disk, and you will still be able to run queries, but note that caching comes at the price of disk latency. Again, consult the Wiki for instructions on how to configure your production environment.
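If you would rather have the data locked in RAM, one common cause of that warning on Linux is a low memlock limit. A hedged sketch of how to check and raise it (the config path and values are assumptions; adjust for your distribution):

```
# Show the current max locked-memory limit for this shell (in KB)
ulimit -l

# To raise it system-wide, add lines like these to /etc/security/limits.conf
# (assumption: a PAM-based Linux system), then log in again:
#   *  soft  memlock  unlimited
#   *  hard  memlock  unlimited
```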
Starting the routing process and pointing it at shared memory is also very easy:

```
./osrm-routed --shared-memory=yes
```
See this for more information.
OSRM comes with a number of tests using the Cucumber framework. Further information on how to run the tests can be found here.
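A minimal sketch of invoking the suite (assuming the Ruby/Bundler setup the test framework uses; the `verify` profile name is an assumption, so see the testing page for the authoritative invocation):

```
# Install the Ruby dependencies for the test suite, then run it
bundle install
bundle exec cucumber -p verify
```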
| File type | Producer | Consumer | Description |
|---|---|---|---|
| `.osrm` | `osrm-extract` | `osrm-prepare` | Original filtered graph data (nodes and edges) |
| `.restrictions` | `osrm-extract` | `osrm-prepare` | Intermediate representation of turn restrictions |
| `.names` | `osrm-extract` | `osrm-routed` | Street names and index |
| `.nodes` | `osrm-prepare` | `osrm-routed` | Original graph nodes (compressed geometry) |
| `.edges` | `osrm-prepare` | `osrm-routed` | Original graph edges (compressed geometry) |
| `.geometry` | `osrm-prepare` | `osrm-routed` | Geometry that was removed from the original graph |
| `.hsgr` | `osrm-prepare` | `osrm-routed` | Contracted edge-expanded graph (nodes and edges) |
| `.ramIndex` | `osrm-prepare` | `osrm-routed` | Index of the R-tree for segment lookups |
| `.fileIndex` | `osrm-prepare` | `osrm-routed` | Leaves of the R-tree, loaded into memory on demand |
| `.core` | `osrm-prepare` | `osrm-routed` | Indicates which nodes of the graph have been contracted |
Only the files consumed by `osrm-routed` need to be kept after preprocessing.
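For example (a sketch assuming the `map` prefix from above and the file set in the table), the intermediate extractor output can be deleted once `osrm-prepare` has run:

```
# Keep only what osrm-routed reads; drop the intermediate files
rm map.osrm map.osrm.restrictions
```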
I think this page is obsolete. I'll add a few comments, since I tried to compile OSRM recently using Code::Blocks 20.03 on Windows 10. What I found was:

Several extra libraries have to be installed, like BZip2 and Lua. I was using MSYS2, so I did:

```
pacman -S mingw-w64-x86_64-bzip2
pacman -S mingw-w64-x86_64-lua
pacman -S mingw-w64-x86_64-zlib
```

Install Intel TBB from https://github.com/oneapi-src/oneTBB/releases and extract it to a folder (e.g., C:\tbb). Make sure the TBB path is passed to the CMake command, and that the bin folder of MinGW or MSYS2 (e.g., C:\msys64\mingw64\bin) is added to your system PATH environment variable.

Then, in osrm-backend\build, run:

```
cmake -G "CodeBlocks - MinGW Makefiles" .. -DCMAKE_BUILD_TYPE=Release -DTBB_INCLUDE_DIR="C:/tbb/include" -DTBB_LIBRARY="C:/tbb/lib/intel64/gcc4.8/libtbb.so"
cmake -G "CodeBlocks - MinGW Makefiles" .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-fno-lto -mconsole" -DCMAKE_EXE_LINKER_FLAGS="-Wl,-e,mainCRTStartup"
```

That may be enough to compile. However, for some compilation problems I had to:

- remove `-Werror # Treat all warnings like error` in a CMakeLists.txt file
- add in shared_memory.hpp at line 208: `(void)lock_file; // This explicitly marks lock_file as used to avoid an unused-variable warning`
- to avoid a Link Time Optimization (LTO) error, run `cmake -G "CodeBlocks - MinGW Makefiles" .. -DCMAKE_BUILD_TYPE=Release -DIPO=OFF`
- put OFF in `option(ENABLE_LTO "Use Link Time Optimisation" OFF)` and add `set(CMAKE_INTERPROCEDURAL_OPTIMIZATION OFF)` in CMake

I finally gave up because of a Windows console incompatibility (WinMain not found) without finding the reason, even after setting the build target type to "Console application" in the Code::Blocks project properties.