This repository serves as an aggregator for three conceptually different, yet tightly coupled, projects:
- A containerized cross-compiler for TriCore architecture
- A flasher for AURIX development boards (WIP)
- A utility Python package for interacting with both of them (mimicking the PlatformIO workflow: build, flash, monitor)
Both the cross-compiler container and the Python package are published "pre-cooked" on public registries:
Make sure that Python 3 and a Docker Engine are installed on your system, then run:
docker pull francescomecatti/tricore-dev-env:1.0
pip3 install tricore-cli
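To double-check that both pieces are in place, you can list the pulled image and the installed package (plain Docker and pip commands, shown here just as a sanity check):

```bash
# The image and the CLI package should both show up.
docker image ls francescomecatti/tricore-dev-env
pip3 show tricore-cli
```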
You should now be able to run:
docker run -it CONTAINER_TAG
And, in case it exits:
docker start -ai CONTAINER_ID
to restore it.
This is exactly what `tricorecli SUBCMD DIR`
runs under the hood (through the Docker SDK for Python). To make a host directory visible inside the container, the script uses a bind mount.
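For reference, a rough plain-Docker equivalent of what the script does is shown below (a sketch: the /workspace mount point and the use of the current directory are illustrative assumptions, not the exact values used by tricorecli):

```bash
# Roughly what tricorecli does through the Docker SDK: start the container
# with the chosen host directory bind-mounted inside it.
docker run -it \
  -v "$(pwd)":/workspace \
  francescomecatti/tricore-dev-env:1.0
```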
That's all! Move to Examples - Containerized installation for the build process.
First of all, clone this repository:
git clone --recurse-submodules --shallow-submodules https://github.com/mc-cat-tty/tricore-dev-env
In order to install the toolchain on your system, the following steps are required:
- Build binutils for TC target
- Build GCC for TC target
- Build, at least, libgcc for TC target
- Build newlib for TC target
Apart from the third point, this sequence must be kept in order, since binutils (`as`, `ar`, `ld`, etc.) are needed by `gcc`; and, in turn, `newlib` requires a fully functional `gcc` (plus the linker and the assembler) to be built.
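As a concrete reference, this ordering boils down to something like the sketch below (heavily simplified; source directories, configure flags and the TriCore-specific forks are placeholders - the actual automation lives in build-toolchain/install.sh, described next):

```bash
# Simplified sketch of the toolchain build order; all tools target tricore-elf.
export PREFIX=${INSTALL_PATH:-$HOME/tricore-toolchain}
export PATH="$PREFIX/bin:$PATH"
JOBS=8  # pick your own value

# 1. binutils (as, ar, ld, ...)
(cd binutils-build && ../binutils/configure --target=tricore-elf --prefix="$PREFIX" \
  && make -j"$JOBS" && make install)

# 2. gcc and, at least, libgcc (C only)
(cd gcc-build && ../gcc/configure --target=tricore-elf --prefix="$PREFIX" \
  --enable-languages=c --without-headers \
  && make -j"$JOBS" all-gcc all-target-libgcc \
  && make install-gcc install-target-libgcc)

# 3. newlib, built with the freshly installed cross-gcc
(cd newlib-build && ../newlib/configure --target=tricore-elf --prefix="$PREFIX" \
  && make -j"$JOBS" && make install)
```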
The build procedure is automated by the build-toolchain/install.sh script. Feel free to tinker with it. The proposed set of flags makes the compilation successful on both Darwin/arm64 and Linux/amd64 platforms.
The install directory of the script can be customized through the `INSTALL_PATH` environment variable:
INSTALL_PATH=/home/foo bash build-toolchain/install.sh
If `nproc` is missing on your system, replace the `JOBS` variable with a reasonable value; typically 1.5 times the number of cores of your machine.
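If you just need a portable replacement for nproc, something like the sketch below works on both Linux and macOS (getconf with _NPROCESSORS_ONLN is supported by glibc and Darwin); plug the printed value into JOBS:

```bash
# Portable core count when nproc is unavailable (e.g. on macOS).
CORES=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 4)
echo "suggested JOBS value: $(( CORES * 3 / 2 ))"  # roughly 1.5x the core count
```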
First of all, clone this repository:
git clone --recurse-submodules --shallow-submodules https://github.com/mc-cat-tty/tricore-dev-env
Make sure that a Docker Engine is installed and running on your system:
cd tricore-dev-env
docker build -t CONTAINER_TAG .
The `build` subcommand requires a Dockerfile to be present in the build directory. This file instructs the Docker client about the build steps.
You might want to avoid copying some source files during the build process - for instance, because they are not ready to be embedded into the container; .dockerignore serves exactly this purpose.
Both the aforementioned files are located in the top-level directory of this project, so that you can build your own on top of them.
You should now be able to run:
docker run -it CONTAINER_TAG
And, in case it exits:
docker start -ai CONTAINER_ID
to restore it.
Please note that the linker script (.lsl file) is not the default one provided by the Infineon Code Examples. It has been downloaded from this URL, as suggested by AURIX Development Studio - Guide for HighTec Toolchains.
Make sure that the dependencies described here are installed - and running, in the case of the Docker Engine - on your system.
Pick an example, for instance Blinky_LED_1_KIT_TC397_TFT:
cd examples/Blinky_LED_1_KIT_TC397_TFT
tricorecli build .
If the build process is successful, a new build directory should appear in the project's top folder. Inside build you can find two compilation artifacts:
- Blinky
- Blinky.hex
Blinky.hex is the format expected by Infineon MemTool.
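Before loading the .hex into MemTool, an optional sanity check on both artifacts can be done with standard tools:

```bash
# The ELF should be reported as a TriCore binary by file...
file build/Blinky
# ...and every Intel HEX record starts with a ':' character.
head -n 1 build/Blinky.hex
```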
If you have not already exported the binutils and gcc paths into the PATH environment variable, do the following:
export PATH=/your/path/binutils-tc/install/bin/:$PATH
export PATH=/your/path/gcc-tc/install/bin/:$PATH
Be aware that your path may be different. It depends on the configuration of your environment.
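You can then verify that the cross tools are reachable from your shell (standard checks, nothing project-specific):

```bash
# Should resolve to your tricore-elf installation path.
command -v tricore-elf-gcc
tricore-elf-gcc --version
```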
cd examples/Blinky_LED_1_KIT_TC397_TFT
mkdir build
cd build
cmake --toolchain tricore_toolchain.cmake .. && make -j12
12 is the number of jobs I decided to pass to `make`; find your own tuning, usually `1.5 * $(nproc)`.
You should now see the result of the compilation, namely Blinky - for this specific project - in the build folder. Check it with `file Blinky`: if `file` spits out something like `ELF 32-bit [...] Siemens Tricore Embedded Processor`, everything should be OK.
Now let's convert the ELF to the HEX format (already done by the build system, in the latest version of the project): `tricore-elf-objcopy -j .text -j .data -O ihex Blinky Blinky.hex`.
The CMakeLists.txt and tricore_toolchain.cmake do the trick. Noteworthy directives are:
- `set(CMAKE_C_COMPILER tricore-elf-gcc)` chooses the right cross-compiler. Omitting this leads to the use of the system's compiler, which is almost certainly not a compiler that supports the TriCore architecture as a target.
- `set(CMAKE_SYSROOT /tricore-build-tools/tricore-elf)` sets the compiler sysroot, namely the path where libraries like `libc`, `libm`, `crt` (C RunTime), etc. are searched (headers under /include, static/dynamic libraries under /lib). Read more about it in the GCC manual.
- `project(... LANGUAGES C)` disables the C++ language. Enabling C++ raises some errors at the moment.
- `add_compile_definitions(__HIGHTEC__)` defines a macro required by the iLLD libraries.
- `add_compile_options(...)` and `add_link_options(...)` are described here.
- `-mcpu=XXXX` must be coherent with the CPU of your board. Run `tricore-elf-gcc --target-help` to get the complete list of supported CPUs and architectures.
- `include_directories(... /tricore-build-tools/tricore-elf/include)` includes the header files of newlib.
Note that `set(CMAKE_SYSROOT /tricore-build-tools/tricore-elf)` and `include_directories(... /tricore-build-tools/tricore-elf/include)` are probably unnecessary when GCC, binutils and newlib share a single installation path; but they are necessary if the installation paths of these tools differ. In particular, they should point to the install directory of newlib.
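A quick way to double-check which sysroot and header search paths the cross-compiler actually uses is shown below (-print-sysroot and -v are standard GCC driver options; the output depends on how your toolchain was configured):

```bash
# Print the sysroot the cross-compiler was configured with (may be empty).
tricore-elf-gcc -print-sysroot
# Show the effective include search order; the newlib headers should appear here.
echo | tricore-elf-gcc -E -v - 2>&1 | sed -n '/#include <...> search starts here/,/End of search list/p'
```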
At the moment, projects have to be created by hand.
Some useful resources are: