Custom Petalinux platform for AI (Tutorial)
Basic info
Xilinx provides a library for working with machine learning models called Vitis-AI. It includes, among other things, containers for developing machine learning models with TensorFlow and PyTorch, and tools to export and run the models on FPGA hardware. To accelerate machine learning processing, custom co-processor cores called DPUs are implemented on the FPGA and communicate with the system via Xilinx’s libraries. Development of ML models is usually decoupled from the final DPU implementation, but the limitations of the DPU hardware have to be considered during the model design stage.
Most of the sources are provided in the giant GitHub repository: https://github.com/Xilinx/Vitis-AI. Make sure to always check out the appropriate version. Some big changes to the repository structure were made in the switch from v1.4.1 to v2.0.
As mentioned before, the whole project structure involves multiple steps:
- designing, training, evaluating and exporting a machine learning model
- creating system hardware platform
- building Petalinux image and exporting project configuration
- building DPU implementation, including it in the project bitstream and exporting final SD card image
The first step will be omitted from this article. There is already a big repository of pre-built models in the Vitis-AI model zoo: https://github.com/Xilinx/Vitis-AI/tree/master/model_zoo.
These models are compatible with the DPU architecture and are recommended for testing and verification. Towards the end there will be an example of using such a model from the model zoo.
For more information on configuring Petalinux image, read the previous article: Building custom linux images with petalinux.
Since the project in the aforementioned article relies on a BSP from Xilinx that does not include a DPU-compatible hardware platform, it cannot be used. You can redo the configuration in the second step of the process.
Requirements
Build steps
The following steps are taken from Xilinx’s step-by-step tutorial, which can be found here. I copied the provided instructions and added a few modifications, which make editing the Petalinux step easier.
Taking a look into the repository directory under `ref_files` will reveal a top-level Makefile, which can be used to build the entire project at once or each step at a time. The same holds true for each of the steps. Since the steps rely on data from the previous one, it is mandatory to complete each step before executing the next one. All the steps use Tcl scripts and Makefiles to automate the process, but the READMEs provide manual instructions as well.
Step1: Creating hardware platform
In this step a minimal Vivado project for the platform is created using a Tcl script. Here the basic processing system block is connected to the AXI and clock blocks, along with other IO.
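Such Tcl scripts are typically driven through Vivado's batch mode. A minimal sketch of the invocation; the script name `create_platform.tcl` is a placeholder (the real name comes from the tutorial's `ref_files/step1` directory), and the call is guarded so the sketch runs even without Vivado installed:

```shell
# Hypothetical invocation -- the actual Tcl script name may differ.
CMD="vivado -mode batch -source create_platform.tcl"
echo "$CMD" > step1_cmd.log          # keep a record of what would run
if command -v vivado >/dev/null 2>&1; then
    $CMD                             # runs the script and exports the .xsa
else
    echo "vivado not found; would run: $CMD"
fi
```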
The result of this step is a hardware description file ending with `.xsa`. This file is needed for the next step.
Step2: Creating Petalinux project
This step creates a Petalinux project using the `.xsa` file generated in the previous step. Should the hardware platform change in the future, a new `.xsa` file should be retrieved and the project rebuilt with it.
Since this is the step I understand most thoroughly, I’ll spend more time on certain parts of the process (from the point of view of the provided Makefile).
Configuration
The development process for Petalinux mostly involves ticking boxes and entering configuration strings in terminal-based GUIs. This can be time-consuming and frustrating whenever you want to create a new project with a known configuration and lots of options spread over different GUIs. In the case of our project it is important to provide easy and accurate instructions so that any team member can build the same image. To avoid navigating the GUI extravaganza I examined the behaviour of the tools.
Petalinux GUI tools save the current state of the settings to several locations, depending on the configurator:
- `petalinux-config` sets system and build configuration and saves the settings to `<project folder>/project-spec/configs/config`
- `petalinux-config -c rootfs` reads an additional package list from `<project folder>/project-spec/meta-user/conf/user-rootfsconfig`, adds packages to the target filesystem and saves the settings to `<project folder>/project-spec/configs/rootfs-config`
- `petalinux-config -c kernel` sets kernel settings, device drivers, etc. and exports the settings to a new `*.cfg` file, which is saved to `<project folder>/project-spec/meta-user/recipes-kernel/linux/linux-xlnx`
Since the underlying build system is Yocto, Petalinux translates its own configuration data to that of Yocto in `<project folder>/build/tmp`. This happens at the end, after the configuration data is exported and saved to `<project folder>/...`.
The key is to copy custom configuration data to those files and directories and run the configuration in the background without the GUI. This can be achieved with the option `--silentconfig`.
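The idea can be sketched as follows. The project path, the `saved-configs` directory and the hostname option are placeholders for this sketch (in practice the saved config would come from a previously GUI-configured project), and the `petalinux-config` call is guarded so the sketch runs anywhere:

```shell
# Sketch: keep a known-good config under version control, copy it over the
# generated one, and re-run the configurator non-interactively.
PROJ=./my_platform
mkdir -p saved-configs "$PROJ/project-spec/configs"
# Stand-in for a config file saved earlier from a GUI-configured project:
echo 'CONFIG_SUBSYSTEM_HOSTNAME="dpu-board"' > saved-configs/config

# Overwrite the generated config, then configure without opening the GUI:
cp saved-configs/config "$PROJ/project-spec/configs/config"
if command -v petalinux-config >/dev/null 2>&1; then
    (cd "$PROJ" && petalinux-config --silentconfig)
else
    echo "petalinux-config not found -- skipping silent reconfiguration"
fi
```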
Petalinux also creates fallback copies of the config files after each invocation, so it is important to delete those before running the configurator to prevent the tool from restoring any old/default options (speculation on my part).
Another lesson learned the hard way is that adding Yocto layers to the configuration (using the `petalinux-config` menu) before the first invocation results in a configuration error. Therefore, any additional layers, in our case `meta-ros`, are added to the configuration file after the first invocation (using `--silentconfig`).
Yet another shocker is the way disabled options and configurations are determined. When a new project is created, some settings are applied as defaults. When you disable such a default setting (e.g. disabling the default SSH provider dropbear in favour of openssh), Petalinux does not remove it from the config files but instead changes the line:
Enabled: `CONFIG_imagefeature-ssh-server-dropbear=y`
Disabled: `# CONFIG_imagefeature-ssh-server-dropbear is not set`
Without the line indicating that the option is disabled (it is enabled by default), it is NOT disabled in the configuration. Therefore, configuration lines that add, enable or disable options and packages are all added to the config files. To obtain these files I recommend configuring the whole system using the GUI and then saving the config files. You can then use these files to overwrite the generated ones during the non-GUI build. Again, don’t forget that adding layers before the first configuration will fail the process. Add them after.
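Flipping such an option can be scripted with a simple find-and-replace. A sketch using the dropbear line from above; the filename here is a local stand-in for `project-spec/configs/rootfs-config`:

```shell
# Stand-in config file with the default-enabled option:
cat > rootfs-config <<'EOF'
CONFIG_imagefeature-ssh-server-dropbear=y
EOF

# Replace the enabled line with the explicit "is not set" marker:
sed -i 's/^CONFIG_imagefeature-ssh-server-dropbear=y$/# CONFIG_imagefeature-ssh-server-dropbear is not set/' rootfs-config
# Enable the replacement package explicitly:
echo 'CONFIG_imagefeature-ssh-server-openssh=y' >> rootfs-config

cat rootfs-config
```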
Other variables and configuration options, such as those that have to be added to the `build` directory (there is no alternative way to add them in the config files), also have to be added after at least one invocation of the configurator. Some of these settings are applied by finding and replacing default options in the files themselves.
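The find-and-replace approach can be sketched like this. `DL_DIR` and `SSTATE_DIR` are the standard Yocto variables for pre-fetched downloads and the sstate cache; the placeholder strings and the `/opt/petalinux/...` paths are assumptions for this sketch:

```shell
# Stand-in for <project folder>/build/conf/local.conf with placeholders:
mkdir -p build/conf
cat > build/conf/local.conf <<'EOF'
DL_DIR = "@DOWNLOADS@"
SSTATE_DIR = "@SSTATE@"
EOF

# Substitute the placeholders with the real paths on this machine:
sed -i -e 's|@DOWNLOADS@|/opt/petalinux/downloads|' \
       -e 's|@SSTATE@|/opt/petalinux/sstate-cache|' build/conf/local.conf

cat build/conf/local.conf
```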
These steps are implemented in the step2 Makefile and go roughly as follows:
- The `.xsa` file from step1 is copied to the root of the step2 directory.
- A new Petalinux project is created inside `<step2 folder>/build`.
- The Petalinux config (without new layers) is added to `<project folder>/project-spec/configs/config`; local downloads and sstate cache paths replace the default placeholders (depending on where you have them saved on your system).
- The project is processed and configured using the copied `.xsa` file. The tool renames and stores the file as `<project folder>/project-spec/hw-description/system.xsa`.
- The rootfs configuration is added to `<project folder>/project-spec/configs/rootfs-config` and custom recipes are copied over to `<project folder>/project-spec/meta-user`.
- A custom device tree entry (for SD card compatibility) is added by copying system-user.dtsi to `<project folder>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi`.
- Any old config files are deleted and the project is re-configured.
- The `meta-ros` repository is cloned and checked out at the correct revision. User layers for ROS are added to `<project folder>/project-spec/configs/config` and some build-time variables (required by the ROS layers) are added to `<project folder>/build/conf/bblayers.conf`.
- SSH keys are not shipped with the project for security reasons, so the files are located by adding two variables to the build config file `<project folder>/build/conf/local.conf`. The recipe which installs the keys will look for the files using the paths from these variables.
- Any old config files are deleted and the project is re-configured for the last time.
I’ll mention that the way I added the configuration to the project without using the “mandatory” GUI tools was inspired by Xilinx’s tutorial, where they add options and configurations to the project by appending options line by line and running the configurator.
Build
Then the project and SDK build is started. This might take quite a while; if you don’t use any pre-fetched downloads and/or sstate cache directory, it can take half a day to complete. More on pre-fetched downloads and sstate cache in the previous article.
Once the build is finished, the output is packaged into “flashable” formats.
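A sketch of the packaging calls; the flags shown are common `petalinux-package` options, but the exact arguments depend on the project, and the calls are guarded so the sketch runs anywhere:

```shell
# Hypothetical packaging step after petalinux-build has finished.
BOOT="petalinux-package --boot --fsbl --pmufw --atf --u-boot --force"
WIC="petalinux-package --wic"
printf '%s\n%s\n' "$BOOT" "$WIC" > package_cmds.log   # record intended calls
if command -v petalinux-package >/dev/null 2>&1; then
    $BOOT    # BOOT.BIN with FSBL, PMU firmware, ATF and U-Boot
    $WIC     # complete SD card image
else
    echo "petalinux-package not found; see package_cmds.log for the intended calls"
fi
```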
Step3: Creating Vitis platform
This step mostly involves copying various build outputs from the previous one, such as the kernel, bootloader, filesystem image, sdk.sh, etc. These files are copied into the local directories `boot`, `sd_dir` and `sw_comp`; the Vitis platform is created using the previously generated XSA file, and the sysroot installer sdk.sh is run (extracted). The platform information tells the next step how to build the DPU cores and add them to the image.
Step4: Building and integrating DPUs into final image
In this step, the Vitis-AI library is cloned and checked out at version v1.4.1, which is compatible with Xilinx tools v2021.2. Once completed, the Vitis build script from the repository (`Vitis-AI/dsa/DPU-TRD/prj/Vitis/Makefile`) is executed, which builds the DPU cores according to the target specification (`Vitis-AI/dsa/DPU-TRD/prj/Vitis/dpu_conf.vh`) and (`Vitis-AI/dsa/DPU-TRD/prj/Vitis/config_file/prj_config_104_2dpu`). This will implement two DPU cores in the system.
To run the build, just run `make` inside the step4 folder. If you’re running in a container like me, Vitis will fail the DPU build down the line after about 20 minutes of work. Add the following variable before the call: `LD_PRELOAD=/lib/x86_64-linux-gnu/libudev.so.1 make`.
Once the build is completed, the resulting image is located at `<step4 folder>/Vitis-AI/dsa/DPU-TRD/prj/Vitis/binary_container_1/sd_card.img`. Use a flashing tool to flash the image to an SD card. On Linux you can use `sudo cp sd_card.img /dev/sdX` to flash the SD card with no extra tools required, but double-check the device node first (e.g. with `lsblk`), as writing to the wrong disk is destructive.
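Alternatively, `dd` gives explicit control over block size and syncing. A sketch; `/dev/sdX` would be the real target, but here the write goes to a regular file so the example is safe to run:

```shell
# Create a small stand-in image (the real one is sd_card.img from step4):
dd if=/dev/zero of=sd_card.img bs=1M count=4 2>/dev/null
TARGET=./fake-device          # stands in for the real /dev/sdX device node

# Raw-write the image; conv=fsync flushes data before dd exits.
dd if=sd_card.img of="$TARGET" bs=4M conv=fsync 2>/dev/null
cmp -s sd_card.img "$TARGET" && echo "image written and verified"
```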