Custom Petalinux platform for AI (Tutorial)

Publish date: 9 December 2024
Category: Technology
Author: Uros Legat
This article outlines the process of creating a Petalinux image for the ZCU104 development board that is capable of processing machine learning models on dedicated hardware implemented on the FPGA.

Basic info

Xilinx provides a library for working with machine learning models called Vitis-AI. It includes, among other things, containers for developing machine learning models with Tensorflow and Pytorch, and tools to export and run the models on FPGA hardware. To accelerate machine learning processing, custom co-processor cores called DPUs are implemented on the FPGA and communicate with the system via Xilinx’s libraries. Development of ML models is usually decoupled from the final DPU implementation, but the limitations of the DPU hardware have to be considered during the model design stage.
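As a quick reference, pulling and starting the pre-built Vitis-AI development container looks roughly like this (the image name follows the Vitis-AI 1.x documentation; treat the exact tag as an assumption and check the repository docs for your version):

# Pull the CPU-only Vitis-AI development container (verify the tag for your release)
docker pull xilinx/vitis-ai-cpu:latest
# The repository ships a helper script that mounts the current directory as /workspace
./docker_run.sh xilinx/vitis-ai-cpu:latest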

Most of the sources are provided in the (very large) GitHub repository: https://github.com/Xilinx/Vitis-AI. Make sure to always check out the appropriate version; the repository structure changed significantly between v1.4.1 and v2.0.

As mentioned before, the whole project structure involves multiple steps:

  • designing, training, evaluating and exporting a machine learning model
  • creating the system hardware platform
  • building the Petalinux image and exporting the project configuration
  • building the DPU implementation, including it in the project bitstream and exporting the final SD card image

The first step is omitted from this article. There is already a large collection of pre-built models in the Vitis-AI model zoo: https://github.com/Xilinx/Vitis-AI/tree/master/model_zoo.
These models are compatible with the DPU architecture and are recommended for testing and verification. There will be an example of using such a model from the model zoo towards the end.

For more information on configuring a Petalinux image, read the previous article: Building custom Linux images with Petalinux.
Since the project in that article relies on a BSP from Xilinx that does not include a DPU-compatible hardware platform, it cannot be reused here; instead, you can redo the configuration in the second step of the process.

Requirements

This time you will need to install Vivado, Vitis and Petalinux, and all packages have to match in version number. In the previous article, version 2021.2 was chosen for Petalinux, and the same version applies to Vivado and Vitis here. The Vitis-AI library compatible with this version is v1.4.1, which is important when checking out the GitHub repository.

Since the Vivado/Vitis tools need user attention during install, I recommend running an Ubuntu container (provided) for working on your project. After the container is built, mount your project folder and your programs folder, and install Vitis/Vivado to your disk via the container. Vivado/Vitis use a graphical installer which should be easy to navigate. Make sure to select only the device architectures you need, or you’ll waste a lot of disk space and install time.
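Purely as an illustration (the container image name and mount paths below are my own placeholders, not the setup shipped with the project), working in such a container could look like this:

# Hypothetical example: start the Ubuntu build container with the project folder
# and the tools folder mounted from the host; X forwarding is needed for the
# graphical Vivado/Vitis installer
docker run -it --rm \
  -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v "$HOME/projects/zcu104-ai":/work/project \
  -v "$HOME/tools/Xilinx":/tools/Xilinx \
  petalinux-build-env:2021.2 bash
# Inside the container, run the Vivado/Vitis installer and install into /tools/Xilinx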

Build steps

The following steps are taken from Xilinx’s step-by-step tutorial, which can be found here. I copied the provided instructions and added a few modifications, which make editing the Petalinux step easier.

Looking into the repository directory under ref_files reveals a top-level Makefile that can be used to build the entire project at once or one step at a time; the same holds true for each of the steps. Since each step relies on data from the previous one, it is mandatory to complete the previous step before executing the next. All the steps use Tcl scripts and Makefiles to automate the process, but the readmes provide manual instructions as well.

Step1: Creating hardware platform

In this step a minimal Vivado project for the platform is created using a Tcl script. The basic processing block is connected to the AXI and clock blocks along with other IO.

The result of this step is a hardware description file ending in .xsa, which is needed for the next step.
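As a rough sketch (the script name is hypothetical; use the Tcl script and Makefile shipped in the step1 folder), the platform can be built without opening the Vivado GUI:

# Run the platform-creation Tcl script in batch mode; the script finishes by
# exporting the hardware description (.xsa) for the Petalinux step
vivado -mode batch -source create_hw_platform.tcl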

Step2: Creating Petalinux project

This step creates a petalinux project using the .xsa file generated in the previous step. Should the hardware platform change in the future, a new .xsa file should be retrieved and the project rebuilt with it.
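A minimal sketch of this part, with assumed project and file names (the tutorial’s Makefile wraps the equivalent commands):

# Create a new Petalinux project for a Zynq UltraScale+ target
petalinux-create --type project --template zynqMP --name zcu104_dpu
cd zcu104_dpu
# Import the hardware description exported in step1 and configure non-interactively
petalinux-config --get-hw-description=../step1/zcu104_platform.xsa --silentconfig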

Since the overall flow of this step should already be familiar, I’ll focus on certain parts of the process (from the perspective of the provided Makefile).

Configuration

The development process for Petalinux mostly involves ticking boxes and writing configuration strings in terminal-based GUIs. This can be time-consuming and frustrating whenever you want to recreate a project with a known configuration that has many options spread over different GUIs. In the case of our project, it is important to provide simple and accurate instructions so that any team member can build the same image. To avoid navigating this GUI extravaganza, I looked into the behaviour of the tools.

Petalinux GUI tools save the current state of the settings to several locations depending on the configurator:

  • petalinux-config sets the system and build configuration and saves the settings to: <project folder>/project-spec/configs/config
  • petalinux-config -c rootfs reads the additional package list from <project folder>/project-spec/meta-user/conf/user-rootfsconfig, adds packages to the target filesystem and saves the settings to: <project folder>/project-spec/configs/rootfs-config
  • petalinux-config -c kernel sets kernel settings, device drivers, etc. and exports the settings to a new *.cfg file, which is saved to: <project folder>/project-spec/meta-user/recipes-kernel/linux/linux-xlnx

Since the underlying build system is Yocto, Petalinux translates its own configuration data into Yocto’s in <project folder>/build/tmp. This happens at the end, after the configuration data is exported and saved to <project folder>/... .
The key is to copy custom configuration data into those files and directories and run the configuration in the background without the GUI. This can be achieved with the option --silentconfig.
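In shell terms, the idea is roughly the following (paths as described above; where you keep the saved config files is up to you):

# Overwrite the generated system configuration with a previously saved one,
# then let the configurator run without the GUI
cp /path/to/saved/config project-spec/configs/config
petalinux-config --silentconfig
# The same approach works for the rootfs configuration
cp /path/to/saved/rootfs-config project-spec/configs/rootfs-config
petalinux-config -c rootfs --silentconfig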

Petalinux also creates copies of the config files (fallback) after each invocation, so it is important to delete those when running the configurator to prevent the tool from copying any old/default options (speculating).

Another lesson learned the hard way is that adding Yocto layers to the configuration (using the petalinux-config menu) before the first invocation results in a configuration error. Therefore, any additional layers, in our case that’s meta-ros, are added to the configuration file after the first invocation (using --silentconfig).

Yet another shocker is the way disabled options and configurations are determined. When a new project is created, some settings are applied by default. When you disable such a default setting (e.g. disable the default ssh provider dropbear and use openssh instead), Petalinux does not remove it from the config files but instead changes the line:

# Enabled:
CONFIG_imagefeature-ssh-server-dropbear=y
# Disabled:
# CONFIG_imagefeature-ssh-server-dropbear is not set


Without the line indicating that the option is disabled (it is enabled by default), it is NOT disabled in the configuration. Therefore, configuration lines that add, enable or disable options and packages are all added to the config files. To obtain these files, I recommend configuring the whole system using the GUI once and then saving the config files. You can then use these files to overwrite the generated ones during the non-GUI build. Again, don’t forget that adding layers before the first configuration will fail the process; add them after.
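For the dropbear/openssh example above, the saved rootfs config therefore has to carry both lines explicitly (the openssh option name is my assumption, mirroring the dropbear naming pattern):

# CONFIG_imagefeature-ssh-server-dropbear is not set
CONFIG_imagefeature-ssh-server-openssh=y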

Other variables and configuration options, such as those that have to be added to the build directory (there is no alternative way to add them in the config files), also have to be added after at least one invocation of the configurator. Some of these settings are applied by finding and replacing default options in the files themselves.

These steps are implemented in the step2 Makefile, which goes roughly like this:

  • the .xsa file from step1 is copied to the root of the step2 directory
  • a new petalinux project is created inside <step2 folder>/build
  • the petalinux config (without new layers) is added to <project folder>/project-spec/configs/config, and the local downloads and sstate cache paths replace the default placeholders (depending on where you have them saved on your system)
  • the project is processed and configured using the copied .xsa file; the tool renames the file and stores it as <project folder>/project-spec/hw-description/system.xsa
  • the rootfs configuration is added to <project folder>/project-spec/configs/rootfs-config and custom recipes are copied over to <project folder>/project-spec/meta-user
  • the custom device tree entry config (for sd-card compatibility) is added by copying system-user.dtsi to <project folder>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
  • any old config files are deleted and the project is re-configured
  • the meta-ros repository is cloned and checked out at the correct revision; user layers for ROS are added to <project folder>/project-spec/configs/config and some build-time variables (required by the ROS layers) are added to <project folder>/build/conf/bblayers.conf (see the sketch after this list)
  • ssh keys are not shipped with the project for security reasons, so the files are located by adding two variables to the build config file <project folder>/build/conf/local.conf; the recipe that installs the keys will look for the files using the paths from these variables
  • any old config files are deleted and the project is re-configured for the last time
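For the layer-related items above, the non-interactive additions look roughly like this (the layer path, variable syntax and revision placeholder are illustrative; the exact variables required by the ROS layers are not reproduced here):

# Clone meta-ros and pin it to the revision the project expects
git clone https://github.com/ros/meta-ros.git components/meta-ros
git -C components/meta-ros checkout <known-good-revision>
# Register the layer after the first configurator run, then re-configure silently
echo 'CONFIG_USER_LAYER_0="${PROOT}/components/meta-ros"' >> project-spec/configs/config
petalinux-config --silentconfig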

I’ll mention that the way I added the configuration to the project without using the “mandatory” GUI tools was inspired by Xilinx’s tutorial. There they add options and configurations to the project by appending options line by line and running the configurator.

Build

Then the project and SDK build is called. This might take quite a while; if you don’t use any pre-configured downloads and/or sstate cache directory, it can take half a day to complete. More on pre-fetched downloads and sstate cache can be found in the previous article.
Once the build is finished, the output is packaged into “flashable” formats.
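The corresponding commands are the standard Petalinux invocations (packaging arguments beyond --u-boot depend on the project):

# Build the image and the cross-compilation SDK
petalinux-build
petalinux-build --sdk
# Package the boot binaries from the build output
petalinux-package --boot --u-boot --force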

Step3: Creating Vitis platform

This step mostly involves copying various build artifacts from the previous one, such as the kernel, bootloader, filesystem image, sdk.sh, etc. These files are copied into the local directories bootsd_dir and sw_comp, the Vitis platform is created using the previously generated XSA file, and the sysroot sdk.sh is run (extracted). The platform information is then used to relay the system information the next step needs to build the DPU cores and add them to the image.

Step4: Building and integrating DPUs into final image

In this step, the Vitis-AI library is cloned and checked out at version v1.4.1, which is compatible with the Xilinx tools v2021.2. Once completed, the Vitis build script is executed from the repository (`Vitis-AI/dsa/DPU-TRD/prj/Vitis/Makefile`), which builds the DPU core with the specification for the target (`Vitis-AI/dsa/DPU-TRD/prj/Vitis/dpu_conf.vh` and `Vitis-AI/dsa/DPU-TRD/prj/Vitis/config_file/prj_config_104_2dpu`). This will implement two DPU cores in the system.

To run the build, just run make inside the step4 folder. If you’re running in a container like me, Vitis will fail the DPU build down the line, after about 20 minutes of work. Add the following variable before the call: LD_PRELOAD=/lib/x86_64-linux-gnu/libudev.so.1 make.

Once the build is completed, the result is located at <step4 folder>/Vitis-AI/dsa/DPU-TRD/prj/Vitis/binary_container_1/sd_card.img. Use a flashing tool to write the image to an SD card. On Linux you can use sudo cp sd_card.img /dev/sda (substituting your SD card’s device node for /dev/sda) with no extra tools required.
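If you prefer a more conventional tool, dd does the same job; just double-check the device node with lsblk first, since it will differ from system to system:

# Identify the SD card device before writing to it
lsblk
# Write the image (replace /dev/sdX with the SD card's device node)
sudo dd if=sd_card.img of=/dev/sdX bs=4M status=progress conv=fsync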
