Getting started with Khadas VIM3

The Khadas VIM3 is a single-board computer in the same vein as the Raspberry Pi or Jetson Nano, but with a more powerful processor.

It uses the Amlogic A311D SoC and includes a neural processing unit (NPU) for accelerating machine learning models and AI computations.

Ideal for embedded computer vision or deep learning projects!

Installing Linux

The VIM3 comes with an Android OS pre-loaded, which we need to replace with Linux. Khadas has good documentation on how to do this.

First, set up a workspace for Khadas on the host machine (Ubuntu 20):

cd ~
mkdir Khadas
cd Khadas

Choosing OS Image

The first step is to choose an OS. Some decisions to make:

  • Which kernel to use: choose between the older, more stable 4.9 kernel and the latest (but potentially less stable) mainline kernel.
  • Where to store the OS on the board: the image can be installed on an SD card and inserted into the slot, or flashed directly to the onboard storage. The onboard eMMC is faster and more reliable than an SD card.
  • Whether to have the command line only or to install the desktop GUI: we wanted the full desktop, as we want to be able to use the GUI when out and about.

In this case we use the VIM3_Ubuntu-gnome-focal_Linux-4.9_arm64_EMMC_V1.0.9-211217 image, as we want desktop Ubuntu installed to eMMC with the 4.9 kernel.

Flashing OS Image

Make a separate directory for OS images and download the image:

mkdir images
curl https://dl.khadas.com/Firmware/VIM3/Ubuntu/EMMC/VIM3_Ubuntu-gnome-focal_Linux-4.9_arm64_EMMC_V1.0.9-211217.img.xz --output images/VIM3_Ubuntu-gnome-focal_Linux-4.9_arm64_EMMC_V1.0.9-211217.img.xz

Extract the file using unxz:

unxz images/VIM3_Ubuntu-gnome-focal_Linux-4.9_arm64_EMMC_V1.0.9-211217.img.xz
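Note that unxz deletes the .xz archive after extraction by default. If you want to keep a compressed copy around (for example, to re-flash later without downloading again), pass the -k flag instead:

```shell
# -k (--keep) leaves the original .xz archive in place alongside the
# extracted .img file
unxz -k images/VIM3_Ubuntu-gnome-focal_Linux-4.9_arm64_EMMC_V1.0.9-211217.img.xz
```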

To load the OS image onto the board we need the Khadas burning tool. First install the dependencies for the disk tools:

sudo apt-get install libusb-dev git parted

Then download the Khadas utils repository using git:

git clone https://github.com/khadas/utils
cd utils

Then set up the imaging tool:

sudo ./INSTALL

To flash the image and load the new OS, the board needs to be booted into upgrade mode.

  1. Connect the board to the host PC using the USB C connection.
  2. After it's connected, wait for the board to boot up and its red LED to light up.
  3. Find the power and reset buttons. Of the three buttons, power is the one closest to the GPIO pins and reset is the one closest to the USB port.
  4. While holding the power button, click the reset button, and continue to hold the power button for 3 seconds afterwards.
  5. The LED should blink and end up white.
  6. To confirm the board is in upgrade mode, type
lsusb | grep Amlogic

You should see something like:

Bus 002 Device 036: ID 1b8e:c003 Amlogic, Inc.

If not, repeat the upgrade mode sequence. Once you can see the Amlogic device listed as above, run the burn tool:

burn-tool -v aml -b VIM3 -i ../images/VIM3_Ubuntu-gnome-focal_Linux-4.9_arm64_EMMC_V1.0.9-211217.img

You should see output like:

dan@antec:~/Khadas/utils$ burn-tool -v aml -b VIM3 -i ../images/VIM3_Ubuntu-gnome-focal_Linux-4.9_arm64_EMMC_V1.0.9-211217.img
Try to burn Amlogic image...
Burning image '../images/VIM3_Ubuntu-gnome-focal_Linux-4.9_arm64_EMMC_V1.0.9-211217.img' for 'VIM3/VIM3L' to eMMC...
Rebooting the board ........[OK]
Unpacking image [OK]
Initializing ddr ........[OK]
Running u-boot ........[OK]
Create partitions [OK]
Writing device tree [OK]
Writing bootloader [OK]
Wiping  data partition [OK]
Wiping  cache partition [OK]
Writing logo partition [OK]
Writing rootfs partition [OK]
Resetting board [OK]
Time elapsed: 8 minute(s).
Done!
Sat 19 Feb 17:45:01 GMT 2022
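If you expect to flash boards often, the earlier upgrade-mode check can be scripted as a polling loop (a sketch, matching the 1b8e:c003 device ID shown above):

```shell
# Poll until the board enumerates in upgrade mode (Amlogic ID 1b8e:c003),
# giving up after 30 seconds.
for _ in $(seq 1 30); do
    if lsusb | grep -q '1b8e:c003'; then
        echo 'Board in upgrade mode; ready for burn-tool.'
        break
    fi
    sleep 1
done
```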

Running object detection on the NPU

Follow the Tengine docs for running models on the NPU.

Compiling OpenPose for GPU and Python

First install CUDA 10.1: go to NVIDIA's CUDA Toolkit archive and download the installer for CUDA 10.1 on Windows 10.

After the installer has finished, download cuDNN 7.5 for CUDA 10.1 from the NVIDIA cuDNN archive. To install it, extract the cuDNN download, then copy the cuDNN bin file into the CUDA 10.1 bin directory, the cuDNN include file into the CUDA include directory, and the cuDNN lib file into the CUDA lib directory.
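Concretely, the copy step looks something like the following in PowerShell (a sketch only; the cuDNN extraction folder name and the CUDA install path are assumptions, so adjust them to match your downloads and system):

```powershell
# Assumed paths: adjust the cuDNN extraction folder and CUDA version to match.
# In PowerShell, cp is an alias for Copy-Item.
cd cudnn-extracted\cuda
cp bin\cudnn64_7.dll "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\"
cp include\cudnn.h "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include\"
cp lib\x64\cudnn.lib "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\lib\x64\"
```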

Now open a PowerShell in the C: drive, or wherever you want to install OpenPose, and enter:

git clone https://github.com/CMU-Perceptual-Computing-Lab/openpose
cd openpose

There are a few submodules in the OpenPose code that are not downloaded automatically (such as the pybind, clang and caffe modules), so we need to issue an extra command to fetch them:

git submodule update --init --recursive

To configure and compile OpenPose we use CMake (I used 3.17), so download and install that. We also need the Visual Studio 2017 compiler tools, so install those as well; if you already have Visual Studio, you should be able to add the 2017 toolchain through the Visual Studio Installer. For this to work with Python we also need a Python interpreter installed; I have Anaconda, which CMake should find automatically.

CMake is used to configure and prepare the software for our system before it is run through a compiler and turned into executables. Open CMake and click the “Browse Source” button to set the location of the OpenPose code, e.g. E:\OpenPoseDemo\openpose. Then click “Browse Build” to specify where to save the compiled code. Standard practice is to save the compiled executables to a folder called “build” inside the root directory of the OpenPose code, i.e. make a new folder called “build” in the E:\OpenPoseDemo\openpose directory, then select E:\OpenPoseDemo\openpose\build as the location to build the binaries.

[Screenshot: CMake with the source and build locations set]

Click the Configure button and select VS 2017 as the generator, x64 as the platform, leave the optional toolset entry blank, and use the default native compilers. CMake will then use the CMake files in the repo to configure the build and download any extra dependencies and files.

When the configuration is done, CMake should be populated with the parameters of the build. We want to build the Python API, so find the BUILD_PYTHON option and enable it by ticking the checkbox. I also wanted the COCO 18-keypoint estimation model, so I ticked the DOWNLOAD_BODY_COCO_MODEL option. Then click Generate to prepare the software for compilation; downloading the extra models will take a while.
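The same configuration can also be done from the command line instead of the CMake GUI (a sketch, assuming CMake 3.13+ for the -S/-B flags and run from the openpose directory):

```powershell
# "Visual Studio 15 2017" is CMake's generator name for VS 2017; -A x64
# selects the 64-bit platform. The -D flags mirror the GUI tickboxes.
cmake -S . -B build -G "Visual Studio 15 2017" -A x64 -D BUILD_PYTHON=ON -D DOWNLOAD_BODY_COCO_MODEL=ON
```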

[Screenshot: CMake configuration options with BUILD_PYTHON enabled]

Assuming there were no errors, we should be able to click the Open Project button, which opens Visual Studio with the configured OpenPose code ready for compilation. Next to the green play button, use the drop-downs to select “Release” and “x64”, then in the toolbar click Build -> Build Solution.
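Alternatively, the build step can be driven from PowerShell without opening the IDE (a sketch; run from the openpose directory after generation):

```powershell
# Builds the generated solution in the Release x64 configuration
cmake --build build --config Release
```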

[Screenshot: building the solution in Visual Studio]

Assuming that has run without errors, we can now test the code. Back in PowerShell, change into the build\examples\tutorial_api_python directory, run an example, and hopefully see some positive output.

cd build\examples\tutorial_api_python
python 01_body_from_image.py
[Screenshot: output of the example script]