Training YOLOv5, generating an RKNN model, and deploying on the RK3588 platform


1. Server environment configuration

1.1 GPU driver installation

Download GPU driver

https://www.nvidia.cn/geforce/drivers/

Select the corresponding graphics card model and operating system, and click Search

Download the latest version and install it (the default options are fine)

Enter the command in the terminal to see if the installation is correct

nvidia-smi

If driver information is printed, the installation is correct; the "CUDA Version" field in the top-right corner of the nvidia-smi output is the highest CUDA version the driver supports

1.2 Install CUDA

Download CUDA Toolkit

https://developer.nvidia.com/cuda-toolkit-archive

Select the highest supported version or lower

After selecting the system, etc., click Download

All settings are default, go directly to the next step

The installation may fail. If the failure is related to C++ components, install Visual Studio (the 2022 Community edition was used here):

https://visualstudio.microsoft.com/zh-hans/vs/

Installation options are as follows

Reinstall CUDA after installation is complete

1.3 Install CUDNN

Download cuDNN

https://developer.nvidia.com/rdp/cudnn-download

Select the corresponding package to download

After the download is complete, unzip it

Copy the extracted bin, include, and lib contents into the matching directories under this path

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7

1.4 Verify CUDA

terminal input

nvcc -V

If version information is printed, the installation is correct; the release line shows the installed CUDA version

1.5 Install Anaconda

Download Anaconda

https://www.anaconda.com/

Download Anaconda for the corresponding system, taking windows as an example

The installation path can be customized; leave the other options at their defaults. Be sure to check the option that adds Anaconda to the PATH, which avoids configuring environment variables manually

terminal input

conda activate base

If the prompt changes to show (base), the installation is correct

1.6 Use Anaconda to create a yolov5 runtime environment

1.6.1 Install pytorch-GPU

Enter the following command in the terminal to create a yolov5 environment, python version 3.8 (yolov5 requires python version > 3.7, 3.8 is recommended here)

conda create -n yolov5 python=3.8

Confirm the prompts with Enter and yes to complete the installation. The command to activate the environment is as follows

conda activate yolov5


Open pytorch official website

https://www.pytorch.org

Select the pytorch corresponding to the operating system and CUDA version (the CUDA version here cannot be higher than the CUDA version viewed above)

Copy the command shown under "Run this Command". To avoid version mismatches, the pinned command below is recommended. Run it in Anaconda's yolov5 environment and wait for the installation to complete; if it fails, switching to a mirror source usually helps

conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch

To add the Tsinghua mirror channels to conda:

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge 
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/

# Set show channel address when searching 
conda config --set show_channel_urls yes

After the installation is complete, start Python in the yolov5 environment

python

In the Python interpreter, run

import torch
print(torch.cuda.is_available())
print(torch.backends.cudnn.is_available())

If both statements print the following, the GPU build is installed correctly

True 
True

1.6.2 Install the packages required for yolov5 to run

Get the rknn friendly version of yolov5

git clone https://github.com/airockchip/yolov5.git

In Anaconda's yolov5 environment, enter the root directory of the yolov5 project and pip-install the packages; the -i flag uses the Tsinghua mirror for faster downloads

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

Wait for the installation to complete

The packages it contains are as follows. Make sure to complete the installation of the packages included in base (packages required for training and running), logging (logs), and export (model export and format conversion):

# pip install -r requirements.txt

# base ----------------------------------------
Cython
matplotlib>=3.2.2
numpy>=1.18.5
opencv-python>=4.1.2
pillow
PyYAML>=5.3
scipy>=1.4.1
tensorboard>=2.2
torch>=1.6.0
torchvision>=0.7.0
tqdm>=4.41.0

# logging -------------------------------------
# wandb

# coco ----------------------------------------
# pycocotools>=2.0

# export --------------------------------------
# coremltools==4.0
# onnx>=1.8.0
# scikit-learn==0.19.2  # for coreml quantization

# extras --------------------------------------
# thop  # FLOPS computation
# seaborn  # plotting

If a version incompatibility arises, you can adjust package versions yourself, as long as the minimum requirements are still met
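To check an installed version against a minimum from requirements.txt, dotted versions must be compared numerically, not as strings ("1.9" sorts after "1.18" lexically). A dependency-free sketch (the simplified parser below is an illustration, not part of yolov5):

```python
def parse_min_requirement(line):
    """Parse a 'pkg>=x.y.z' requirement line into (package, version tuple)."""
    name, _, version = line.partition(">=")
    return name.strip(), tuple(int(p) for p in version.split("."))

def meets_minimum(installed, required):
    """Compare dotted version strings numerically, e.g. '1.21.0' >= '1.18.5'."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

name, min_ver = parse_min_requirement("numpy>=1.18.5")
print(name, min_ver)                     # numpy (1, 18, 5)
print(meets_minimum("1.21.0", "1.18.5")) # True
print(meets_minimum("1.9.0", "1.18.5"))  # False: (1, 9, 0) < (1, 18, 5)
```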

2. Yolov5 project modification

2.1 Modify models/yolo.py to solve the error:

Line 153 or so:

def _initialize_biases(self, cf=None):  # initialize biases into Detect(), cf is class frequency
        # https://arxiv.org/abs/1708.02002 section 3.3
        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
        m = self.model[-1]  # Detect() module
        for mi, s in zip(m.m, m.stride):  # from
            b = mi.bias.view(m.na, -1)  # conv.bias(255) to (3,85)
            b[:, 4] += math.log(8 / (640 / s) ** 2)  # obj (8 objects per 640 image)
            b[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum())  # cls
            mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)

change into:

def _initialize_biases(self, cf=None):  # initialize biases into Detect(), cf is class frequency
        # https://arxiv.org/abs/1708.02002 section 3.3
        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
        m = self.model[-1]  # Detect() module
        for mi, s in zip(m.m, m.stride):  # from
            b = mi.bias.view(m.na, -1)  # conv.bias(255) to (3,85)
            with torch.no_grad():
                b[:, 4] += math.log(8 / (640 / s) ** 2)  # obj (8 objects per 640 image)
                b[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum())  # cls
            mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)

Wrapping the in-place bias updates in torch.no_grad() fixes the "a leaf Variable that requires grad is being used in an in-place operation" runtime error

2.2 Modify utils/datasets.py:

This change makes the loader compatible with a yolov4-style dataset layout; skip it if your images already live under an images/ directory

Line 342 or so:

def img2label_paths(img_paths):
            # Define label paths as a function of image paths
            sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep  # /images/, /labels/ substrings
            return [x.replace(sa, sb, 1).replace(os.path.splitext(x)[-1], '.txt') for x in img_paths]

change into:

def img2label_paths(img_paths):
            # Define label paths as a function of image paths
            sa, sb = os.sep + 'JPEGImages' + os.sep, os.sep + 'labels' + os.sep  # /images/, /labels/ substrings
            return [x.replace(sa, sb, 1).replace(os.path.splitext(x)[-1], '.txt') for x in img_paths]

This makes the loader compatible with the dataset layout used for yolov4, so v4 and v5 can share the same dataset
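The modified function simply swaps the first JPEGImages path component for labels and the image extension for .txt. A self-contained sketch of the same substitution (the sample path is hypothetical):

```python
import os

def img2label_paths(img_paths):
    # Map .../JPEGImages/x.jpg -> .../labels/x.txt (first occurrence only)
    sa, sb = os.sep + 'JPEGImages' + os.sep, os.sep + 'labels' + os.sep
    return [x.replace(sa, sb, 1).replace(os.path.splitext(x)[-1], '.txt')
            for x in img_paths]

paths = [os.sep.join(['data', 'JPEGImages', '000001.jpg'])]
print(img2label_paths(paths))  # ['data/labels/000001.txt'] on Linux
```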

2.3 Create a new data.yaml:

Create my.yaml in the data directory with the following content (this file describes the dataset); adjust the paths and classes for your own data.

# Training, validation, test set paths, where xxx represents a dataset in the data disk 
train: /media/ubuntu/data/datasets/xxx/2007_train.txt
val: /media/ubuntu/data/datasets/xxx/2007_train.txt
test: /media/ubuntu/data/datasets/xxx/2007_train.txt
# Number of classes 
nc: 2   
# Class 
names: ['red_jeep', 'missile_vehicle']
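A common mistake in this file is an nc value that disagrees with the names list; yolov5 asserts that the two match before training starts. A quick consistency check (the dictionary below stands in for the parsed my.yaml, to keep the sketch dependency-free):

```python
# Stand-in for yaml.safe_load(open('data/my.yaml')) -- same keys as the file above
data_cfg = {
    "train": "/media/ubuntu/data/datasets/xxx/2007_train.txt",
    "nc": 2,
    "names": ["red_jeep", "missile_vehicle"],
}

# Number of classes must equal the number of listed names
assert data_cfg["nc"] == len(data_cfg["names"]), (
    f"nc={data_cfg['nc']} but {len(data_cfg['names'])} names listed"
)
print("data config consistent:", data_cfg["names"])
```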

2.4 Create a new rknn.yaml

Create rknn.yaml in the models directory with the following content (this file is the model structure configuration file)

Since this version of the project uses older modules, the backbone and head were replaced with the yolov5 v6.0 structure to improve performance. Because the v6.0 output layer is incompatible with rknn, the project's own Detect layer is kept to solve this problem. Of course, you can also use the original project's configuration file unchanged.

nc: 2   # number of classes 
depth_multiple: 0.33   # depth magnification 
width_multiple: 0.50   # channel magnification
anchors:
  - [10, 13, 16, 30, 33, 23]  # P3/8
  - [30, 61, 62, 45, 59, 119]  # P4/16
  - [116, 90, 156, 198, 373, 326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
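The depth_multiple and width_multiple values at the top of this file scale the repeat counts and channel widths listed in backbone/head, the same way parse_model does in models/yolo.py: repeats become max(round(n * depth_multiple), 1), and channel counts are scaled then rounded up to a multiple of 8. A sketch of the arithmetic:

```python
import math

def scale_depth(n, depth_multiple=0.33):
    # A block listed with number > 1 has its repeats scaled, with a floor of 1
    return max(round(n * depth_multiple), 1) if n > 1 else n

def scale_width(c, width_multiple=0.50, divisor=8):
    # Channel counts are scaled then rounded up to the nearest multiple of 8
    return math.ceil(c * width_multiple / divisor) * divisor

print(scale_depth(9))     # the C3 stage listed as 9 runs 3 repeats at 0.33
print(scale_depth(3))     # a C3 listed as 3 collapses to a single repeat
print(scale_width(1024))  # the 1024-channel SPPF stage becomes 512 at 0.50
```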

2.5 Add SPPF module

Open the common.py file under the models directory and add the SPPF class below. (The activation function in this file has already been changed to ReLU in this fork, so no activation change is needed.)

class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
    def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            y1 = self.m(x)
            y2 = self.m(y1)
            return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1))
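SPPF is faster than SPP because three stacked k=5 max-pools reproduce the 5/9/13 receptive fields of SPP while reusing intermediate results. The equivalence can be checked with a dependency-free 1-D max-pool (a sketch, not the torch implementation):

```python
def maxpool1d(x, k):
    # Stride-1 max pooling with 'same' padding (pad value -inf),
    # analogous to nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
    p = k // 2
    padded = [float("-inf")] * p + list(x) + [float("-inf")] * p
    return [max(padded[i:i + k]) for i in range(len(x))]

x = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]
y1 = maxpool1d(x, 5)   # SPPF first pool      == SPP branch k=5
y2 = maxpool1d(y1, 5)  # two cascaded k=5s    == SPP branch k=9
y3 = maxpool1d(y2, 5)  # three cascaded k=5s  == SPP branch k=13
assert y2 == maxpool1d(x, 9) and y3 == maxpool1d(x, 13)
print("cascaded k=5 pools match SPP k=9 and k=13")
```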

2.6 Add SPPF module in model construction

Modify the def parse_model function line 219 in models/yolo.py

if m in [Conv, Bottleneck, SPP, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP, C3]:

change into

if m in [Conv, Bottleneck, SPP, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP, C3, SPPF]:

2.7 Error tensor cannot be converted to numpy

This error is caused by a numpy version that is too new

You can uninstall the higher version of numpy first

pip uninstall numpy

Then install numpy1.18.5 to solve the problem

pip install numpy==1.18.5 -i https://pypi.tuna.tsinghua.edu.cn/simple

2.8 Without Pretrained Weights

Modify train.py to remove the use of pre-trained models

parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')

change to:

parser.add_argument('--weights', type=str, default='', help='initial weights path')

2.9 If the page file is too small to complete the operation

OSError: [WinError 1455] The page file is too small to complete the operation. Error loading "D:\anaconda3\envs\Yolo\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.

Increase the virtual memory (page file) for the drive that holds the environment; the size can be chosen according to available disk space

Restart your computer and start training again

3. yolov5 model training

In Anaconda’s yolov5 environment, enter the yolov5 root directory and run the following in the terminal, selecting the model configuration file and dataset file, to start training

python train.py --cfg models/rknn.yaml --data data/my.yaml

After training completes, the results are in the runs folder: the weights directory saves the model at each stage, and results.png shows the training curves.

4. pt format → onnx format

In Anaconda’s yolov5 environment, enter the yolov5 root directory and run the following in the terminal

python models/export.py --weights xx.pt  # relative path of the model to convert; an absolute path such as runs/train/exp/weights/best.pt also works

If the module does not exist, just install it according to requirements.txt

5. onnx format → rknn format

5.1 Virtual Machine Environment Installation

Since rknn-toolkit2 currently only supports Linux, a virtual machine is needed (the older rknn-toolkit does not support the 3588)

The virtual machine software used here is VMware, and the system is Ubuntu 18.04. Because VMware Tools file drag-and-drop does not work reliably on 20.04 and later, 18.04 is recommended; installing the virtual machine itself is not covered here.

5.1.1 Install Anaconda

The official release also provides a Docker image that can be pulled and used directly; the traditional installation method is used here.

Since Ubuntu18.04 comes with python3.6, in order to avoid conflicts and simplify operations, we continue to use Anaconda to create the rknn environment

https://www.anaconda.com/

Download the installer for the corresponding operating system, place it in the home directory, and run it, pressing Enter through the prompts

sudo sh ./Anaconda3-2022.05-Linux-x86_64.sh

Note: when asked at the end whether to add Anaconda to the environment variables, answer yes

Re-open the terminal and you can see that the installation is successful

5.1.2 Anaconda creates a rknn environment and installs rknn-toolkit2

rknn-toolkit2 ships packages for Python 3.6 and Python 3.8; Python 3.8 is used here

conda create -n rknn python=3.8

Get the rknn-toolkit2 installation package and enter the root directory of the project

git clone https://github.com/rockchip-linux/rknn-toolkit2.git
cd rknn-toolkit2

Install the dependencies of rknn-toolkit2 in Anaconda’s rknn environment

pip3 install -r requirements_cp38-1.3.0.txt  -i https://pypi.tuna.tsinghua.edu.cn/simple

Install rknn-toolkit2, if there is a problem of version mismatch, you can uninstall the package and reinstall other versions

pip3 install package/rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl

Test if the installation was successful

cd examples/onnx/yolov5
python3 test.py

Running correctly indicates correct installation

5.2 Model Conversion

Get the official demo

git clone https://github.com/rockchip-linux/rknpu2.git

Enter the yolov5 model conversion directory

cd /home/ptay/rknpu2-master/examples/rknn_yolov5_demo/convert_rknn_demo/yolov5

Put the onnx model to be converted into the onnx_models directory

Open onnx2rknn.py and modify it as follows:

1. Target platform name:

platform = 'rk3588'

2. The onnx model to convert:

MODEL_PATH = './onnx_models/best.onnx'

Then run it in the rknn environment; a new rknn directory containing the rknn model will be generated. The output name can be changed afterwards, or in the py file; this is not important.

python3 onnx2rknn.py

6. RK3588 platform deployment

On the 3588 board, fetch the official demo into the home directory

git clone https://github.com/rockchip-linux/rknpu2.git

Enter the yolov5 directory

cd /home/ptay/rknpu2-master/examples/rknn_yolov5_demo

Modify the header file postprocess.h in the include directory

#define OBJ_CLASS_NUM 2  // change this number to the number of classes in your dataset

Modify the coco_80_labels_list.txt file in the model directory: replace its contents with your own class names and save it

red_jeep
missile_vehicle

Put the converted rknn file in the model/RK3588 directory

Compile by running the build script

bash ./build-linux_RK3588.sh

After success, the install directory is generated

cd install/rknn_yolov5_demo_linux

Put the images to run inference on into the model directory

run

./rknn_yolov5_demo ./model/RK3588/best.rknn ./model/test.jpg

The results are written under rknn_yolov5_demo_linux; because the original data is confidential, a different dataset is used to show the results here

Video and camera inference can be implemented by modifying the official demo yourself
