This application runs on the Atlas 200 DK or an AI acceleration cloud server and uses a semantic segmentation network to perform inference.
The application in the current branch is compatible with DDK & RunTime 1.32.0.0 and later.
Before deploying this sample, ensure that:
- Mind Studio has been installed.
- The Atlas 200 DK developer board has been connected to Mind Studio, the cross compiler has been installed, the SD card has been prepared, and basic information has been configured.
You can use either of the following methods:
1. Quick deployment: visit https://github.com/Atlas200dk/faster-deploy.
- The quick deployment script can be used to deploy multiple samples rapidly. Select segmentation.
- The quick deployment script automatically completes code download, model conversion, and environment variable configuration. To learn about the detailed deployment process, go to 2. Common deployment.
2. Common deployment: visit https://github.com/Atlas200dk/sample-README/tree/master/sample-segmentation.
- In this deployment mode, you need to manually download code, convert models, and configure environment variables.
- Open the project.
As the Mind Studio installation user, open a command-line terminal and go to the directory where the Mind Studio installation package was decompressed, for example, $HOME/MindStudio-ubuntu/bin. Run the following command to start Mind Studio:
./MindStudio.sh
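For reference, the two actions above can be combined as follows (a sketch assuming the default installation directory $HOME/MindStudio-ubuntu):
cd $HOME/MindStudio-ubuntu/bin
./MindStudio.sh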
Open the sample-segmentation project, as shown in Figure 1.
- Configure project information in the src/param_configure.conf file.
Figure 2 Configuration file path
The default configurations of the configuration file are as follows:
remote_host=192.168.1.2
model_name=Fcn8s.om
- remote_host: IP address of the Atlas 200 DK developer board
- model_name: offline model name
- All the parameters must be set. Otherwise, the build fails.
- Do not enclose parameter values in double quotation marks ("").
- Only one model name can be specified in the configuration file. The FCN8s model is used as an example; you can replace it with another model listed in the common deployment procedure by following the same steps.
- Modify the default configurations as required.
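For example, if the developer board is reached over the NIC connection (default IP address 192.168.0.2, see the login step below) and the default FCN8s model is kept, the file would contain the following. This is a sketch only; adjust the values to your own environment.
remote_host=192.168.0.2
model_name=Fcn8s.om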
- Run the deploy.sh script to adjust the configuration parameters and to download and build the third-party libraries. Open the Terminal window of Mind Studio. By default, the home directory of the code is used. Run the deploy.sh script to deploy the environment, as shown in Figure 3 (see the command sketch after the notes below).
Figure 3 Running the deploy.sh script
- During the first deployment, if the third-party libraries have not been deployed before, the script automatically downloads and builds them, which may take a long time. The built libraries can be reused directly in subsequent builds.
- During deployment, select the IP address of the host that communicates with the developer board. Generally, this is the IP address configured for the virtual NIC. If that IP address is in the same network segment as the IP address of the developer board, it is selected automatically for deployment. Otherwise, manually enter the IP address of the host that communicates with the Atlas 200 DK to complete the deployment.
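A minimal command sketch for this step, assuming the project is located at $HOME/AscendProjects/sample-segmentation (the path used later for copying images):
cd $HOME/AscendProjects/sample-segmentation
bash deploy.sh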
- Start the build. Open Mind Studio and choose Build > Build > Build-Configuration from the main menu. The build and run folders are generated in the project directory, as shown in Figure 4.
Figure 4 Build and file generation
Notes:
When you build a project for the first time, Build > Build is unavailable. You need to choose Build > Edit Build Configuration to set parameters before the build.
- Copy the images to be inferred to the $HOME/AscendProjects/sample-segmentation/run/out directory (see the copy command sketch after the image requirements below).
The FCN model is tested using the sample images in the /sample-segmentation/ImageNetRaw folder, and the ERFNet model is tested using the sample images in the /sample-segmentation/ImageCity folder. Copy the required folder to the corresponding location on the developer board.
The image requirements are as follows:
- Format: jpg, png, and bmp
- Width of the input image: an integer ranging from 16px to 4096px
- Height of the input image: an integer ranging from 16px to 4096px
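The following is a sketch of copying the FCN sample images on the Ubuntu server, assuming the ImageNetRaw folder is located under the project directory $HOME/AscendProjects/sample-segmentation (adjust the paths to your environment):
cd $HOME/AscendProjects/sample-segmentation
cp -r ImageNetRaw run/out/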
- On the toolbar of Mind Studio, click Run and choose Run > Run 'sample-segmentation'. The executable application runs on the developer board, as shown in Figure 5.
You can ignore the error information reported during execution, because Mind Studio cannot pass parameters to an executable application. The preceding steps deploy the executable application and its dependent library files to the developer board; you need to log in to the developer board in SSH mode and manually execute the files in the corresponding directory. For details, see the following steps.
- From the Ubuntu server where Mind Studio is located, log in to the developer board (host side) as the HwHiAiUser user in SSH mode.
ssh HwHiAiUser@host_ip
For the Atlas 200 DK, the default value of host_ip is 192.168.1.2 (USB connection mode) or 192.168.0.2 (NIC connection mode).
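For example, in USB connection mode with the default IP address:
ssh HwHiAiUser@192.168.1.2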
- Go to the path of the executable files of the semantic segmentation network application.
Command example:
cd /home/HwHiAiUser/HIAI_PROJECTS/workspace_mind_studio/sample-segmentation_xxxxx/out
- In this path, xxxxx in sample-segmentation_xxxxx is a random combination of letters and digits generated each time the application is built.
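Because the xxxxx suffix is generated randomly, you can list the workspace directory first to find the actual folder name (a sketch; the grep filter is just one way to narrow the listing):
ls /home/HwHiAiUser/HIAI_PROJECTS/workspace_mind_studio/ | grep sample-segmentation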
- Run the application.
Run the run_segmentation.py script to save the images generated by inference to the specified path (a further usage sketch follows the parameter descriptions below).
Command example:
python3 run_segmentation.py -w 500 -h 500 -i ./example.jpg -c 21
- -w/model_width: width of the input image of a model. The value is an integer ranging from 16 to 4096.
- -h/model_height: height of the input image of a model. The value is an integer ranging from 16 to 4096.
- -i/input_path: path of the input image. It can be a directory, in which case all images in that directory are used as input. Multiple inputs can be specified.
- -o/output_path: location of the model inference result image.
- -c/output_categories: category of each pixel in the model inference result. The value is 21 for the FCN model and 19 for the ERFNet model.
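As a further usage sketch, the following command runs inference on all FCN sample images and writes the result images to an output folder. The ./ImageNetRaw and ./output paths are assumptions based on the copy step above; adjust them to where your images actually are.
python3 run_segmentation.py -w 500 -h 500 -i ./ImageNetRaw -o ./output -c 21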
- For other parameters, run the python3 run_segmentation.py --help command and see the help information.