⭐If you use the project template, just modify it directly in the folder.
You need to prepare some resource files; a typical file structure is as follows:
```
my_resource
├── image
│   ├── my_image_1.png
│   └── my_image_2.png
├── model
│   └── ocr
│       ├── det.onnx
│       ├── keys.txt
│       └── rec.onnx
└── pipeline
    ├── my_pipeline_1.json
    └── my_pipeline_2.json
```
You can modify the names of files and folders starting with "my_", but the others have fixed file names and should not be changed. Here's a breakdown:
The files in `my_resource/pipeline` contain the main script execution logic; all JSON files in the directory are read recursively. You can refer to the Task Pipeline Protocol for writing these files, and a simple demo is available for reference.
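As a rough sketch of what such a pipeline file might look like (the node names, template file name, and field values here are illustrative assumptions; the Task Pipeline Protocol is the authoritative reference for the available fields):

```json
{
    "MyStartTask": {
        "recognition": "TemplateMatch",
        "template": "my_image_1.png",
        "action": "Click",
        "next": ["MyNextTask"]
    },
    "MyNextTask": {
        "action": "DoNothing"
    }
}
```

Each top-level key is a task node; `next` chains nodes together, so execution flows from one recognized node to the next.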
Tools:
- JSON Schema
- VSCode Extension
  - Configure resources based on `interface.json`
  - Support going to a task definition, finding task references, renaming tasks, completing task names, and launching a task with a click
  - Support launching as MaaPiCli
  - Support screencap and cropping images after connecting
The files in `my_resource/image` are primarily the images required by the pipeline, such as template matching images and feature detection images. They are read according to the `template` and other fields specified in the pipeline.
Please note that the images used must be cropped from the lossless original screenshot and scaled to 720p. If you use an Android emulator, use the emulator's built-in screenshot function (do not take screenshots of the emulator window directly).
**Unless you know exactly how MaaFramework processes images, use the cropping tools below to obtain them.**
⭐If you use the project template, just follow its documentation and run `configure.py` to automatically deploy the model files.
The files in `my_resource/model/ocr` are ONNX models converted from PaddleOCR.
You can use our pre-converted files: MaaCommonAssets. Choose the language you need and store the files according to the directory structure described above in Prepare Resource Files.
If needed, you can also fine-tune PaddleOCR's official pre-trained models yourself (please refer to the official PaddleOCR documentation) and convert them to ONNX files for use. You can find the conversion commands here.
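As an illustrative sketch of such a conversion (the model directory and file names below are placeholders, and you should check the paddle2onnx documentation for the current flags), a PaddleOCR inference model is typically converted with `paddle2onnx`:

```shell
# Convert a PaddleOCR detection inference model to ONNX.
# --model_dir and the output name are placeholders; adapt them to your model.
paddle2onnx --model_dir ./my_det_infer \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --save_file det.onnx
```

The recognition model is converted the same way, producing `rec.onnx`, and `keys.txt` comes from the corresponding PaddleOCR dictionary file.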
- We recommend using MaaDebugger.
- If you use MaaPiCli, a `config/maa_option.json` file will be generated in the same directory, including:
  - `logging`: save logs and generate `debug/maa.log`. Default: true.
  - `recording`: enable the recording function, which saves all screenshots and operation data during the run; you can use `DbgController` for reproducible debugging. Default: false.
  - `save_draw`: save all image recognition visualization drawings produced during the run. Default: false.
  - `show_hit_draw`: show a pop-up window with the recognition result each time recognition succeeds. Default: false.
  - `stdout_level`: the console log level. Default: 2 (Error); set it to 0 to turn off all console logs, or to 7 to enable all console logs.
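Putting the defaults described above together, a freshly generated `config/maa_option.json` would look roughly like this (a sketch based solely on the options listed here):

```json
{
    "logging": true,
    "recording": false,
    "save_draw": false,
    "show_hit_draw": false,
    "stdout_level": 2
}
```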
- If you integrate it yourself, you can enable the debugging options through the `Toolkit.init_option` / `MaaToolkitConfigInitOption` interface. The generated JSON file is the same as above.
You can integrate MaaFramework using MaaPiCli (Generic CLI) or by writing integration code yourself.
⭐If you use the project template, just follow its documentation and run `install.py` to automatically package the relevant files.
Use the MaaPiCli in the `bin` folder of the Release package, write an `interface.json`, and place it in the same directory to use it.
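As a rough sketch of the shape of an `interface.json` (the field names and values here are assumptions for illustration; refer to the Sample for the authoritative schema):

```json
{
    "controller": [
        { "name": "ADB Controller", "type": "Adb" }
    ],
    "resource": [
        { "name": "Official", "path": ["{PROJECT_DIR}/resource"] }
    ],
    "task": [
        { "name": "My Task", "entry": "MyStartTask" }
    ]
}
```

The `task` entries are what MaaPiCli presents to the user, and each `entry` names a task node defined in your pipeline files.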
The CLI has completed basic feature development, and more features are being continuously improved! The detailed documentation still needs further work; for now, you can refer to the Sample for writing it.
Examples:
Please refer to the Integration Documentation.
Examples: