
pet-axera

This version of pet-axera has been converted to run on the Axera NPU using w8a16 quantization. It is trained on a modified yolov5n to detect pets (dogs and cats) in everyday scenes.

Supported Classes

This model is trained to detect dogs and cats in everyday scenes with one label:

  1. pet
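Because the model exposes a single class, mapping a raw class index from the model output back to a label is trivial. A minimal sketch (the names list and helper below are hypothetical; the repository's inference script may define its own):

```python
# Hypothetical label table for this single-class detector; the actual
# inference script (axmodel_infer_pet.py) may hard-code its own names.
CLASS_NAMES = ["pet"]

def label_for(class_id: int) -> str:
    """Map a raw class index from the model output to its label."""
    return CLASS_NAMES[class_id]

print(label_for(0))  # -> pet
```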

Compatible with Pulsar2 version: 5.2.

Conversion tool links:

For those interested in model conversion, you can export the axmodel through:

Support Platform

https://docs.m5stack.com/zh_CN/ai_hardware/AI_Pyramid-Pro

How to use

Download all files from this repository to the device.

Python environment requirements

pyaxengine

https://github.com/AXERA-TECH/pyaxengine

wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc2/axengine-0.1.3-py3-none-any.whl
pip install axengine-0.1.3-py3-none-any.whl
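After installing the wheel, you can confirm that the package registered correctly before running inference. A small sketch using only the standard library (the helper name is ours, not part of pyaxengine):

```python
from importlib import metadata

def check_package(name: str):
    """Return the installed version string of a package, or None if missing."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None

# After `pip install axengine-0.1.3-py3-none-any.whl` this should print
# "0.1.3"; None means the install did not succeed.
print(check_package("axengine"))
```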

Inference with an AX650 host, such as the M4N-Dock (AXera-Pi Pro)

Input image:

Run:

python3 axmodel_infer_pet.py
root@ax650:~/pet-axera# python3 axmodel_infer_pet.py
[INFO] Available providers:  ['AxEngineExecutionProvider', 'AXCLRTExecutionProvider']
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Chip type: ChipType.MC50
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Engine version: 2.12.0s
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 5.2 eccb31f5
class: pet left:342 top:184 right:598 bottom:680 conf: 92%
class: pet left:660 top:293 right:943 bottom:878 conf: 95%
Saved res to axmodel_res.jpg
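If you want to consume the detections programmatically rather than read the console, the printed lines above can be parsed into structured records. A hypothetical helper (not part of the repository) that assumes the exact log format shown:

```python
import re

# Each detection line printed by axmodel_infer_pet.py looks like:
#   class: pet left:342 top:184 right:598 bottom:680 conf: 92%
LINE_RE = re.compile(
    r"class:\s*(?P<label>\w+)\s+left:(?P<left>\d+)\s+top:(?P<top>\d+)"
    r"\s+right:(?P<right>\d+)\s+bottom:(?P<bottom>\d+)\s+conf:\s*(?P<conf>\d+)%"
)

def parse_detections(log: str):
    """Extract labels, pixel boxes, and confidences from the inference log."""
    dets = []
    for m in LINE_RE.finditer(log):
        d = m.groupdict()
        dets.append({
            "label": d["label"],
            "box": (int(d["left"]), int(d["top"]),
                    int(d["right"]), int(d["bottom"])),
            "conf": int(d["conf"]) / 100.0,
        })
    return dets

log = """class: pet left:342 top:184 right:598 bottom:680 conf: 92%
class: pet left:660 top:293 right:943 bottom:878 conf: 95%"""
for det in parse_detections(log):
    print(det)
```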

Output image:

