Course material: https://github.com/rkuo2000/EdgeAI-AMB82mini
LLM server example code: [AmebaPro2_server/*.py](https://github.com/rkuo2000/EdgeAI-AMB82mini/blob/main/AmebaPro2_server)
Arduino example code: [Arduino/AMB82-mini/](https://github.com/rkuo2000/EdgeAI-AMB82mini/blob/main/Arduino/AMB82-mini)
TCFST (自強基金會) WiFi
SSID: TCFSTWIFI.ALL
Pass: 035623116
32-bit Arm v8M, up to 500MHz, 768KB ROM, 512KB RAM, 16MB Flash (MCM embedded DDR2/DDR3L up to 128MB)
802.11 a/b/g/n WiFi 2.4GHz/5GHz, BLE 5.1, NN Engine 0.4 TOPS, Crypto Engine, Audio Codec, …
HUB-8735 Ultra
https://raw.githubusercontent.com/ideashatch/HUB-8735/main/amebapro2_arduino/Arduino_package/ideasHatch.json
AMB82-mini
main https://github.com/Ameba-AIoT/ameba-arduino-pro2/raw/main/Arduino_package/package_realtek_amebapro2_index.json
dev https://github.com/Ameba-AIoT/ameba-arduino-pro2/raw/dev/Arduino_package/package_realtek_amebapro2_early_index.json
Tools > Board > Boards Manager > install the AMB82 (AmebaPro2) package, version 4.0.9
Serial monitor = 115200 baud
First, connect the AMB82-mini board to the computer's USB port with a Micro-USB cable.
Confirm the UART COM port (on Ubuntu, run sudo chown username /dev/ttyUSB0 first).
Flash the code: click Upload.
C:\Users\user\AppData\Local\Arduino15\packages\realtek\hardware\AmebaPro2\4.0.9-build20250805\libraries
C:\Users\user\AppData\Local\Arduino15\packages\realtek\hardware\AmebaPro2\4.0.9-build20250805\libraries\NeuralNetwork\src
GenAI.h
GenAI.cpp
Examples> 01.Basics > Blink
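For reference, the stock Blink sketch toggles the on-board LED once per second and should run on the AMB82-mini as-is:

```cpp
// Arduino Blink: toggle the on-board LED once per second
void setup() {
  pinMode(LED_BUILTIN, OUTPUT);     // on-board LED
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // LED on
  delay(1000);                      // wait 1 second
  digitalWrite(LED_BUILTIN, LOW);   // LED off
  delay(1000);
}
```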
Examples> 02.Digital > GPIO > Button
Code modifications:
const int buttonPin = 1;        // the number of the pushbutton pin
const int ledPin = LED_BUILTIN; // the number of the LED pin
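With those two modifications, the Button example reads roughly as below (standard Arduino Button logic; only the pin constants differ from the stock sketch):

```cpp
const int buttonPin = 1;         // the number of the pushbutton pin
const int ledPin = LED_BUILTIN;  // the number of the LED pin

int buttonState = 0;             // current reading of the pushbutton

void setup() {
  pinMode(ledPin, OUTPUT);       // LED pin as output
  pinMode(buttonPin, INPUT);     // pushbutton pin as input
}

void loop() {
  buttonState = digitalRead(buttonPin);
  if (buttonState == HIGH) {
    digitalWrite(ledPin, HIGH);  // button pressed: LED on
  } else {
    digitalWrite(ledPin, LOW);   // button released: LED off
  }
}
```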
Examples> 01.Basics > AnalogReadSerial
Code modification: Serial.begin(115200);
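After the change, AnalogReadSerial looks like this (A0 is the pin name used in the stock example; adjust it to the analog pin your sensor is actually wired to):

```cpp
void setup() {
  Serial.begin(115200);              // match the 115200-baud serial monitor used in this course
}

void loop() {
  int sensorValue = analogRead(A0);  // read the analog input
  Serial.println(sensorValue);       // print the value to the serial monitor
  delay(100);                        // slow the output down for readability
}
```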
Documents/Arduino/AMB82-mini (Arduino sketchbook folder)
Examples> WiFi > SimpleTCPServer
WiFi - Simple TCP Server
Examples> WiFi > SimpleHttpWeb > ReceiveData
WiFi - Simple Http Server to Receive Data
Examples> WiFi > SimpleHttpWeb > ControlLED
WiFi - Simple Http Server to Control LED
Sketchbook> AMB82-mini > WebServer_ControlLED
Sketchbook> WebServer_ControlLED
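The idea behind WebServer_ControlLED, as a minimal sketch assuming the Arduino-style WiFi/WiFiServer API provided by the Ameba core (the /H and /L URL convention here is a placeholder; the actual sketch may use a different scheme):

```cpp
#include <WiFi.h>

char ssid[] = "TCFSTWIFI.ALL";   // WiFi SSID (see above)
char pass[] = "035623116";       // WiFi password

WiFiServer server(80);           // HTTP server on port 80

void setup() {
  Serial.begin(115200);
  pinMode(LED_BUILTIN, OUTPUT);
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    Serial.println("Connecting to WiFi...");
    delay(1000);
  }
  Serial.print("Browse to http://");
  Serial.println(WiFi.localIP());
  server.begin();
}

void loop() {
  WiFiClient client = server.available();          // wait for an HTTP client
  if (!client) return;

  String request = client.readStringUntil('\r');   // e.g. "GET /H HTTP/1.1"
  client.flush();

  if (request.indexOf("/H") != -1) digitalWrite(LED_BUILTIN, HIGH);  // /H -> LED on
  if (request.indexOf("/L") != -1) digitalWrite(LED_BUILTIN, LOW);   // /L -> LED off

  client.println("HTTP/1.1 200 OK");               // minimal HTTP response
  client.println("Content-Type: text/html");
  client.println();
  client.println("<a href=\"/H\">LED ON</a> | <a href=\"/L\">LED OFF</a>");
  client.stop();
}
```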
Examples> AmebaBLE > BLEV7RC_CAR_VIDEO
MQTT is an OASIS standard messaging protocol for the Internet of Things (IoT).
How MQTT Works - Beginner's Guide
Examples> AmebaMQTTClient > MQTT_basic
MQTT - Set up MQTT Client to Communicate with Broker
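A minimal sketch of the device side, assuming the Ameba MQTT client follows the familiar PubSubClient API; the broker, topic, and WiFi credentials below are the ones used elsewhere in this handout, so adapt them to the actual MQTT_basic example:

```cpp
#include <WiFi.h>
#include <PubSubClient.h>        // assumed: Ameba's MQTT client follows this API

char ssid[] = "TCFSTWIFI.ALL";
char pass[] = "035623116";
char broker[] = "test.mosquitto.org";  // public test broker, same as the paho-mqtt examples below
char topic[]  = "ntou/edgeai/robot1";  // topic used in the paho-mqtt examples below

WiFiClient wifiClient;
PubSubClient client(wifiClient);

// called for every message received on a subscribed topic
void callback(char* recvTopic, byte* payload, unsigned int length) {
  Serial.print(recvTopic);
  Serial.print(": ");
  for (unsigned int i = 0; i < length; i++) Serial.print((char)payload[i]);
  Serial.println();
}

void setup() {
  Serial.begin(115200);
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) delay(1000);

  client.setServer(broker, 1883);        // default unencrypted MQTT port
  client.setCallback(callback);
  while (!client.connected()) client.connect("amb82-mini");  // MQTT client ID
  client.subscribe(topic);               // listen for commands
  client.publish(topic, "hello from AMB82-mini");
}

void loop() {
  client.loop();   // keep the connection alive and dispatch incoming messages
}
```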
pip install paho-mqtt
Publish messages to the AMB82-mini:
import paho.mqtt.publish as publish
host = "test.mosquitto.org"  # public MQTT test broker
publish.single("ntou/edgeai/robot1", "go to the kitchen", hostname=host)  # topic, payload
Subscribe to messages from the AMB82-mini:
import paho.mqtt.subscribe as subscribe
host = "test.mosquitto.org"
msg = subscribe.simple("ntou/edgeai/robot1", hostname=host)  # blocks until one message arrives
print("%s %s" % (msg.topic, msg.payload.decode("utf-8")))
Google Gemini + Canvas
Prompt: make an html to input MQTT topic and text to publish through Paho-MQTT test.mosquitto.org
Datasheet: VL53L0X - Time-of-Flight ranging sensor
Sketchbook> AMB82-mini > IR_VL53L0X
Sketchbook > AMB82-mini > MPU6050-DMP6v12
Examples> AmebaAnalog > PWM_ServoControl
myservo.attach(8);   // attach the servo signal wire to pin 8
myservo.write(pos);  // move the servo to angle pos (0-180 degrees)
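Put together, a minimal sweep sketch, assuming the AmebaServo class used by the PWM_ServoControl example (the header name may differ between package versions):

```cpp
#include "AmebaServo.h"   // servo class from the Ameba board package (assumption)

AmebaServo myservo;

void setup() {
  myservo.attach(8);      // servo signal wire on pin 8, as in the example
}

void loop() {
  for (int pos = 0; pos <= 180; pos++) {   // sweep from 0 to 180 degrees
    myservo.write(pos);
    delay(15);                             // give the servo time to move
  }
  for (int pos = 180; pos >= 0; pos--) {   // sweep back
    myservo.write(pos);
    delay(15);
  }
}
```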
TB6612
DRV8833
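Both are dual H-bridge motor drivers: two direction pins select the rotation direction and a PWM input sets the speed (the TB6612 also has a STBY pin that must be pulled high). A minimal sketch for one TB6612 channel, with placeholder pin numbers:

```cpp
// Placeholder pin assignments - wire to any free GPIO/PWM-capable pins on the AMB82-mini
const int AIN1 = 2;   // direction pin 1
const int AIN2 = 3;   // direction pin 2
const int PWMA = 4;   // PWM speed pin
const int STBY = 5;   // standby pin, HIGH = driver enabled

void setup() {
  pinMode(AIN1, OUTPUT);
  pinMode(AIN2, OUTPUT);
  pinMode(PWMA, OUTPUT);
  pinMode(STBY, OUTPUT);
  digitalWrite(STBY, HIGH);    // enable the driver
}

void loop() {
  digitalWrite(AIN1, HIGH);    // forward at ~50% duty for 2 s
  digitalWrite(AIN2, LOW);
  analogWrite(PWMA, 128);
  delay(2000);

  digitalWrite(AIN1, LOW);     // stop briefly
  digitalWrite(AIN2, LOW);
  delay(500);

  digitalWrite(AIN1, LOW);     // reverse for 2 s
  digitalWrite(AIN2, HIGH);
  analogWrite(PWMA, 128);
  delay(2000);
}
```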
Sketchbook> AMB82-mini > HTTP_Post_ImageText_TFTLCD
Examples> AmebaSPI > Camera_2_lcd
Camera output, then JPEG-decoded to the TFT-LCD
Compilation error: comment out the following line in Libraries/TJpg_Decoder/src/User_Config.h
//#define TJPGD_LOAD_SD_LIBRARY
Examples> AmebaSPI > Camera_2_Lcd_JPEGDEC
Camera output saved to the SD card, then JPEG-decoded from the card and displayed on the TFT-LCD
Examples> AmebaSPI > LCD_Screen_ILI9341_TFT
LCD Draw Tests
Examples> AmebaMultimedia > StreamRTSP > VideoOnly
Sketchbook> RTSP_VideoOnly
Examples> AmebaMultimedia > MotionDetection > LoopPostProcessing
Examples> AmebaMultimedia > MotionDetection > MotionDetectGoogleLineNotify
Audio & Mic
Examples> AmebaMultimedia > Audio > LoopbackTest
AMB82-mini + PAM8403 + 4ohm 3W speaker
Sketchbook> AMB82-mini > SDCardPlayMP
Sketchbook> AMB82-mini > SDCardPlayMP_All
Examples> AmebaMultimedia > Audio > RTSPAudioStream
Examples> AmebaMultimedia > RecordMP4 > AudioOnly
Examples> AmebaNN > AudioClassification
Examples> AmebaNN > RTSPFaceDetection
Examples> AmebaNN > RTSPFaceRecognition
Serial monitor commands: REG=RKUO (register a face named RKUO)
DEL=SAM (delete the registered face SAM)
RTSP_GarbageClassification.ino
Required on Kaggle for AmebaPro2:
1) pip install tensorflow==2.14.1
2) model.save('garbage_cnn.h5', include_optimizer=False)
Output: network_binary.nb (rename to imgclassification.nb)
Kaggle example:
1) clone the repo https://github.com/WongKinYiu/yolov7
2) create pothole.yaml
%%writefile data/pothole.yaml
train: ./Datasets/pothole/train/images
val: ./Datasets/pothole/valid/images
test: ./Datasets/pothole/test/images
# Classes
nc: 1 # number of classes
names: ['pothole'] # class names
3) YOLOv7-Tiny Fixed Resolution Training
!sed -i "s/nc: 80/nc: 1/" cfg/training/yolov7-tiny.yaml
!sed -i "s/IDetect/Detect/" cfg/training/yolov7-tiny.yaml
best.pt from kaggle.com/rkuo2000/yolov7-potholebest.zip
network_binary.nb (rename to yolov7_tiny.nb)
RTSP_YOLOv7_Pothole
RTSP_YOLOv7_Sushi
RTSP_ObjectDetection_AudioClassification.ino
Download ffmpeg-master-latest-win64-gpl.zip, extract it, and put ffmpeg.exe in the directory where you run the Whisper server.
Examples> AmebaNN > MultimediaAI > GenAIVision
Examples> AmebaNN > MultimediaAI > GenAISpeech_Gemini
Examples> AmebaNN > MultimediaAI > TextToSpeech