🖥️ EdgeBox AI¶
Edge AI Computer
AI inference where it matters — at the machine. The EdgeBox AI is powered by the NVIDIA Jetson Orin NX, delivering up to 100 TOPS of AI performance in a compact, fanless enclosure. Run real-time object detection, quality inspection, and anomaly detection without sending data to the cloud.
Quick Start¶
⚡ Running your first AI model in 20 minutes
1. Flash JetPack OS to the NVMe SSD
2. Connect power (12V DC), HDMI, and a USB keyboard
3. Complete the NVIDIA initial setup wizard
4. Run the included ElephantAI demo (object detection on a USB webcam)

Technical Specifications¶
| AI & Compute | |
|---|---|
| Module | NVIDIA Jetson Orin NX 16GB |
| AI Performance | 100 TOPS |
| GPU | 1024-core NVIDIA Ampere GPU |
| CPU | 8-core ARM Cortex-A78AE @ 2.0 GHz |
| RAM | 16 GB LPDDR5 (unified memory) |

| Storage | |
|---|---|
| NVMe Slot | M.2 2280 PCIe Gen 4 (up to 2 TB) |
| eMMC | 64 GB (OS) |

| I/O | |
|---|---|
| USB | 4 × USB 3.2 Gen 2 (10 Gbps) |
| Ethernet | 2 × 2.5GbE |
| Video In | 2 × MIPI CSI-2 (4K camera) |
| Video Out | 1 × HDMI 2.1, 1 × DP 1.4 |
| Serial | 1 × RS485, 1 × RS232 |

| Power & Physical | |
|---|---|
| Input Voltage | 12 V DC (5.5/2.5 mm barrel) |
| Typical Power | 15–35 W (AI workload dependent) |
| Dimensions | 160 × 110 × 55 mm |
| Cooling | Passive aluminium heatsink (fanless) |
| Operating Temperature | −10 °C to +55 °C |
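When sizing a power supply for the 12 V input, it helps to convert the power envelope above into current draw. A minimal sketch of that arithmetic; the 20% headroom factor is an illustrative assumption, not a vendor requirement:

```python
def required_current_a(power_w: float, voltage_v: float = 12.0,
                       headroom: float = 1.2) -> float:
    """Supply current needed for a given power draw, with safety headroom.

    The 1.2x headroom factor is an assumed rule of thumb, not a spec value.
    """
    return power_w * headroom / voltage_v

# Worst-case AI workload from the spec table: 35 W at 12 V
print(round(required_current_a(35.0), 2))  # → 3.5 (amps, with 20% headroom)
```

In other words, a 12 V supply rated for at least 3.5 A comfortably covers the stated 35 W peak.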
OS Setup¶
Flash JetPack¶
The EdgeBox AI uses NVIDIA JetPack 6.0 (Ubuntu 22.04 based):
```bash
# Use NVIDIA SDK Manager on a Linux host PC
# https://developer.nvidia.com/sdk-manager
# Or flash via USB with the included recovery cable
sudo ./flash.sh jetson-orin-nx-16 mmcblk0p1
```
> **JetPack version:** Use JetPack 6.0 or later. Earlier releases are not supported on the EdgeBox AI.
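To confirm which release is actually installed after flashing, the first line of `/etc/nv_tegra_release` on a Jetson reports the L4T version. A sketch of a parser for that line; the sample string below follows the typical L4T format but should be treated as an assumption, so check your own file:

```python
import re

def l4t_version(release_line: str) -> str:
    """Extract the L4T major/minor version from an /etc/nv_tegra_release line."""
    m = re.search(r"R(\d+).*?REVISION:\s*([\d.]+)", release_line)
    if not m:
        raise ValueError("unrecognised release line")
    return f"{m.group(1)}.{m.group(2)}"

# Example line in the format typically found on JetPack 6.x installs (assumed)
sample = "# R36 (release), REVISION: 3.0, GCID: 12345, BOARD: generic, EABI: aarch64"
print(l4t_version(sample))  # → 36.3.0
```

On the device itself you would read the real line with `open("/etc/nv_tegra_release").readline()`.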
AI Inference¶
Run the ElephantAI demo¶
```bash
# Clone the demo repo
git clone https://github.com/elephantronics/edgebox-ai-demo
cd edgebox-ai-demo

# Install dependencies
pip3 install -r requirements.txt

# Run real-time object detection (YOLOv8)
python3 detect.py --source /dev/video0 --model yolov8n.engine
```
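YOLOv8 models expect a fixed square input (640×640 by default), so camera frames are normally letterboxed: scaled to fit, then padded. A minimal sketch of that geometry; the demo's actual preprocessing code may differ:

```python
def letterbox_geometry(src_w: int, src_h: int, dst: int = 640):
    """Scale factor and padding needed to fit a frame into a dst x dst square."""
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) // 2   # left/right padding
    pad_y = (dst - new_h) // 2   # top/bottom padding
    return scale, (new_w, new_h), (pad_x, pad_y)

# A 1080p webcam frame scaled into the model's 640 x 640 input:
# scaled to 640 x 360, padded 140 px top and bottom
print(letterbox_geometry(1920, 1080))
```

The same scale and padding values are needed again after inference, to map detection boxes back to the original frame.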
Convert your own model to TensorRT¶
```bash
# Convert an ONNX model to a TensorRT engine with trtexec (ships with JetPack)
trtexec --onnx=your_model.onnx \
        --saveEngine=your_model.engine \
        --fp16 \
        --workspace=4096

# For PyTorch models, torch2trt can convert directly, without an ONNX export step
pip3 install torch2trt
```
Benchmark inference speed¶
| Model | Precision | FPS (EdgeBox AI) |
|---|---|---|
| YOLOv8n | FP16 | ~210 |
| YOLOv8m | FP16 | ~85 |
| YOLOv8x | FP16 | ~28 |
| ResNet-50 | INT8 | ~480 |
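A practical way to read this table: divide a model's throughput by your cameras' frame rate to estimate how many live streams one EdgeBox AI can serve. A rough sketch using the figures above; it ignores pre/post-processing and capture overhead, so real headroom will be lower:

```python
def max_streams(model_fps: float, camera_fps: float = 30.0) -> int:
    """Upper bound on concurrent camera streams a model can keep up with."""
    return int(model_fps // camera_fps)

# Throughput figures taken from the benchmark table
for model, fps in [("YOLOv8n", 210), ("YOLOv8m", 85), ("YOLOv8x", 28)]:
    print(f"{model}: up to {max_streams(fps)} stream(s) at 30 fps")
```

Note that YOLOv8x (~28 FPS) cannot keep up with even a single 30 fps stream; drop the camera frame rate or pick a smaller model.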
Camera Input¶
Any UVC-compatible USB camera works out of the box on the USB 3.2 ports. For higher bandwidth and lower latency, connect a compatible MIPI CSI-2 camera (e.g. IMX477) to one of the two CSI ports.
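On Jetson platforms, CSI cameras are typically driven through GStreamer's `nvarguscamerasrc` element and USB cameras through `v4l2src`; the resulting pipeline string can be handed to OpenCV via `cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)`. A sketch of a pipeline-string builder: the element names are standard on JetPack, but the resolutions and frame rates your sensor supports are assumptions, so verify them with `v4l2-ctl --list-formats-ext` first:

```python
def csi_pipeline(width=1920, height=1080, fps=30, sensor_id=0) -> str:
    """GStreamer pipeline for a MIPI CSI camera via nvarguscamerasrc."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM),width={width},height={height},framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! appsink"
    )

def usb_pipeline(device="/dev/video0", width=1280, height=720, fps=30) -> str:
    """GStreamer pipeline for a UVC USB camera via v4l2src."""
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,width={width},height={height},framerate={fps}/1 ! "
        "videoconvert ! appsink"
    )

# e.g. cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
print(csi_pipeline())
```

Using `memory:NVMM` caps keeps CSI frames in GPU-accessible memory until `nvvidconv` converts them, which is why the CSI path has lower CPU overhead than USB capture.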
Troubleshooting¶
| Issue | Fix |
|---|---|
| High temperature warning | Check the ambient temperature; ensure the heatsink fins are unobstructed |
| TensorRT conversion fails | Ensure the JetPack and TensorRT versions match; clear the `/tmp/` cache |
| Camera not detected | Use a USB 3.2 port; list devices with `v4l2-ctl --list-devices` |
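For the temperature warning above, `tegrastats` (shipped with JetPack) prints per-zone temperatures in the form `cpu@48.5C`. A sketch of a parser for one line of that output; the exact zone names and line layout vary between JetPack releases, so treat the sample line as an assumption:

```python
import re

def zone_temps(tegrastats_line: str) -> dict:
    """Extract zone -> temperature (degrees C) pairs from one tegrastats line."""
    return {name.lower(): float(val)
            for name, val in re.findall(r"(\w+)@([\d.]+)C", tegrastats_line)}

# Sample line in the typical tegrastats format (assumed, not captured from hardware)
sample = "RAM 4722/15823MB GR3D_FREQ 45% cpu@48.5C soc0@46C tj@49.2C"
print(zone_temps(sample))  # → {'cpu': 48.5, 'soc0': 46.0, 'tj': 49.2}
```

Logging these values over time makes it easy to tell whether a thermal warning tracks the AI workload or the ambient temperature.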