
🖥️ EdgeBox AI

Edge AI Computer

AI inference where it matters — at the machine. The EdgeBox AI is powered by the NVIDIA Jetson Orin NX, delivering up to 100 TOPS of AI performance in a compact, fanless enclosure. Run real-time object detection, quality inspection, and anomaly detection without sending data to the cloud.


Quick Start

⚡ Running your first AI model in 20 minutes

1. Flash JetPack OS to the NVMe SSD
2. Connect power (12V DC), HDMI, and a USB keyboard
3. Complete the NVIDIA initial setup wizard
4. Run the included ElephantAI demo (object detection on a USB webcam)

Technical Specifications

AI & Compute

| Spec | Value |
|---|---|
| Module | NVIDIA Jetson Orin NX 16GB |
| AI Performance | 100 TOPS |
| GPU | 1024-core NVIDIA Ampere GPU |
| CPU | 8-core ARM Cortex-A78AE @ 2.0 GHz |
| RAM | 16 GB LPDDR5 (unified memory) |

Storage

| Spec | Value |
|---|---|
| NVMe Slot | M.2 2280 PCIe Gen 4 (up to 2TB) |
| eMMC | 64 GB (OS) |

I/O

| Spec | Value |
|---|---|
| USB | 4 × USB 3.2 Gen 2 (10 Gbps) |
| Ethernet | 2 × 2.5GbE |
| Video In | 2 × MIPI CSI-2 (4K camera) |
| Video Out | 1 × HDMI 2.1, 1 × DP 1.4 |
| Serial | 1 × RS485, 1 × RS232 |

Power & Physical

| Spec | Value |
|---|---|
| Input Voltage | 12V DC (5.5/2.5mm barrel) |
| Typical Power | 15–35W (AI workload dependent) |
| Dimensions | 160 × 110 × 55 mm |
| Cooling | Passive aluminium heatsink (fanless) |
| Operating Temperature | -10°C to +55°C |
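The power figures above translate directly into supply sizing. A quick sketch of the arithmetic (the helper and its 25% headroom figure are illustrative, not a vendor recommendation):

```python
def required_supply_current(power_w: float, voltage_v: float = 12.0,
                            headroom: float = 0.25) -> float:
    """Minimum supply current (A) for a given load, with safety headroom."""
    return power_w * (1.0 + headroom) / voltage_v

# Worst-case 35 W AI workload at 12 V with 25% headroom:
current = required_supply_current(35.0)  # ≈ 3.65 A, so a 4 A supply is a safe choice
```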

OS Setup

Flash JetPack

The EdgeBox AI uses NVIDIA JetPack 6.0 (Ubuntu 22.04 based):

# Use NVIDIA SDK Manager on a Linux host PC
# https://developer.nvidia.com/sdk-manager

# Or flash via USB with the included recovery cable
sudo ./flash.sh jetson-orin-nx-16 mmcblk0p1

JetPack version

Always use JetPack 6.0 or later with the EdgeBox AI. JetPack releases earlier than 5.1 have no Orin NX support at all.
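On a running device you can confirm the installed release by reading `/etc/nv_tegra_release`, whose first line names the underlying L4T release (L4T R36.x corresponds to JetPack 6.x). A small parser sketch (the sample line here is illustrative, not copied from a real device):

```python
import re

def l4t_release(line: str) -> "int | None":
    """Extract the L4T major release number from /etc/nv_tegra_release's first line."""
    m = re.search(r"# R(\d+)", line)
    return int(m.group(1)) if m else None

# On the device: line = open("/etc/nv_tegra_release").readline()
line = "# R36 (release), REVISION: 3.0, GCID: 12345, BOARD: generic"
major = l4t_release(line)
print("JetPack 6.x (Orin NX supported)" if major and major >= 36 else "Upgrade JetPack")
```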


AI Inference

Run the ElephantAI demo

# Clone the demo repo
git clone https://github.com/elephantronics/edgebox-ai-demo
cd edgebox-ai-demo

# Install dependencies
pip3 install -r requirements.txt

# Run real-time object detection (YOLOv8)
python3 detect.py --source /dev/video0 --model yolov8n.engine
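To gauge throughput of your own pipeline rather than the bundled demo, a minimal FPS counter works on any callable (the stand-in workload below is a placeholder for a real model's forward pass):

```python
import time

def measure_fps(infer, frames: int = 100) -> float:
    """Time `infer()` over `frames` iterations and return frames per second."""
    start = time.perf_counter()
    for _ in range(frames):
        infer()
    elapsed = time.perf_counter() - start
    return frames / elapsed

# Example with a stand-in workload; substitute your inference call:
fps = measure_fps(lambda: sum(range(1000)), frames=50)
```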

Convert your own model to TensorRT

# Install torch2trt
pip3 install torch2trt

# Convert ONNX → TensorRT engine
trtexec --onnx=your_model.onnx \
        --saveEngine=your_model.engine \
        --fp16 \
        --workspace=4096
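If you convert several models in a batch job, the same flags can be assembled programmatically before handing the command to a shell or `subprocess`. A small helper sketch (the function name is my own, not part of TensorRT):

```python
import shlex

def trtexec_cmd(onnx_path: str, engine_path: str,
                fp16: bool = True, workspace_mb: int = 4096) -> str:
    """Assemble a trtexec invocation matching the flags used above."""
    parts = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if fp16:
        parts.append("--fp16")
    parts.append(f"--workspace={workspace_mb}")
    return shlex.join(parts)

cmd = trtexec_cmd("your_model.onnx", "your_model.engine")
# → 'trtexec --onnx=your_model.onnx --saveEngine=your_model.engine --fp16 --workspace=4096'
```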

Benchmark inference speed

| Model | Precision | FPS (EdgeBox AI) |
|---|---|---|
| YOLOv8n | FP16 | ~210 |
| YOLOv8m | FP16 | ~85 |
| YOLOv8x | FP16 | ~28 |
| ResNet-50 | INT8 | ~480 |
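As a rough guide, a throughput figure implies a per-frame time budget, though throughput and end-to-end latency are not the same thing in a pipelined system:

```python
def latency_ms(fps: float) -> float:
    """Per-frame time in milliseconds implied by a throughput figure."""
    return 1000.0 / fps

# YOLOv8n at ~210 fps leaves roughly 4.8 ms per frame:
print(round(latency_ms(210), 1))  # 4.8
```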

Camera Input

Any UVC-compatible USB camera works out of the box:

import cv2

cap = cv2.VideoCapture(0)  # /dev/video0
if not cap.isOpened():
    raise RuntimeError("No camera found; try `v4l2-ctl --list-devices`")
ret, frame = cap.read()

Alternatively, connect a compatible MIPI camera (e.g. an IMX477) to one of the CSI ports:

# GStreamer pipeline for CSI camera
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! "
    "nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
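If you switch resolutions or use both CSI ports, the same pipeline string can be parameterized. A sketch (`sensor-id` is a standard `nvarguscamerasrc` property selecting the CSI camera):

```python
def csi_pipeline(width: int = 1920, height: int = 1080, fps: int = 30,
                 sensor_id: int = 0) -> str:
    """Build the nvarguscamerasrc GStreamer pipeline used above, parameterized."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM),width={width},height={height},framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! appsink"
    )
```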

Troubleshooting

| Issue | Fix |
|---|---|
| High temperature warning | Check ambient temperature; ensure heatsink fins are unobstructed |
| TensorRT conversion fails | Ensure JetPack and TensorRT versions match; clear the /tmp/ cache |
| Camera not detected | Use a USB 3.2 port; run `v4l2-ctl --list-devices` |