# NVIDIA Jetson Orin Nano Developer Kit
| Supported | SDK | Provisioning |
| --- | --- | --- |
| 🟢 Target actively maintained | 🟢 x86-64 and aarch64 container images | 🟡 NVMe storage |
The Jetson Orin Nano Developer Kit delivers exceptional performance for real-time ML at the edge: up to 67 TOPS of AI compute. Paired with Avocado OS, you can deploy full inference pipelines in minutes, without the typical headaches of cross-compiling or system reboots.
Whether you're building computer vision, robotics, or edge AI applications, this target gets you production-ready fast.
## 📋 Technical Specifications
| Component | Details |
| --- | --- |
| CPU | 6-core Arm Cortex-A78AE v8.2 (1.7 GHz) |
| GPU | NVIDIA Ampere GPU with 1024 CUDA cores and 32 tensor cores |
| AI Performance | Up to 67 TOPS (INT8) |
| Memory | 8 GB 128-bit LPDDR5 |
| Memory Bandwidth | 102 GB/s |
| Storage | 2 x M.2 Key M slots for PCIe NVMe SSDs |
| Connectivity | Single M.2 Key E wireless module with Wi-Fi and Bluetooth |
| Power Modes | 7 W / 15 W / 25 W |
## 🚀 Getting Started
Get up and running with the Avocado Linux SDK in minutes.
### Prerequisites
- A Mac (macOS 10.12+) or Linux (Ubuntu 22.04+, Fedora 39+) development machine
- Docker installed
- 10GB+ available disk space
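Before installing, you can sanity-check the prerequisites from a terminal. This is an optional sketch assuming a POSIX shell; the 10GB figure comes from the list above:

```shell
# Confirm Docker is installed and the daemon is reachable
docker info > /dev/null 2>&1 && echo "Docker: OK" || echo "Docker: not available"

# Show free disk space where the workspace will live (10GB+ recommended)
df -h .
```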
### Installing and running the SDK
1. Install the Avocado CLI and append the location of the `avocado` executable to your `PATH` environment variable.

2. Create your project workspace:

   ```sh
   mkdir avocado-jetson-orin-nano
   ```

3. Initialize a new project:

   ```sh
   cd avocado-jetson-orin-nano
   avocado init
   ```

4. In `avocado.toml`, replace the default `qemux86-64` target with `jetson-orin-nano-devkit-nvme`:

   ```toml
   [runtime.dev]
   target = "jetson-orin-nano-devkit-nvme"
   ```

5. Install all components (SDK, extensions, and runtime dependencies):

   ```sh
   avocado install -f
   ```
## ⚙️ Provisioning
We are actively working on a provisioning guide for the Jetson Orin Nano Developer Kit.
## 🧰 Hardware-in-the-Loop (HIL)
We are actively working on Hardware-in-the-Loop (HIL) development for the Jetson Orin Nano Developer Kit.
## 🤖 Deploying ML Inference with Triton
With Avocado OS, you can deploy NVIDIA's Triton Inference Server in just six commands, with no cross-compiling or reflashing required.
Why it matters:
- Model updates apply live, without device reboots or service restarts
- Works seamlessly with Avocadoβs OTA update infrastructure
- Supports Hardware-in-the-Loop (HIL) testing workflows
See how we built this at Open Source Summit →
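Triton serves models from a model repository directory on the device, which is what enables the live model updates mentioned above. As a sketch (the model name `resnet50`, backend, and tensor shapes here are hypothetical illustrations, not part of the Avocado workflow), a minimal repository might look like:

```protobuf
# model_repository/
# └── resnet50/
#     ├── 1/
#     │   └── model.onnx
#     └── config.pbtxt

# config.pbtxt
name: "resnet50"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

When Triton runs in explicit model-control mode (`--model-control-mode=explicit`), a new model version dropped into the repository can be loaded at runtime through its HTTP API (`POST /v2/repository/models/resnet50/load`), without restarting the server.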
## 🚧 Target Roadmap / Known Limitations
- 🟢 GPU-accelerated ML inference is fully supported
- 🟡 GPU-accelerated video pipelines are under development
- 🟡 NVMe provisioning is under development
- 🟡 Hardware-in-the-Loop (HIL) debugging is under development
- 🔴 Secure boot is not yet supported
- 🔴 Full disk encryption is not yet supported