Feb 10 - Jetson AI Lab Research Group Call - Drones on Jetson & Isaac Lab on DGX Spark
TL;DR
Cameron Rose presents 'Operation Squirrel,' an autonomous drone project using Jetson Orin Nano for real-time target tracking and dynamic payload delivery. The system uses a modular C++ software stack with TensorRT-optimized YOLO and OSNet running at 21 FPS, communicating via UART with a flight controller to maintain following distance through velocity commands.
🚁 Hardware Architecture & Integration
Dual-computer drone setup
The system pairs a Jetson (Orin Nano or AGX Orin) for high-level decision-making with a dedicated flight controller that handles motor commands, connected via UART; the Jetson sends velocity vectors rather than direct motor controls.
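The split above means the companion computer only ever emits framed velocity setpoints. The project's actual stack is C++ and a real system would typically speak MAVLink over the serial link; the Python sketch below uses a purely hypothetical struct-packed frame just to illustrate the "velocity vector over UART" interface.

```python
import struct

# Hypothetical wire format for a velocity setpoint (illustrative only; a
# real flight controller link would use MAVLink or similar):
# sync byte, vx, vy, vz as float32 m/s, 1-byte checksum.
VEL_FMT = "<BfffB"

def pack_velocity(vx: float, vy: float, vz: float) -> bytes:
    """Frame a forward/right/down velocity vector for the flight controller."""
    checksum = int(abs(vx) + abs(vy) + abs(vz)) & 0xFF
    return struct.pack(VEL_FMT, 0xFE, vx, vy, vz, checksum)

def unpack_velocity(frame: bytes) -> tuple[float, float, float]:
    """Decode a frame back into a velocity vector (flight-controller side)."""
    sync, vx, vy, vz, _checksum = struct.unpack(VEL_FMT, frame)
    assert sync == 0xFE, "bad sync byte"
    return vx, vy, vz
```

On the drone these bytes would be written to the Jetson's UART device; the point of the sketch is that the Jetson never touches motor outputs, only this narrow setpoint interface.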
Jetson upgrade improved detection range
Moving from the original Jetson Nano to Orin Nano enabled detecting human targets at 60 meters versus only 10 meters previously, while maintaining 21 FPS inference speed using TensorRT-optimized YOLOv8 small.
Power and weight constraints
The 1.5kg drone with 5000mAh battery achieves approximately 15 minutes of flight time while carrying the Jetson companion computer.
🧠 AI Perception & Tracking Stack
Real-time inference pipeline
The perception stack uses TensorRT, CUDA, and OpenCV to run YOLO for object detection and OSNet for person re-identification, maintaining target IDs across frames to track specific individuals in cluttered environments.
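The re-identification step keeps a stable ID per person by comparing appearance embeddings across frames. A minimal sketch of that idea, using cosine similarity with a greedy nearest match (the real system uses OSNet embeddings and TensorRT; everything here, including the threshold, is a stand-in):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SimpleReID:
    """Assign persistent IDs to detections via embedding similarity."""

    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold
        self.gallery = {}   # id -> most recent embedding
        self.next_id = 0

    def assign(self, embedding) -> int:
        # Greedy match against known identities above the threshold.
        best_id, best_sim = None, self.threshold
        for tid, emb in self.gallery.items():
            sim = cosine(embedding, emb)
            if sim > best_sim:
                best_id, best_sim = tid, sim
        if best_id is None:          # unseen person: register a new ID
            best_id = self.next_id
            self.next_id += 1
        self.gallery[best_id] = embedding  # refresh the stored embedding
        return best_id
```

This is how a tracker can keep following one specific individual even when YOLO returns several person detections per frame.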
Modular containerized deployment
Docker containers enable identical development environments across Windows (WSL), Jetson Orin Nano, and AGX Orin, allowing code to run across platforms with zero changes and easy model swapping.
SLAM limitations on edge hardware
ORB-SLAM3 testing on the Orin Nano resulted in a 2-second latency, making it unusable for real-time navigation, though CUDA-optimized alternatives might perform better.
⚙️ Control Logic & Development
Bounding box to distance mapping
Instead of dedicated range sensors, the system uses bounding-box size as a proxy for distance, converting pixel dimensions to meters and feeding the resulting error signal into a P-controller to generate velocity commands.
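The mapping rests on the pinhole-camera relationship: a target's pixel height scales inversely with its distance. A sketch of the idea with made-up constants (the talk does not give the actual focal length, gain, or setpoint):

```python
# Pinhole model: distance ≈ f * H_real / h_pixels, where f is the focal
# length in pixels and H_real the assumed real-world target height.
FOCAL_PX = 900.0        # hypothetical focal length (pixels)
TARGET_HEIGHT_M = 1.7   # assumed human height (meters)
KP = 0.5                # hypothetical proportional gain

def estimate_distance(bbox_height_px: float) -> float:
    """Convert a bounding-box pixel height into an approximate range in meters."""
    return FOCAL_PX * TARGET_HEIGHT_M / bbox_height_px

def forward_velocity(bbox_height_px: float, desired_m: float = 8.0) -> float:
    """P-controller: positive error (target too far away) -> fly forward."""
    error = estimate_distance(bbox_height_px) - desired_m
    return KP * error
```

With these constants, a person whose box is 153 px tall reads as roughly 10 m away, so at an 8 m setpoint the controller commands about 1 m/s forward.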
Simulation-to-hardware workflow
Development uses an FTDI serial-to-USB device to connect the Jetson to an ArduPilot SITL simulator on a laptop, enabling safe testing of control logic before deploying to the physical drone.
Failsafe lessons from early flights
Initial tests revealed critical logic errors, such as treating a missing detection as a zero-size bounding box so that the error term, and thus the commanded velocity, shot to its maximum, highlighting the need for robust failure handling in autonomous systems.
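That class of bug is avoided with an explicit lost-target guard before any error math runs. A hedged sketch (function name, constants, and gains are all hypothetical, not from the talk):

```python
def safe_velocity(bbox_height_px, desired_m=8.0, kp=0.5, v_max=3.0):
    """Clamped forward velocity with a lost-target failsafe.

    A None/zero bbox must NOT fall through into the error computation:
    a zero-size box would make the range error saturate and command
    maximum velocity -- the failure mode seen in the early flights.
    """
    if not bbox_height_px:          # no target: command zero velocity
        return 0.0
    # Hypothetical pinhole constants: 900 px focal length, 1.7 m target.
    distance_m = 900.0 * 1.7 / bbox_height_px
    error = distance_m - desired_m
    # Clamp the P-controller output so a bad estimate can't saturate thrust.
    return max(-v_max, min(v_max, kp * error))
```

The two defenses, an explicit no-detection branch and an output clamp, are independent, so either one alone would have prevented the runaway-velocity bug.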
Bottom Line
For resource-constrained autonomous drones, use TensorRT-optimized models on Jetson Orin Nano to achieve real-time perception (21+ FPS), containerize your stack for cross-platform development, and implement simulation-to-hardware workflows with ArduPilot SITL to safely iterate control logic before flying.
More from NVIDIA AI Podcast
Apr 14 - Jetson AI Lab Research Group Call - TensorRT Edge LLM on Jetson & Culture
NVIDIA researchers Lynn Chai and Luc introduce TensorRT Edge LLM, a purpose-built inference engine for deploying large language models on Jetson edge devices. They showcase NVFP4 quantization and speculative decoding techniques that achieve up to 7x faster prefill and 500 tokens per second of generation, and preview a simplified vLLM-style Python API coming soon.
March 10 - Jetson AI Lab Research Group Call - Lightning talks
This Jetson AI Lab Research Group call features lightning talks on open-source hardware for remote Jetson access, a real-time emotional AI engine for robots running entirely on Jetson Nano, and updates to the Jetson AI Lab model repository with new performance benchmarks and deployment guides.
Jan 13 - Jetson AI Lab Research Group Call - Accelerating Robotics with Isaac ROS on Jetson
NVIDIA's Isaac ROS team explains how their NITROS framework eliminates costly GPU memory copies in ROS 2 to enable a new era of "Physical AI" where end-to-end learned policies replace traditional robotic control, requiring tight integration of accelerated computing from simulation to deployment on Jetson.
Generating Performant 6G GPU-Accelerated Code From High-Level Programming Languages
NVIDIA's Aerial Framework enables 6G researchers to write radio access network algorithms in Python/JAX and compile them directly to GPU-accelerated TensorRT engines, eliminating the traditional rewrite-to-C++ bottleneck while meeting sub-500-microsecond real-time latency requirements for over-the-air testing.