March 10 - Jetson AI Lab Research Group Call - Lightning talks

| Podcasts | May 04, 2026 | 55:28

TL;DR

This Jetson AI Lab Research Group call features lightning talks on open-source hardware for remote Jetson access, a real-time emotional AI engine for robots running entirely on Jetson Nano, and updates to the Jetson AI Lab model repository with new performance benchmarks and deployment guides.

💻 Remote Device Management 3 insights

JetKVM enables network-based GUI control of Jetson devices

The open-source KVM device connects via USB and HDMI to provide remote keyboard, video, and mouse access over Ethernet, letting developers control Jetson systems without a direct physical connection.

Hardware compatibility varies across Jetson models

JetKVM works seamlessly with Jetson Orin but experiences HDMI compatibility issues with Jetson Nano due to port version inconsistencies, requiring workarounds for broader support.

Security trade-offs differentiate KVM solutions

Unlike NanoKVM, which drew privacy concerns over its embedded microphone, JetKVM offers the network connectivity preferred in corporate environments, though some users favor USB-only alternatives for air-gapped security.

🤖 Edge AI & Emotional Robotics 3 insights

Real-time affect engine runs locally on Jetson Nano

Daniel Richie's system analyzes conversational text using the VAD (Valence, Arousal, Dominance) model from psychology to derive emotional states in under 5 milliseconds, with no cloud connectivity required.
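
To make the VAD idea concrete, here is a minimal, purely illustrative sketch of a lexicon-based VAD scorer. The word table and function names are invented for this example; the actual affect engine described in the talk may use an entirely different method.

```python
# Illustrative only: score text on the VAD (Valence, Arousal, Dominance)
# axes by averaging per-word values from a tiny hand-made lexicon.
# The real engine's approach is not shown in the summary.

VAD_LEXICON = {
    # word: (valence, arousal, dominance), each in [-1.0, 1.0]
    "great": (0.8, 0.5, 0.4),
    "angry": (-0.6, 0.8, 0.5),
    "tired": (-0.3, -0.7, -0.4),
    "calm":  (0.4, -0.6, 0.2),
}

def score_vad(text: str) -> tuple[float, float, float]:
    """Average the VAD values of known words; neutral (0, 0, 0) if none match."""
    hits = [VAD_LEXICON[w] for w in text.lower().split() if w in VAD_LEXICON]
    if not hits:
        return (0.0, 0.0, 0.0)
    n = len(hits)
    return tuple(sum(axis) / n for axis in zip(*hits))

print(score_vad("I feel calm but tired"))  # low arousal, slightly positive valence
```

A table lookup plus an average is trivially fast, which is one way a sub-5 ms budget on a Jetson Nano becomes plausible.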

Emotional states drive physical robot behaviors

The engine maps detected moods to robotic movements and programmable LED eye colors using cinematography principles, combining a fast (<5 ms) reactive loop with a slow (1.5-2 s) baseline-correction loop.
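
The two-timescale design above can be sketched as follows. This is a hedged approximation of the idea, not the talk's implementation: the VAD-to-RGB mapping and the class names are assumptions made for illustration.

```python
# Sketch of a fast reactive loop plus a slow baseline-correction loop.
# Mapping rules and decay constants are invented for illustration.

def vad_to_rgb(valence: float, arousal: float) -> tuple[int, int, int]:
    """Toy rule: valence picks red vs. green, arousal scales brightness."""
    brightness = 0.5 + 0.5 * max(-1.0, min(1.0, arousal))  # 0..1
    red = int(255 * brightness * max(0.0, -valence))       # negative mood -> red
    green = int(255 * brightness * max(0.0, valence))      # positive mood -> green
    return (red, green, 64)                                # fixed blue floor

class AffectState:
    def __init__(self, decay_per_tick: float = 0.05):
        self.baseline = 0.0          # slow-moving "resting" valence
        self.current = 0.0           # fast reactive valence
        self.decay = decay_per_tick  # pull strength toward the baseline

    def fast_update(self, valence: float) -> None:
        """Fast loop (<5 ms in the talk): react immediately to new input."""
        self.current = valence

    def slow_tick(self) -> None:
        """Slow loop (1.5-2 s period in the talk): relax toward the baseline."""
        self.current += self.decay * (self.baseline - self.current)
```

Separating the loops lets the eyes react instantly to a remark while the robot's overall mood drifts back to neutral over a couple of seconds.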

Modular architecture supports diverse hardware

Published as a pip package, the affect engine can process text from any source and output to any compatible device, making it adaptable beyond the Reachy Mini robot demonstration.
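
The "any source in, any device out" modularity could look something like the pattern below. Every name here (classes, functions) is hypothetical; the summary does not describe the real pip package's API, only that it decouples text input from output hardware.

```python
# Hypothetical sketch of a source -> affect -> device pipeline with no
# hard-coded hardware. The real package's API may differ entirely.
from typing import Callable, Protocol

class OutputDevice(Protocol):
    def show(self, mood: str) -> None: ...

class ConsoleEyes:
    """Stand-in 'device' that prints, where a robot would set LED colors."""
    def show(self, mood: str) -> None:
        print(f"[eyes] {mood}")

def run_pipeline(read_text: Callable[[], str],
                 classify: Callable[[str], str],
                 device: OutputDevice) -> str:
    """Pull text from any source, classify it, and emit to any device."""
    mood = classify(read_text())
    device.show(mood)
    return mood

run_pipeline(lambda: "what a great day",
             lambda t: "happy" if "great" in t else "neutral",
             ConsoleEyes())
```

Swapping `ConsoleEyes` for a Reachy Mini driver, or the lambda for a microphone transcript, is the kind of substitution the modular design would allow.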

📊 Developer Resources & Community 2 insights

Jetson AI Lab page adds performance benchmarks

The refreshed repository page now lists tested open-source models with detailed TPS (tokens per second) metrics across different Jetson hardware variants and concurrency levels.
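
As a small aside, a tokens-per-second figure like those on the page can be derived from a benchmark run roughly as follows; the numbers below are made up for illustration.

```python
# Toy TPS calculation: aggregate throughput across concurrent requests.
def tokens_per_second(generated_tokens: int, wall_seconds: float,
                      concurrency: int = 1) -> float:
    """Assumes each of `concurrency` parallel requests produced the same count."""
    return generated_tokens * concurrency / wall_seconds

print(tokens_per_second(512, 8.0, concurrency=4))  # -> 256.0
```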

Community-driven model testing requested

NVIDIA actively solicits community input to identify popular models needing Jetson support, offering to create detailed deployment guides and validation for both NVIDIA and third-party models.

Bottom Line

The Jetson ecosystem enables sophisticated edge AI applications—from emotional robotics to remote device management—entirely on local hardware, with community collaboration driving the expansion of supported models and use cases.

More from NVIDIA AI Podcast

Apr 14 - Jetson AI Lab Research Group Call - TensorRT Edge LLM on Jetson & Culture (51:38)

NVIDIA researchers Lynn Chai and Luc introduce TensorRT Edge LLM, a purpose-built inference engine for deploying large language models on Jetson edge devices, showcasing NVFP4 quantization and speculative decoding techniques that achieve up to 7x faster prefill speeds and 500 tokens per second generation while previewing a simplified vLLM-style Python API coming soon.

Feb 10 - Jetson AI Lab Research Group Call - Drones on Jetson & Isaac Lab on DGX Spark (57:34)

Cameron Rose presents 'Operation Squirrel,' an autonomous drone project using Jetson Orin Nano for real-time target tracking and dynamic payload delivery. The system uses a modular C++ software stack with TensorRT-optimized YOLO and OSNet running at 21 FPS, communicating via UART with a flight controller to maintain following distance through velocity commands.
