DreamFlow

Local Navigation Beyond Observation via Conditional Flow Matching in the Latent Space

ICRA 2026

¹Robotics Program, KAIST, ²School of Electrical Engineering, KAIST, ³URobotics, ⁴KRAFTON, ⁵UC Berkeley
*Equal contribution   Corresponding author

Abstract

Local navigation in cluttered environments is challenging: dense obstacles create frequent local minima. Conventional local planners rely on hand-crafted heuristics and are prone to failure, while deep reinforcement learning (DRL)-based approaches offer adaptability but are constrained by limited onboard sensing. Because the robot cannot perceive structures outside its field of view, these limitations lead to navigation failures.

In this paper, we propose DreamFlow, a DRL-based local navigation framework that extends the robot's perceptual horizon through conditional flow matching (CFM). The CFM-based prediction module learns a probabilistic mapping from the latent representation of the local height map to a broader spatial representation, conditioned on the navigation context. This enables the navigation policy to anticipate unobserved environmental features and proactively avoid potential local minima.

Experimental results demonstrate that DreamFlow outperforms existing methods in terms of latent prediction accuracy and navigation performance in simulation. The proposed method was further validated in cluttered real-world environments with a quadrupedal robot.

DreamFlow vs. Baseline

DreamFlow predicts terrain beyond the sensor range, enabling collision-free navigation where the baseline fails.

Baseline

DreamFlow (Ours)

Method

The overall architecture of DreamFlow is an asymmetric actor-critic framework. The actor encodes a local height map into a local environmental latent representation. The pre-trained CFM module then predicts an extended latent vector—representing terrain beyond the sensor range—conditioned on the robot's proprioceptive context. The navigation policy takes both the local and predicted extended latents as input to produce velocity actions, while a pre-trained locomotion policy serves as the low-level controller.

DreamFlow Framework
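The actor's inference path described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`predict_extended_latent`, `act`), the Euler integration of the velocity field, and the step count are all assumptions for clarity.

```python
import numpy as np

def predict_extended_latent(z_local, context, velocity_field, n_steps=8):
    """Integrate the learned CFM velocity field from the local latent
    toward the extended latent with simple Euler steps (sketch).

    velocity_field(z, t, context) -> dz/dt is the trained network;
    its name and signature are illustrative assumptions."""
    z = z_local.copy()
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        z = z + dt * velocity_field(z, t, context)
    return z

def act(height_map_encoder, velocity_field, policy, height_map, proprio):
    """One actor step: encode, predict beyond the sensor range, act."""
    z_local = height_map_encoder(height_map)          # local latent
    z_ext = predict_extended_latent(z_local, proprio, velocity_field)
    obs = np.concatenate([z_local, z_ext, proprio])   # policy input
    return policy(obs)                                # velocity action
```

A fixed-step Euler solver is the simplest choice here; any ODE integrator over the flow time t ∈ [0, 1] would serve the same role.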

CFM Training Pipeline

The CFM training pipeline collects latent pairs from local and extended height maps using pre-trained VAE encoders. The velocity field learns to transport the local latent towards the extended latent, conditioned on the proprioceptive context. This enables the model to "dream" about unseen terrain from partial observations.

CFM Training Pipeline

Simulation Environment

The simulation environment was built using IsaacGym. During training, obstacles of varying sizes are randomly distributed on a flat terrain. We designed two evaluation environments—Maze and Hallway—to test the robot's ability to avoid local minima and navigate through confined spaces.

[Figure: simulation environments 1–6, plus the Maze and Hallway evaluation environments]
Experimental Setup
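Obstacle randomization of the kind described above can be sketched as follows. The arena size, obstacle count, and size range are illustrative assumptions, not the paper's actual training configuration.

```python
import numpy as np

def sample_obstacles(n, arena=10.0, size_range=(0.3, 1.2), rng=None):
    """Randomly scatter box obstacles on flat terrain (sketch).

    Returns (n, 2) obstacle centers within an arena of side `arena`,
    and (n, 2) footprint widths/depths drawn from `size_range`."""
    if rng is None:
        rng = np.random.default_rng()
    xy = rng.uniform(-arena / 2, arena / 2, size=(n, 2))  # positions
    sizes = rng.uniform(*size_range, size=(n, 2))         # footprints
    return xy, sizes
```

Resampling positions and sizes every episode gives the variety of clutter that makes the policy robust to unseen layouts.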

Height Map Visualization

DreamFlow extends the robot's perceptual horizon by predicting latent representations of terrain beyond the onboard sensor range. Below we visualize the local height map (limited sensor range), extended height map (privileged ground truth), and the combined visualization.

Local Height Map

Extended Height Map

Combined Visualization
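A combined view like the one above can be produced by overlaying the observed local map on the extended map wherever the sensor actually saw terrain. The mask-based merge below is a visualization sketch under that assumption, not the paper's rendering code.

```python
import numpy as np

def combine_height_maps(local_hm, extended_hm, local_mask):
    """Merge two height maps for visualization (sketch).

    local_mask is True where the local map holds a real observation;
    elsewhere we fall back to the extended (predicted or privileged)
    map. Mask semantics are an assumption for illustration."""
    return np.where(local_mask, local_hm, extended_hm)
```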

Simulation Results

We compared DreamFlow against three baselines across Maze (Easy/Hard) and Hallway environments. DreamFlow achieves the highest success rate (SR) and lowest collision rate (CR), demonstrating its ability to avoid local minima and collisions.

Method            | Maze (Easy)     | Maze (Hard)     | Hallway
                  | SR↑  SPL↑  CR↓  | SR↑  SPL↑  CR↓  | SR↑  SPL↑  CR↓
Baseline          | 83.2 0.23  3.9  | 76.5 0.37 54.8  | 35.8 0.21  5.1
Zhang et al.      | 95.3 0.33  2.5  |  5.4 0.03 15.6  | 25.1 0.12  4.9
Diffusion         | 88.4 0.28  3.1  | 68.9 0.32 43.9  | 33.9 0.19 23.6
DreamFlow (Ours)  | 99.6 0.35  0.9  | 83.1 0.45  8.9  | 89.8 0.58  2.3
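For reference, the three metrics can be computed as below. SR and CR are taken to be success and collision percentages, and SPL follows the standard success-weighted-by-path-length definition (Anderson et al.); the paper's exact metric definitions are not stated here, so treat this as an assumption.

```python
import numpy as np

def navigation_metrics(successes, shortest, actual, collisions):
    """SR (%), SPL, CR (%) over a batch of episodes (sketch).

    successes/collisions: 0-1 indicators per episode.
    shortest/actual: shortest-path and traveled path lengths."""
    successes = np.asarray(successes, dtype=float)
    shortest = np.asarray(shortest, dtype=float)
    actual = np.asarray(actual, dtype=float)
    sr = 100.0 * successes.mean()
    # SPL weights each success by how close the path was to optimal
    spl = float(np.mean(successes * shortest / np.maximum(actual, shortest)))
    cr = 100.0 * np.mean(np.asarray(collisions, dtype=float))
    return sr, spl, cr
```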

Trajectory Comparison

DreamFlow demonstrates smoother trajectories with better obstacle avoidance and more efficient path selection. Other methods show frequent obstacle contacts and often get stuck in local minima.

Trajectory Comparison across Maze and Hallway
Additional Simulation Results

Real-World Experiments

We validated DreamFlow on a Unitree Go2 quadrupedal robot equipped with two Livox Mid-360 LiDARs. We tested in two real-world environments: a narrow corridor with tight passages, and a cluttered environment with box obstacles and wall segments.

Real-world Experiment Snapshots

Narrow Corridor

The baseline frequently collides with walls at corners, while DreamFlow achieves collision-free navigation by anticipating corridor layouts beyond its immediate perception.

Baseline

DreamFlow (Ours)

Corridor Navigation

Side-by-side comparison from two camera angles in a corridor environment.

Baseline (Cam 1)

DreamFlow (Cam 1)

Baseline (Cam 2)

DreamFlow (Cam 2)

Cluttered Environment

In cluttered environments with box obstacles and wall segments, DreamFlow successfully navigates without collisions through predictive terrain modeling.

Baseline

DreamFlow (Ours)