HERO
Learning Humanoid End-Effector ContROl for
Open-Vocabulary Visual Loco-Manipulation

Runpei Dong*     Ziyan Li*     Xialin He     Saurabh Gupta    

Autonomous Visual Loco-Manipulation of Novel Objects in Novel Scenes

Office — "Mug"

Coffee Shop — "Starbucks coffee"

Kitchenette — "green apple"

Classroom — "cleaner bottle"

Building Lounge — "water bottle"

Robotics Lab — "kettle"

Kitchen — "spam"

Den — "purple book"

Corridor — "helicopter"

Office Lounge — "toy dog"

10 Daily Objects

"red apple"
"red coke can"
"emergency stop button"
"orange cube"
"olive oil bottle"
"game cartridge"
"chip can"
"hand soap bottle"
"robot hand"
"red piranha plant"

"red apple" at two table heights: Standard Table (0.74 m) and Short Table (0.56 m)

Cluttered Scenes

Layout 1: "carrot"
Layout 2: "broccoli"
Layout 3: "orange"
Layout 4: "chip can"
Layout 5: "book"

Workspace Generalization

Visual Replanning

Extension to Door Opening Task

"fridge door handle"

Easy extension to door opening: The robot successfully opens a door by first grasping the door handle and then returning to its default pose.

Walk and Grasp

"stapler"

Walk and Grasp: The robot walks forward until it sees the target object, then stops to grasp it. This also reveals a limitation: the egocentric field of view is narrow, so the robot can only detect objects once it is very close to them.

Failure Modes

Abstract

Visual loco-manipulation of arbitrary objects in the wild with humanoid robots requires accurate end-effector (EE) control and a generalizable understanding of the scene from visual inputs (e.g., RGB-D images). Existing approaches are based on real-world imitation learning and exhibit limited generalization due to the difficulty of collecting large-scale training datasets. This paper presents a new paradigm, HERO, for object loco-manipulation with humanoid robots that combines the generalization and open-vocabulary understanding of large vision models with the strong control performance of simulated training. We achieve this by designing an accurate residual-aware EE tracking policy that combines classical robotics with machine learning. It uses a) inverse kinematics to convert residual end-effector targets into reference trajectories, b) a learned neural forward model for accurate forward kinematics, c) goal adjustment, and d) replanning. Together, these innovations cut the end-effector tracking error by 3.2x. We use this accurate end-effector tracker to build a modular system for loco-manipulation, in which open-vocabulary large vision models provide strong visual generalization. Our system operates in diverse real-world environments, from offices to coffee shops, where the robot reliably manipulates various everyday objects (e.g., mugs, apples, toys) on surfaces ranging from 43 cm to 92 cm in height. Systematic modular and end-to-end tests in simulation and the real world demonstrate the effectiveness of our proposed design. We believe the advances in this paper can open up new ways of training humanoid robots to interact with objects.

Method

Our HERO end-effector tracking control framework has the following key components:

  1. Residual-aware Closed-Loop Tracking: Our whole-body EE tracking policy learns to track both end-effector pose residuals and upper-body joint angles for closed-loop control (see the first sketch below).
  2. Residual Neural Forward Models: A learned neural forward kinematics model and a base odometry model provide precise end-effector pose estimation and target drift adjustment (second sketch below).
  3. Visual Perception Generalization: A modular system that leverages large vision models for generalized visual perception, enabling open-vocabulary manipulation (third sketch below).
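
To make component 1 concrete, here is a minimal sketch of how an end-effector pose residual can be computed and assembled into a policy observation. The function names, frames, and shapes (forward_kinematics, tracking_observation, the 6-DoF residual layout) are illustrative assumptions, not the released HERO implementation.

import numpy as np
from scipy.spatial.transform import Rotation

def ee_pose_residual(q_upper, T_world_base, T_world_target, forward_kinematics):
    """6-DoF residual between the commanded EE target and the current EE pose.

    q_upper        : (N,) upper-body joint angles
    T_world_base   : (4, 4) world-from-base transform (from odometry)
    T_world_target : (4, 4) world-from-EE target pose
    """
    # Current EE pose in the world frame via forward kinematics.
    T_world_ee = T_world_base @ forward_kinematics(q_upper)
    dpos = T_world_target[:3, 3] - T_world_ee[:3, 3]
    # Rotation residual as the axis-angle vector of the relative rotation.
    R_rel = T_world_target[:3, :3] @ T_world_ee[:3, :3].T
    drot = Rotation.from_matrix(R_rel).as_rotvec()
    return np.concatenate([dpos, drot])  # (6,)

def tracking_observation(q_ref, q_upper, residual, proprio):
    """Observation for the tracking policy: IK reference joint angles
    (from the residual EE target), current joints, pose residual, and
    proprioception (e.g., base angular velocity, gravity direction)."""
    return np.concatenate([q_ref, q_upper, residual, proprio])

In this view, inverse kinematics turns each residual EE target into a reference joint trajectory q_ref, and the policy closes the loop by also observing how far the measured EE pose still is from the target.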
HERO End-Effector Tracking Policy
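
The residual forward model behind this policy can be sketched as a small network that corrects analytic forward kinematics. The layer sizes, the 6-DoF correction parameterization, and the training setup below are assumptions for illustration.

import torch
import torch.nn as nn

class ResidualFK(nn.Module):
    """Predicts a 6-DoF correction on top of analytic forward kinematics,
    absorbing calibration error and link flex that the analytic model
    misses. Layer sizes and training target are hypothetical."""

    def __init__(self, n_joints: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, 6),  # (dx, dy, dz) + axis-angle rotation delta
        )

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        return self.net(q)

At control time the corrected pose, rather than the purely analytic one, can feed the residual computation above; an analogous residual model over base odometry can adjust the EE target for drift while the robot walks.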
HERO Open-Vocabulary Loco-Manipulation System
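
On the system side, the open-vocabulary perception step can be sketched as follows: an open-vocabulary detector and a promptable segmenter localize the queried object in RGB, and depth back-projects it to a 3D target. The detect and segment callables below are placeholders for large vision models, not HERO's exact stack; only the back-projection math is standard.

import numpy as np

def locate_target(rgb, depth, K, text_query, detect, segment):
    """Return a 3D point (camera frame) for the object named by text_query.

    rgb : (H, W, 3) image; depth : (H, W) metric depth; K : (3, 3) intrinsics
    detect(rgb, text) -> (x0, y0, x1, y1) box; segment(rgb, box) -> (H, W) bool mask
    """
    box = detect(rgb, text_query)   # open-vocabulary detection
    mask = segment(rgb, box)        # promptable segmentation
    v, u = np.nonzero(mask)
    z = depth[v, u]
    u, v, z = u[z > 0], v[z > 0], z[z > 0]  # drop pixels with missing depth
    # Back-project to the camera frame; the median is robust to mask bleed.
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.median(np.stack([x, y, z], axis=1), axis=0)

Re-running locate_target after a failed or completed grasp is one simple way to realize the visual replanning behavior shown earlier.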

BibTeX

@article{dong2026hero,
  title={Learning Humanoid End-Effector Control for Open-Vocabulary Visual Loco-Manipulation},
  author={Dong, Runpei and Li, Ziyan and He, Xialin and Gupta, Saurabh},
  journal={arXiv preprint arXiv:2602.16705},
  year={2026}
}