Motion Planning for Physical Robots
Abstract
This is a full-day workshop on the development of motion planning algorithms for physical robots. Algorithmic motion planning has been actively studied in robotics and related areas for more than three decades. There is a rich collection of motion planning algorithms based on visibility graphs, algebraic methods, local or potential field techniques, randomized sampling, handling kinodynamic or non-holonomic constraints, etc. Many of these algorithms have been successfully used in CAD/CAM, bioinformatics, computer gaming, and other applications. At the same time, advances in manufacturing technologies, sensing, and actuator devices have led to the development of powerful robots, including humanoid robots and general-purpose, programmable mobile manipulators such as the Willow Garage PR2, Kawada HRP2, and Aldebaran Nao. However, motion planning algorithms have seen limited use on these “physical robots” for performing various tasks. This workshop addresses this gap and the issues that arise in developing motion-planning methods suitable for current and upcoming robot platforms, including handling dynamic constraints, modeling uncertainty, and real-time computation of motion strategies on the robotic platform. The program consists of invited talks from leading researchers in the area and a panel.
Workshop Schedule
May 09, 2011
Room 5H
Session WS-M-7
Tentative Program
Morning Session: 8:30 – 12:35
8:30: Welcome and Introduction
8:35: Motion Planning for Physical Acting Robots (Slides) (Jean Paul Laumond)
8:50 – 9:25: Hierarchical Task and Motion Planning (Slides) (Tomas Lozano-Perez)
9:25 – 10:00: Legs, Hands, and Wheels: Bridging the Gap Between High-level Planning and Low-level Control (Slides) (James Kuffner)
10:00 – 10:35: Online Generation of Kinodynamic Trajectories (Slides) (Wolfram Burgard)
10:35 – 10:50: Coffee Break
10:50 – 11:25: Planning Sequences of Motion Primitives (Slides) (Florent Lamiraux)
11:25 – 12:00: Real-Time Motion Planning and Handling Model Uncertainty (Slides) (Dinesh Manocha)
12:00 – 12:35: Plan-based Movement Control for Everyday Manipulation (Slides) (Michael Beetz)
Lunch: 12:35 – 13:45
Afternoon Session: 13:45 – 17:00
13:45 – 14:20: Planning Humanoid Multi-contact Dynamic Motions Using Optimization Techniques (Slides) (Abderrahmane Kheddar)
14:20 – 14:55: Hierarchical Planning for Robot Manipulation (Slides) (Bhaskara Marthi)
14:55 – 15:30: Humanoid Grasping and Manipulation in the Real World (Slides) (Tamim Asfour)
15:30 – 15:45: Coffee Break
15:45 – 16:20: Departing Kinematics: Reconciling Geometric Planners with Physical Manipulation (Slides) (Siddhartha Srinivasa)
16:20 – 16:55: Panel (led by Jean-Paul Laumond)
Organizers
- Jean-Paul Laumond, LAAS-CNRS, 7 Avenue du Colonel Roche, 31077 Toulouse, France; jpl@laas.fr, +33(0)561336347
- Dinesh Manocha, Department of Computer Science, University of North Carolina, Chapel Hill, NC 27599-3175, USA; dm@cs.unc.edu, +1(919)962-1749
Presenters & Abstracts
Tamim Asfour (Humanoids and Intelligence Labs, Karlsruhe Institute of Technology)
Title: Humanoid Grasping and Manipulation in the Real World
Abstract: One of the major tasks robots have to perform in our personal and professional environments is grasping and manipulating objects encountered in such environments. We present our work on grasp and motion planning for single and dual arm tasks on humanoid robots in the real world. In this context, we present grasp and motion planners which are able to deal with known and partially unknown objects. In particular, we present a parameter-free algorithm based on RRT for exact motion planning and a new grasp planner which makes use of object symmetry properties to generate grasp candidates. In addition, we discuss how learning from human observation contributes to high-dimensional grasp planning problems and present a new low-dimensional grasp representation, which exploits fingertip trajectories in the task space as well as finger movement synergies to facilitate grasp action formulation in a goal-directed manner. The successful execution of generated grasping motions on the ARMAR-III humanoid robots is achieved by a multisensory approach to deal with uncertainties in perception and action execution.
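As a point of reference for the RRT-based planning mentioned above, the following is a minimal sketch of the textbook RRT idea in a 2-D configuration space; the step size, goal bias, and obstacle test are invented placeholders, and this is not the parameter-free planner described in the talk.

```python
import random
import math

# Minimal RRT sketch in a 2-D configuration space.
# The bounds, step size, goal tolerance, and obstacle test are illustrative only.

STEP = 0.2          # extension step length (assumed)
GOAL_TOL = 0.3      # distance at which the goal counts as reached (assumed)

def collision_free(q):
    # Placeholder obstacle test: a disc obstacle of radius 1 centered at (5, 5).
    return math.hypot(q[0] - 5.0, q[1] - 5.0) > 1.0

def rrt(start, goal, iterations=5000, bounds=(0.0, 10.0)):
    nodes = [start]
    parent = {start: None}
    for _ in range(iterations):
        # Sample a random configuration (with a small goal bias).
        q_rand = goal if random.random() < 0.05 else (
            random.uniform(*bounds), random.uniform(*bounds))
        # Find the nearest tree node and extend one step toward the sample.
        q_near = min(nodes, key=lambda q: math.dist(q, q_rand))
        d = math.dist(q_near, q_rand)
        if d == 0.0:
            continue
        q_new = (q_near[0] + STEP * (q_rand[0] - q_near[0]) / d,
                 q_near[1] + STEP * (q_rand[1] - q_near[1]) / d)
        if not collision_free(q_new):
            continue
        nodes.append(q_new)
        parent[q_new] = q_near
        if math.dist(q_new, goal) < GOAL_TOL:
            # Reconstruct the path by walking back to the start.
            path, q = [q_new], q_new
            while parent[q] is not None:
                q = parent[q]
                path.append(q)
            return list(reversed(path))
    return None  # no path found within the iteration budget

if __name__ == "__main__":
    print(rrt((1.0, 1.0), (9.0, 9.0)))
```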
Michael Beetz (Technische Universität München)
Title: Plan-based Movement Control for Everyday Manipulation
Abstract: Many possible application domains for autonomous mobile robot manipulation can be characterized as everyday manipulation: the robots have to perform the same kinds of manipulation tasks, with the same objects, in the same environment, under similar conditions, over and over again. Humans performing everyday manipulation generate, perhaps surprisingly, very stereotypical movement patterns for navigation, reaching, etc. Looking at this more closely, stereotypical movement patterns offer a number of advantages: they are faster to learn, better to optimize, easier to diagnose, easier to adapt, and easier to read.
In this talk I will describe and discuss ongoing research in the Intelligent Autonomous Systems group at the Technische Universität München targeted at the realization of a movement control system for a household robot that aims at generating and exploiting stereotypical movement patterns where possible.
Wolfram Burgard (University of Freiburg)
Title: Online Generation of Kinodynamic Trajectories
Abstract: We present an approach to kinodynamic motion planning for mobile robots. Starting from a collision-free straight-line path, we generate a spline-based approximation and compute a velocity profile that respects the kinodynamic constraints of the platform. The shape of the initial trajectory is iteratively optimized to reduce user-defined costs such as traversal time or energy consumption.
As a key contribution we propose a novel spline-based path representation that realizes curvature-continuous joins of trajectory segments, even if trajectory pieces are replaced to react to unmapped obstacles. It provides a compact set of meaningful higher-level parameters to the optimization, e.g., the location of waypoints and wideness of curves.
Omnidirectional holonomic robots can rotate independently from translation which substantially increases their action space. To exploit their capabilities we extend our framework in order to specify where on the path the robot rotates. This enables rotational behaviors in the full spectrum between turns on the spot only and uninterrupted rotation along the path. Furthermore, the orientational behavior can be varied between the possibly conservative settings of the initial path and minimized overall rotation.
Experiments carried out on real robots with differential, synchro, and holonomic drives demonstrate our system's capability to generate smooth, precise, and predictable motion. They have been conducted on predefined benchmark routes, in difficult situations like narrow passages, and even in populated environments.
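To make the general idea concrete, here is a rough sketch of a spline path through waypoints with a simple trapezoidal velocity profile under assumed velocity and acceleration limits; it does not reproduce the curvature-continuous representation, the higher-level parameters, or the cost optimization described above, and the waypoints and limits are made up.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sketch: fit a smooth spline through waypoints and compute a simple
# trapezoidal (rest-to-rest) velocity profile under assumed platform limits.

V_MAX, A_MAX = 0.8, 0.5                      # assumed limits [m/s, m/s^2]
waypoints = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.5], [3.0, 1.5]])  # made up

# Parameterize the spline by cumulative chord length between waypoints.
chord = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(waypoints, axis=0), axis=1))]
spline = CubicSpline(chord, waypoints, axis=0)

# Approximate arc length by dense sampling along the spline.
s_dense = np.linspace(0.0, chord[-1], 500)
pts = spline(s_dense)
seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
arc = np.r_[0.0, np.cumsum(seg)]
L = arc[-1]

def trapezoidal_speed(s):
    """Speed at arc length s: accelerate, cruise at V_MAX, decelerate."""
    v_acc = np.sqrt(2.0 * A_MAX * s)          # accelerating from the start
    v_dec = np.sqrt(2.0 * A_MAX * (L - s))    # decelerating toward the end
    return min(V_MAX, v_acc, v_dec)

speeds = np.array([trapezoidal_speed(s) for s in arc])
print("path length %.2f m, peak speed %.2f m/s" % (L, speeds.max()))
```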
Abderrahmane Kheddar (CNRS-AIST JRL)
Title: Planning Humanoid Multi-contact Dynamic Motions Using Optimization Techniques
Abstract: Using optimization to generate motions has been investigated since the 1980s. But because optimization techniques are rather local and time consuming, they have long been surpassed by probabilistic planning approaches, which are more global. Geometric planning methods have solved a wide variety of robotic problems and have even been extended to planning that takes into account bounds on the first and second derivatives of the motion (kinodynamics). Yet, their application to under-actuated, high-degree-of-freedom, hyper-redundant robots (such as humanoids) achieving various tasks under hard constraints (perceptual tasks, motion state and torque limits, equilibrium, collision avoidance, etc.) with whole-body dynamic motion is not straightforward. Optimization techniques are mostly local, but advances in the mathematics of optimization have generated a considerable body of know-how, with off-the-shelf robust solvers that can be used efficiently in robotic motion planning. In this talk, I will focus on planning multi-contact non-gaited motions and on our experience using optimization techniques to plan motions for the humanoid robot HRP-2 performing extreme tasks that cannot be obtained with state-of-the-art planning techniques. I will also discuss challenges that remain to be handled within an optimization framework, among which the critical issue of speeding up the computation, and what breakthroughs would result in robotics if real-time optimization became feasible.
Joint work with K. Bouyarmane, S. Lengagne, A. Escande, S. Miossec, and E. Yoshida.
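As a toy illustration of using an off-the-shelf solver for motion generation (not the multi-contact, whole-body formulation discussed in the talk), the sketch below minimizes squared accelerations of a single-joint trajectory subject to assumed joint limits using scipy's SLSQP solver; the horizon, time step, and bounds are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration: find a single-joint trajectory from q=0 to q=1 that
# minimizes squared accelerations subject to joint limits, solved with an
# off-the-shelf NLP solver. All numbers below are assumed placeholders.

N, DT = 20, 0.1                 # number of knots and time step (assumed)
Q_MIN, Q_MAX = -0.2, 1.2        # assumed joint limits

def cost(q):
    acc = np.diff(q, n=2) / DT**2        # finite-difference accelerations
    return np.sum(acc**2)

constraints = [
    {"type": "eq", "fun": lambda q: q[0]},          # start at q = 0
    {"type": "eq", "fun": lambda q: q[-1] - 1.0},   # end at q = 1
]
bounds = [(Q_MIN, Q_MAX)] * N

q0 = np.linspace(0.0, 1.0, N)   # straight-line initial guess
res = minimize(cost, q0, method="SLSQP", bounds=bounds, constraints=constraints)
print("solver success:", res.success, "cost:", res.fun)
```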
James Kuffner (CMU/Google)
Title: Legs, Hands, and Wheels: Bridging the Gap Between High-level Planning and Low-level Control
Abstract: Getting motion planning algorithms to work well in practice on physical robots in real-world environments can be challenging. Sensor noise, modeling uncertainty, and computational bottlenecks can severely limit their practical application and performance. This talk will discuss some of these challenges in the context of motion planning for legged robots, mobile manipulators, and autonomous cars. Specifically, techniques aimed at bridging the gap between high-level motion planning and low-level control will be discussed, along with prospects for robot autonomy as it relates to search-based AI and planning.
Florent Lamiraux (LAAS-CNRS)
Title: Planning Sequences of Motion Primitives
Abstract: Motion planning for legged humanoid robots in indoor environments has given rise to a lot of research work over the past decade. The main challenges reside, on the one hand, in the non-linear constraints arising from the contact between the robot feet and the ground, and on the other hand, in producing dynamically balanced motions for locomotion.
Today, a lot has been accomplished from a strict motion planning point of view. For instance, state-of-the-art algorithms enable a humanoid robot to plan motions for manipulating objects such as a door, a window, or a drawer. However, executing these motions in real environments remains a difficult issue. Sensing limitations are undoubtedly a strong reason for the difficulty, but the lack of a formal framework defining in a general way the nature of a motion for a robot is another big obstacle.
In this talk, I will discuss an approach toward planning "motion primitives" instead of "motions" as curves in the configuration space. A motion primitive contains some information about the control process that will execute the motion using on-board sensors.
Jean-Paul Laumond (LAAS-CNRS)
Title: Motion Planning for Physical Acting Robots
Abstract: The geometric formulation of motion planning as a Piano Mover's Problem (PMP) allows translating the problem of moving bodies in the real world into the problem of moving points in configuration spaces. The planning problem is then solved via combinatorial geometric data structures. Numerous efficient approaches are available today; they benefit from probabilistic search paradigms and hardware-based algorithms.
The purpose of this introductory talk is to consider the adequacy of the PMP formulation to planning for robots acting in the physical world.
We know how difficult it is to extend the PMP with sensor information. On the other hand, extensions of the PMP allow solving some intricate manipulation planning problems with purely geometric algorithms. Also, PMP algorithms can be used as geometric operators evaluating predicates inside logic-based search algorithms. Finally, we will see how humanoid robots open questions on the relationship between motion and action, and between discrete geometric data structures and numerical optimization techniques.
Motion Planning thus remains an exciting research topic as real-sized scenarios on real robotic platforms renew the seminal PMP.
Tomas Lozano-Perez (MIT)
Title: Hierarchical Task and Motion Planning (Joint work with Leslie Pack Kaelbling)
Abstract: As robots become more physically robust and capable of sophisticated sensing, navigation, and manipulation, we want them to carry out increasingly complex tasks. A robot that helps in a household must plan over the scale of hours or days, considering abstract features such as the desires of the occupants of the house, as well as detailed models that support locating and getting objects. The complexity of such tasks derives from very long time horizons, large numbers of objects to be considered and manipulated, and fundamental uncertainty about properties and locations of those objects. This type of planning requires a tight integration of task and motion planning.
In this talk we outline an approach to the integration of task planning and motion planning that has the following key properties: it is aggressively hierarchical; it makes choices and commits to them in a top-down fashion in an attempt to limit the length of plans that need to be constructed, thereby exponentially decreasing the amount of search required; and it operates on detailed, continuous geometric representations and does not require an a priori discretization of the state or action spaces.
Dinesh Manocha (University of North Carolina at Chapel Hill)
Title: Real-Time Motion Planning and Handling Model Uncertainty
Abstract: In this talk, we address two main challenges with respect to motion planning for physical robots. The first issue is real-time motion planning, which can compute collision-free paths for robots in dynamic scenes without any preprocessing. We describe a new framework that utilizes the computational capabilities of $400 commodity GPUs (graphics processing units) for real-time high DOF planning. As compared to prior sample-based planners, we observe one to two orders of magnitude improvement in performance using our many-core sample-based motion planning algorithms. The second challenge is in terms of dealing with noisy point cloud data that is generated from partial observations of the environment. We present a new probabilistic collision detection algorithm that can handle environments with uncertainty. Our approach reformulates the collision detection problem between two objects as a two-class classification problem, where points of different objects belong to different classes. The collision probability is directly related to the separability of the corresponding two-class problem, which can be elegantly and efficiently solved using support vector machines (SVMs). We highlight the performance on point clouds captured using PR2 sensors as well as synthetic data sets.
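A rough sketch of the two-class classification view of collision checking between noisy point clouds appears below: an SVM is fit to separate the two clouds, and its training accuracy is used as a crude separability score. The clouds, kernel, and scoring are illustrative assumptions; the probabilistic formulation described in the talk is more involved.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the two-class view of collision checking between noisy point clouds:
# fit an SVM to separate the two clouds and use its training accuracy as a crude
# separability score. All parameters below are illustrative assumptions.

rng = np.random.default_rng(0)
cloud_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))   # object A samples
cloud_b = rng.normal(loc=[0.5, 0.0], scale=0.3, size=(200, 2))   # object B samples

X = np.vstack([cloud_a, cloud_b])
y = np.r_[np.zeros(len(cloud_a)), np.ones(len(cloud_b))]

clf = SVC(kernel="rbf", C=10.0).fit(X, y)
separability = clf.score(X, y)          # 1.0 means the clouds are cleanly separable
collision_score = 1.0 - separability    # crude proxy: low separability ~ likely overlap
print("separability %.2f, collision score %.2f" % (separability, collision_score))
```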
Bhaskara Marthi (Willow Garage)
Title: Hierarchical Planning for Robot Manipulation
Abstract: I will describe an algorithm for combined task and motion planning. The algorithm is based on a novel decomposition of the set of plans based on the hierarchical structure of the problem. It is able to efficiently find plans that achieve near-optimality while respecting constraints at both the task and geometric levels. Results of the application of this algorithm to motion planning tasks will be presented.
Siddhartha Srinivasa (Intel Labs)
Title: Departing Kinematics: Reconciling Geometric Planners with Physical Manipulation
Abstract: Humans use a remarkable set of strategies to manipulate objects in clutter. We pick up, push, slide, and sweep with our hands and our arms to rearrange the clutter surrounding our primary task. But our robots treat the world like the Tower of Hanoi, moving with pick-and-place actions and afraid to interact with it using anything but rigid grasps. I will outline our ongoing efforts towards reconciling geometric planners with physical manipulation.
I will introduce our framework for planning in clutter that uses a library of actions inspired by human strategies and derived analytically from the mechanics of pushing. The framework reduces the problem to one of combinatorial search, derives efficient heuristics, and demonstrates planning times on the order of seconds. With this extra functionality, our planner succeeds where traditional grasp planners fail, and works under high uncertainty by utilizing the funneling effect of pushing.
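As a toy illustration of reducing clutter rearrangement to combinatorial search, the sketch below runs a best-first search over a small library of abstract push and pick actions in a one-dimensional world; the geometry, action costs, and heuristic are invented placeholders rather than the mechanics-derived models described in the talk.

```python
import heapq

# Toy illustration of clutter rearrangement as combinatorial search:
# best-first (A*) search over abstract "push" and "pick" actions in a 1-D world.
# The region, costs, and heuristic below are made-up placeholders.

GOAL_REGION = (0.4, 0.6)          # region that must be cleared (assumed)
PUSH_DELTA = 0.2                  # how far a push moves an object (assumed)

def blocking(state):
    """Indices of objects currently inside the goal region."""
    return [i for i, x in enumerate(state) if GOAL_REGION[0] <= x <= GOAL_REGION[1]]

def successors(state):
    """Yield (action, next_state, cost) for each applicable primitive."""
    for i in blocking(state):
        for delta in (-PUSH_DELTA, PUSH_DELTA):
            nxt = list(state)
            nxt[i] = round(nxt[i] + delta, 3)
            yield ("push obj%d by %+.1f" % (i, delta), tuple(nxt), 1.0)
        nxt = list(state)
        nxt[i] = float("inf")     # picked objects are removed from the scene
        yield ("pick obj%d" % i, tuple(nxt), 2.0)

def plan(start):
    # Heuristic: number of objects still blocking the goal region.
    frontier = [(len(blocking(start)), 0.0, start, [])]
    seen = set()
    while frontier:
        _, g, state, actions = heapq.heappop(frontier)
        if not blocking(state):
            return actions
        if state in seen:
            continue
        seen.add(state)
        for action, nxt, cost in successors(state):
            heapq.heappush(frontier,
                           (g + cost + len(blocking(nxt)), g + cost, nxt, actions + [action]))
    return None

if __name__ == "__main__":
    print(plan((0.45, 0.55, 0.9)))   # two objects block the region, one does not
```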
List of topics
Robot motion planning; task-level planning; real-time planning; motion planning with constraints; humanoid robots; constrained manipulation planning; whole-body planning; planning with uncertainty
Motivation and objectives
The main goal of this workshop is to bring together researchers working on various aspects of planning problems, including path planning, task planning, and whole-body planning, and to address the issues in developing appropriate techniques for current robots (including humanoids) and programmable mobile manipulators. Traditionally, these topics have been studied in isolation, and there appears to be relatively little work on integrating these techniques for physical robots. Moreover, there is relatively little work on developing motion planning algorithms that can be used to perform autonomous tasks on current robots. The recent development of programmable mobile manipulators, along with open-source operating systems and environments such as ROS, seems to open up many new possibilities. This workshop is expected to address these gaps and bring together leading researchers with varying backgrounds who can address different aspects of these problems. Furthermore, most of the presenters have also worked on porting planning techniques to physical robots and can share their experiences as well as point out major open issues.
Primary/secondary audience
Researchers and developers working on motion planning, task planning, humanoid robots, mobile manipulators, and other aspects of planning, as well as hardware manufacturers.