Topics
The workshop aims to explore different aspects that will allow robots to autonomously manipulate deformable objects with greater ability and generalization. Enabling such manipulation is crucial in a variety of domains, such as domestic, industrial, and surgical contexts, which involve many forms of deformable objects. However, the complexity of representing and modeling the dynamics of these objects means that no unified solution yet exists that can be adapted to a wide range of objects. In recent years, there has also been increasing interest in applying foundation models to robotic manipulation, including the use of large pre-trained vision models, large language models (LLMs), and vision-language models (VLMs) for more sample-efficient learning and for solving language-conditioned tasks. Additionally, recent advances in imitation learning, reinforcement learning, and 3D representation models have showcased the capability of robots to learn more complex, dexterous, and long-horizon tasks. The release of new simulators, datasets, and low-cost robotic hardware is lowering the barrier to reproducible research, benchmarking, and data reuse. In this workshop, we will encourage discussion of how these recent advances can improve deformable object manipulation. The workshop will focus on, but is not limited to, the following topics in deformable object manipulation:
- Representation and state estimation
- Simulation and modeling
- Transfer from simulation to reality
- Learning to manipulate using data-driven methods such as reinforcement learning and learning from demonstrations
- Perception: state tracking, parameter identification, property detection (e.g. landmarks for garments) and classification, etc.
- Control, visual servoing, and planning
- Use of foundation models, such as large vision and language models, and associated large datasets
- Specialized tools (e.g. grippers) and sensors