Robot uses computational model to manipulate articulated object
Figure: Instead of modelling articulated structures such as robots and objects as trees of rigid bodies joined by abstract types of connections, we propose understanding them purely as their differentiable forward kinematics and constraints.

Service robots of the future will need to execute abstract instructions such as "fetch the milk from the fridge". To translate such instructions into actionable plans, robots require in-depth background knowledge. With regard to interactions with doors and drawers, robots require articulation models that they can use for state estimation and motion planning. Existing articulation model frameworks take an abstracted approach to model building, which requires additional background knowledge to construct mathematical models for computation. In this paper, we introduce a novel framework that uses symbolic mathematical expressions to model articulated objects. We provide a theoretical description of this framework and the operations supported by its models, and introduce an architecture for exchanging our models in robotic applications, making them as flexible as any other environmental observation. To demonstrate the utility of our approach, we employ our practical implementation, Kineverse, to solve common robotics tasks in state estimation and mobile manipulation, and further use it in real-world mobile robot manipulation.

How Does It Work?

We present a novel framework for modelling articulated objects in the robotics context. Existing frameworks model articulated objects descriptively, e.g. the door is hinged to the frame. Such descriptions are useful for human operators; however, they do not encode how the computational models necessary for any robotic manipulation of articulated objects might be derived from them.

Our proposed framework models a scene of articulated objects as a tuple \(\mathcal{A} = (\mathcal{D}, \mathcal{C})\), where \(\mathcal{D}\) is a named set of differentiable forward kinematic expressions of the objects' parts and \(\mathcal{C}\) is a set of constraints that restricts the configuration space of the modeled objects. We introduce specific extensions to the concept of a gradient which allow us to model non-holonomic kinematics, as well as to encode heuristic gradients of boolean expressions. Alongside this theoretical model, we introduce a network architecture to exchange these models among the components of an active robotic system. With this architecture, models of articulated objects become as flexible as camera images or laser scans, while all components, be it for manipulation or state estimation, work off of the same model and get notified when it changes. Our architecture consists of a central model server, which stores the current main model, and consumer and modifier clients, which either simply process the model or introduce changes to it.
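As an illustrative sketch of this idea (using plain sympy with made-up names, not the actual Kineverse API), the model of a single door hinge can be written as a differentiable 4x4 forward kinematic expression plus one constraint tuple bounding the hinge angle:

```python
import sympy as sp

# Symbolic configuration variable of a door hinge
# (illustrative names only, not the actual Kineverse API).
q_door = sp.Symbol('door_hinge')

# Differentiable forward kinematics of the door handle as a 4x4 homogeneous
# transform: a rotation about the hinge's z-axis, then a fixed offset to the handle.
rot_z = sp.Matrix([[sp.cos(q_door), -sp.sin(q_door), 0, 0],
                   [sp.sin(q_door),  sp.cos(q_door), 0, 0],
                   [0,               0,              1, 0],
                   [0,               0,              0, 1]])
offset = sp.Matrix([[1, 0, 0, 0.8],
                    [0, 1, 0, 0.0],
                    [0, 0, 1, 1.0],
                    [0, 0, 0, 1]])
fk_handle = rot_z * offset

# D: named set of forward kinematic expressions
D = {'door/handle': fk_handle}

# C: constraints as three-tuples (lower, upper, expression),
# here limiting the hinge angle to [0, pi/2]
C = {'door_limits': (0, sp.pi / 2, q_door)}

# The model stays differentiable: gradient of the handle's
# x-position with respect to the hinge configuration.
dx_dq = sp.diff(fk_handle[0, 3], q_door)
print(dx_dq)  # -> -0.8*sin(door_hinge)
```

Because \(\mathcal{D}\) contains symbolic expressions rather than an abstract joint taxonomy, downstream components can take gradients of any part pose directly, regardless of what kind of articulation produced it.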

We hold that our framework enables the development of model-agnostic robotic skills, i.e. skills that transfer seamlessly between different robots and objects without having to distinguish between their different articulations.

For a more complete overview of our method, we refer you to our paper and to our video submission.

Articulation Model
Figure: Our models consist of a set of named forward kinematic expressions, modeled as 4x4 homogeneous transforms, and a set of three-tuples \(c = (\iota, \upsilon, \varphi)\) encoding the inequality \(\iota \le \varphi \le \upsilon\).
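Checking whether a configuration satisfies such a constraint tuple is straightforward. A minimal sketch (the helper `satisfied` is hypothetical, not part of Kineverse):

```python
import sympy as sp

q = sp.Symbol('door_hinge')

def satisfied(constraint, assignment):
    """Check the inequality lower <= expr <= upper for a three-tuple
    (lower, upper, expr) under a given variable assignment.
    (Hypothetical helper for illustration.)"""
    lower, upper, expr = constraint
    value = float(sp.sympify(expr).subs(assignment))
    return float(sp.sympify(lower)) <= value <= float(sp.sympify(upper))

# Hinge angle limited to [0, pi/2]
c = (0, sp.pi / 2, q)
print(satisfied(c, {q: 0.3}))  # -> True
print(satisfied(c, {q: 2.0}))  # -> False
```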
Network Communication
Figure: Overview of our networked communications architecture. A central model server stores the articulation model of the robots' environment. Clients can either interact with the server directly, or process updates published by the model server.
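The server-client pattern from the figure can be sketched in a few lines (class and method names are illustrative assumptions, not the actual Kineverse network interfaces): a model server stores the current model, modifier clients submit changes, and consumer clients are notified of every update.

```python
from typing import Any, Callable, Dict, List


class ModelServer:
    """Stores the current articulation model and notifies subscribers on change.
    (Illustrative sketch only; not the actual Kineverse network interface.)"""

    def __init__(self):
        self._model: Dict[str, Any] = {}  # named FK expressions and constraints
        self._subscribers: List[Callable[[Dict[str, Any]], None]] = []

    def subscribe(self, callback):
        """Consumer clients register to be notified of model updates."""
        self._subscribers.append(callback)
        callback(self._model)  # deliver the current model immediately

    def apply_update(self, changes: Dict[str, Any]):
        """Modifier clients introduce changes; all consumers are re-notified."""
        self._model.update(changes)
        for cb in self._subscribers:
            cb(self._model)


# A consumer (e.g. a state estimator) simply processes the latest model:
received = []
server = ModelServer()
server.subscribe(lambda model: received.append(dict(model)))

# A modifier (e.g. a perception component) adds a newly observed door:
server.apply_update({'door/handle': '<FK expression>'})
print(received[-1])  # -> {'door/handle': '<FK expression>'}
```

This is what makes the model as flexible as any other sensor stream: every component works off the same shared model and reacts whenever a modifier changes it.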

Experimental Evaluation - Videos

We evaluate our proposed framework by applying it to common robotic tasks such as state estimation and manipulation, both in simulation and on a real robotic system. While our paper describes the evaluation in detail, here we present only a gallery of video results.

We use a model-agnostic state estimator to estimate the configuration space pose of a kitchen from 6D observations of its parts.

We propose a model-agnostic controller for grasped manipulation of articulated objects. The controller transfers between different robots and objects.

We propose a model-agnostic controller for manipulating articulated objects through pushing.

We deploy our controllers on a real robotic system and perform real-world mobile robotic manipulation.


The implementation of our framework, called Kineverse, can be found on GitHub. The code for our experiments can be found on GitHub as well. Both packages are published for academic use under the GPLv3 license. If this license is not permissible for your purposes, please contact the authors.


Adrian Röfer, Georg Bartels, Wolfram Burgard, Abhinav Valada, Michael Beetz
Kineverse: A Symbolic Articulation Model Framework for Model-Agnostic Mobile Manipulation
IEEE Robotics and Automation Letters (RA-L), 2022.

(Pdf) (Bibtex)