In his captivating lecture, Frank Guerin examines how robot vision can be integrated with a deeper understanding of tasks. As robot grasping is still an unsolved problem, he explores why and how human perception of objects is relevant to manipulation and explains what "transferable toddler skills" are. The lecture is suitable for beginners.
From simple to complex, from robot vacuum cleaners to self-driving cars: every robotic system needs some sort of perceptual capability in order to extract information from its environment and to understand how it can manipulate it. Perception can come in many forms. Tim Patten gives a highly interesting introduction to how robots deal with object identification: what is it? (recognition), what type is it? (classification), where is it? (object detection), and how do I manipulate it? (grasping). The talk is suitable for beginners.
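To make the four questions concrete, here is a minimal Python sketch of the corresponding pipeline stages. It is an illustration under our own assumptions, not code from the talk; all class and function names are hypothetical.

```python
# Toy sketch (not from the talk) of the four perception questions as
# pipeline stages. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # "what is it?"       (recognition)
    category: str   # "what type is it?"  (classification)
    bbox: tuple     # "where is it?"      (object detection), as (x, y, w, h)

def plan_grasp(det: Detection) -> tuple:
    """'How do I manipulate it?' (grasping): here we naively aim at the
    bounding-box center; a real system would use 3D geometry and physics."""
    x, y, w, h = det.bbox
    return (x + w / 2, y + h / 2)

if __name__ == "__main__":
    mug = Detection(label="my_mug", category="mug", bbox=(40, 60, 30, 25))
    print("grasp point:", plan_grasp(mug))
```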
Michael Beetz provides an educational introduction to CRAM, the Cognitive Robot Abstract Machine. How can we write a robot control program in which the robot receives instructions for a task and is able to produce the behavior necessary to accomplish it? This simple question has not been fully answered yet, as there is still an information gap between instruction and body motion that has to be filled in a semantically meaningful manner. One way is to simplify perception tasks and implement motion constraints. Follow Michael Beetz's interesting approach to metacognition.
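As a loose illustration of that information gap, consider mapping a symbolic instruction to executable motion constraints. The following is a toy Python sketch under our own assumptions, not CRAM's actual plan language or API:

```python
# Toy illustration (NOT CRAM's actual API) of filling the gap between a
# symbolic instruction and the motion parameters a controller can execute.
INSTRUCTION = {"action": "pour", "object": "kettle", "target": "cup"}

# Hand-written semantic knowledge: action -> motion constraints (hypothetical).
MOTION_CONSTRAINTS = {
    "pour":  {"tilt_deg": 90, "keep_upright_until_over_target": True},
    "place": {"tilt_deg": 0,  "keep_upright_until_over_target": True},
}

def instantiate(instruction):
    """Resolve a symbolic instruction into motion constraints; an unknown
    action exposes the 'information gap' between words and body motion."""
    try:
        constraints = MOTION_CONSTRAINTS[instruction["action"]]
    except KeyError:
        raise ValueError(f"no motion knowledge for {instruction['action']!r}")
    return {**instruction, **constraints}

print(instantiate(INSTRUCTION))
```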
Part 1: In his first noteworthy lecture, Animesh Garg presents his vision of building intelligent robotic assistants that learn with the same efficiency and generality as humans through learning algorithms, particularly in robot manipulation. Humans learn through instruction or imitation and can adapt to new situations by drawing from experience. The goal is to have robotic systems recognize new objects in new environments autonomously (diversity) and enable them to do things they were not trained to do by using long-term reasoning (complexity). Animesh Garg introduces the approach of "learning with structured inductive bias and priors", i.e. the ability to generalize beyond the training data.
Part 2: Rachid Alami's second presentation continues with a dive into the exciting topic of Human-Robot Interaction (HRI). When humans interact with each other, for example by giving a pen to someone else, they exchange (verbal/non-verbal) signals. Rachid Alami gives a very good, short introduction to human-human interaction before exploring the challenges of adopting joint action between humans for human-robot interaction. He presents multiple "decisional ingredients" for interactive autonomous robot assistants.
Part 2: In his second lecture, Kei Okada discusses episodic memory: the collection of past personal experiences, in contrast to semantic memory, the general knowledge about the world that humans accumulate throughout their lives. In order to achieve a goal such as tidying up objects, a robot has to rely on acquired knowledge about where to find objects and what to do with them. In a very educational way, Kei Okada further addresses the concepts of task instantiation and experience accumulation mechanisms, meaning the formation of patterns and routines in task completion.
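The idea of generalizing accumulated experiences into reusable knowledge can be sketched in a few lines of Python. This toy example is our own illustration, not code from the lecture:

```python
# Toy illustration of episodic memory feeding a tidy-up task: concrete past
# experiences are generalized into a default place to search for an object.
from collections import Counter

episodes = [  # episodic memory: concrete past experiences
    {"object": "mug", "found_at": "desk"},
    {"object": "mug", "found_at": "sink"},
    {"object": "mug", "found_at": "desk"},
]

def likely_location(obj):
    """Accumulate episodes into semantic-style knowledge: the most frequent
    past location becomes the routine place to look first."""
    locations = Counter(e["found_at"] for e in episodes if e["object"] == obj)
    return locations.most_common(1)[0][0] if locations else None

print(likely_location("mug"))  # -> desk
```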
Part 1: In the first part of his captivating lecture, Rachid Alami discusses the decisional abilities required for Human-Robot Interaction (HRI) and Human-Robot Collaboration in particular. The challenge is to develop and build cognitive and interactive abilities that allow robots to perform collaborative tasks with humans, not for humans. The first part centers on an introduction to Human-Robot Joint Actions and the problems of combining task planning (what to do) with motion planning (how to do it), especially for grasping, and how these problems can be solved.
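One common way to combine the two levels is to interleave them: enumerate symbolic choices and keep the first combination that passes a geometric feasibility check. The following Python sketch is a hypothetical illustration of that idea, not the planner presented in the talk:

```python
# Hypothetical sketch of interleaved task and motion planning: try each
# symbolic grasp choice and keep the first set that is geometrically feasible.
import itertools

def feasible(grasp, scene):
    # Placeholder geometric test; a real system would run collision
    # checking and inverse kinematics here.
    return grasp not in scene["blocked_grasps"]

def plan(task_options, scene):
    for grasps in itertools.product(*task_options):   # task level: what to do
        if all(feasible(g, scene) for g in grasps):   # motion level: how to do it
            return grasps
    return None

scene = {"blocked_grasps": {("mug", "top")}}
task_options = [[("mug", "top"), ("mug", "handle")], [("tray", "rim")]]
print(plan(task_options, scene))  # -> (('mug', 'handle'), ('tray', 'rim'))
```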
Part 1: Kei Okada starts his first talk with a short introduction to the history of humanoid robotics research at JSK and presents various former projects such as HARP (Human Autonomous Robot Project). He then continues to explore knowledge representation of everyday activities and knowledge-based object localization before concluding with motion imitation for robots. The compact and thorough presentation is suitable for beginners.
Follow Michael Beetz's talk on the exciting topic of digital twin knowledge bases. The term digital twin refers to a virtual, AI-based image of a physical object in the real world. It is an emerging technology and plays a crucial role in Industry 4.0 and the digitization of manufacturing in several domains. In retail, for example, digital twins provide an exact digital replica of the store and warehouse, including the location of each product. In his comprehensive talk, Michael Beetz focuses on the aspect of knowledge representation. He proposes a hybrid reasoning system that couples simulation-based with symbolic reasoning and aims to demonstrate the gain of such a combination.
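The coupling can be pictured as a query interface that answers from a symbolic knowledge base when it can, and falls back to simulation when the symbols alone cannot settle the question. This Python sketch is our own simplified illustration of the idea, not Michael Beetz's actual system; all names are hypothetical:

```python
# Hedged sketch of hybrid reasoning: symbolic lookup first, (stubbed)
# physics simulation in the digital twin as a fallback. Names are hypothetical.
SYMBOLIC_KB = {("cup", "is_a"): "container", ("cup", "made_of"): "ceramic"}

def simulate_tilt(obj, angle_deg, fill_level):
    """Stand-in for a physics simulation: estimate whether tilting a
    filled container spills its content."""
    return fill_level > 0 and angle_deg > 45  # crude threshold, not real physics

def query(subject, predicate, **params):
    if (subject, predicate) in SYMBOLIC_KB:   # symbolic reasoning
        return SYMBOLIC_KB[(subject, predicate)]
    if predicate == "spills_when_tilted":     # simulation-based reasoning
        return simulate_tilt(subject, **params)
    return None

print(query("cup", "is_a"))                                              # -> container
print(query("cup", "spills_when_tilted", angle_deg=60, fill_level=0.8))  # -> True
```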
Michael Suppa from Roboception GmbH gives useful insights into robot perception applications in real-world environments. Roboception provides 3D vision hardware and software solutions that enable industrial robotic systems to perceive their environments in real time. His talk introduces sensing principles, confidence and error modelling, as well as pose estimation and SLAM (simultaneous localization and mapping). He also lists the requirements for real-world perception and manipulation systems in industrial environments. His informative and application-oriented talk is suitable for beginners.