Using a robot to rearrange objects on a table is a relatively easy task to solve, since the top view of the environment allows the robot to see all objects it has to deal with.
One of the main challenges for mobile robots in semi-structured and possibly cluttered environments like supermarkets is to optimize navigation quality, which in this research concerns the trade-off between safety and performance.
Imagine this: you are walking through the cereal aisle of your local supermarket when you see a pack of ice cream slowly thawing into a small pool of sadness. You don’t want to see that, right?
For my thesis I was awarded a 9.0, meaning that I graduated cum laude for the Master Vehicle Engineering with a specialization in Perception and Modeling.
Robots perform flawless demos in controlled environments under human supervision, but tend to fail in the real world, especially when deployed for long periods. With increasing complexity, more components are added to the system, which in turn increases the probability of faults.
The thesis focuses on answering two main questions: 1. Can we enhance the capability of a robotic system using situational awareness and self-adaptation? 2. Can explicit knowledge representation and automated reasoning about the robot’s internal components be exploited to obtain an intrinsically fault-tolerant and reliable system?
I am developing a solution that provides a robotic system with capabilities such as situational awareness and self-adaptation, and allows the robot to perform deductive reasoning over its knowledge, enhancing reliability and making its decisions justifiable. The main contributions of this thesis work are: • a self-adaptive, situational-aware mobile robot localisation framework that performs run-time reconfiguration • fault detection, isolation and recovery for the localisation system using deductive reasoning • a reusable and generalised knowledge schema for ROS-based systems.
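To illustrate the general idea of deductive fault isolation over a knowledge base of robot components, here is a toy sketch. The component names, dependency rules, and observations are all hypothetical illustrations, not the thesis implementation, which reasons over a full ROS knowledge schema.

```python
# Toy sketch of deductive fault isolation: given observed component
# health and a dependency graph, deduce the root cause of a failure.
# All names and rules below are made-up examples.

# Facts: observed health of (ROS-style) components.
observations = {
    "lidar_driver": False,        # sensor driver stopped publishing
    "amcl_localisation": False,   # localisation failed as a consequence
    "map_server": True,
}

# Rules: which components each component depends on.
dependencies = {
    "amcl_localisation": ["lidar_driver", "map_server"],
    "navigation": ["amcl_localisation"],
}

def isolate_fault(component):
    """Recursively follow failing dependencies; the deepest failing
    component with no failing dependencies is the root cause."""
    for dep in dependencies.get(component, []):
        if not observations.get(dep, True):
            return isolate_fault(dep)
    return component

root_cause = isolate_fault("navigation")
print(root_cause)  # the lidar driver, not the localisation node itself
```

Isolating the root cause rather than the first failing symptom is what makes the recovery step targeted: here a recovery action would restart the sensor driver instead of repeatedly restarting the localisation node.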
For robots to increase efficiency in stores, it is important that they can navigate autonomously in a store environment, in which changes occur on top of the previously known environment.
Paul van Houtum
Sensor technologies triggered my interest during my time studying at TU Delft. Being able to perform my master’s thesis research in this field, while using an actual robot at AIRLab, is great! The research focuses on classifying objects by their function (affordance), rather than their type, using perception sensors.
Mobile manipulators are becoming more applicable in dynamic environments. Algorithms for task planning and execution for autonomous robots need to be more adaptive, reactive, and fault-tolerant against unforeseen contingencies.
Anne van der Star
Since I built a cucumber-picking robot during my robotics minor, I have a special interest in working with robotics and biological products. Therefore, I have been working at the TU Delft AgTech Institute during my studies and joined AIRLab for my thesis project.
My project is about Bayesian neural networks, which place posterior distributions over weights instead of point estimates, so that uncertainty can be reflected in the predictions. The information in the posterior distributions also enables model compression and architecture search.
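A minimal numerical sketch of both ideas, assuming a single Bayesian linear layer with an independent Gaussian posterior per weight (all numbers are illustrative, and the signal-to-noise pruning rule is just one common compression heuristic, not necessarily the project’s method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior over 3 weights: per-weight mean and std dev.
mu = np.array([1.5, -0.2, 0.8])
sigma = np.array([0.1, 0.5, 0.05])

x = np.array([2.0, 1.0, -1.0])  # one example input

# Monte Carlo prediction: sample weights from the posterior, average
# the outputs; the spread of the samples reflects model uncertainty.
samples = [(mu + sigma * rng.standard_normal(3)) @ x for _ in range(2000)]
pred_mean = np.mean(samples)   # close to mu @ x = 2.0
pred_std = np.std(samples)     # predictive uncertainty from the weights

# Compression heuristic: prune weights whose posterior mean is small
# relative to its uncertainty (low signal-to-noise ratio).
snr = np.abs(mu) / sigma       # [15.0, 0.4, 16.0]
keep = snr > 2.0               # the noisy middle weight gets pruned
```

The same sampling loop that produces the uncertainty estimate thus also exposes which weights carry little reliable signal, which is where the compression angle comes from.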
My goal is to make teaching robots how to perform interaction tasks more intuitive and safe. The particular focus of my work is on multi-modal interaction tasks in which both force interaction and position control have to be executed. In order to achieve this, I employ learning from demonstration coupled with further machine learning techniques.
I am working on a dynamic and stochastic vehicle routing problem that focuses on routing a fleet of vehicles for last-mile delivery in retail. Such problems involve many complexities and require decisions to be made based on incomplete information. My goal is to improve assignment and routing performance by eliminating the adverse effects of incomplete information using suitable anticipatory techniques.
The important part of having a robot in a retail environment is not so much that it has to work perfectly. The important part is that people actually feel comfortable having the robot around. The goal of my research is to create a path planning algorithm that adapts to the environment, with emphasis on the people in the environment. I believe that with a combination of self-adaptation and machine learning you can create a robot that mimics socially acceptable retail store behaviour.
In order to use robots for stocking shelves, it’s important to know what the robot sees and how it interprets what it sees. One of the relevant things a robot sees in a supermarket is the products. We humans can very quickly determine which product we see based on multiple characteristics (material, text, colour, size, environment, etc.). A robot cannot do that on its own; it needs algorithms to determine which product it sees. This branch of computer vision is called “object recognition”.
Imagine a supermarket in which robots aid you during your shopping experience. Imagine a retail warehouse in which a robot operates hand in hand with humans and in which we as humans can focus on the interesting and creative part of the task.
Moving from industrial applications closer to humans, the tasks of a robot will vary with every new request. Machine learning offers great potential to generalize to such flexible task definitions, and mobile manipulators provide a versatile platform for goals of varying scale.
In my master thesis, I am researching Reinforcement Learning approaches to achieve flexible motion planning tasks by leveraging data from expert demonstrations.
Robots are expected to interact more richly with the world. This is why we roboticists are no longer content with simply detecting and recognizing objects in images. Instead, what is desired is higher-level understanding of, and reasoning about, complete dynamic 3D scenes.
When setting up an automated workflow, the first questions that arise are: How many robots do I need to fulfill the whole task? What kind of robots do I need? I am applying a solution to these questions to last-mile delivery for retail: I design a fleet for grocery delivery, taking into account customer satisfaction and the amount of traffic on the road.
If you placed a mammal in an unknown environment, it would immediately start exploring. After sensing the environment, it would interact with the objects in it, to learn, for example, which ones are heavy and which ones are light. In my research I am creating a robot that mimics this learning method. The robot is given a task in an unknown environment, and after every interaction with its surroundings it stores the new knowledge learned. When given a task, can the robot determine a list of subtasks, with a set of plausible controllers, leading to success?