Cognitive Systems & Robotics
Within the Cognitive Systems group we research and develop methodologies and infrastructure for next-generation service robots. We focus on the systematic integration of mobile service robots into domestic environments such as homes and office spaces in order to assist humans in routine tasks. Such spaces are not static in their composition and are inherently unstructured, posing several challenges in sensing, navigation, manipulation and decision making for robots. Humans, on the other hand, acclimatize to such spaces in no time, thanks to their ability to perceive the state of the environment, draw on common sense knowledge, and make decisions that enable interaction with their surroundings. In our group, we try to model these abilities and transfer them to reactive systems such as service robots, imparting the intelligence they need to coexist and collaborate in human spaces.
Furthermore, we focus on the usability of robot systems in domestic spaces. There is a widespread perception that robot systems are inherently complex and usable only by experts. This problem is exacerbated in domestic environments, where the potential user is likely never to have interacted with a robot before. Most robot systems today are instructed by scripting tasks and their parameters in a detailed and explicit manner. By contrast, conversations, which are technically the instructions exchanged between humans, are to a very large extent underspecified. Nevertheless, humans can fulfil such underspecified tasks by inferring the implicit information through common sense reasoning backed by audio-visual perception. Taking inspiration from these capabilities, we try to make robots both intelligent and easy to instruct.
To impart intelligence to service robots we employ knowledge representation and logical reasoning backed by active perception. Knowledge about sensors, actuators, objects, tasks, capabilities, services, etc. is represented in semantic form through ontologies. These ontologies also contain common sense concepts that enable the inference of implicit information. This allows the robot system to make decisions under circumstances where information is missing or incorrect due to errors in task execution or disturbances from the environment. The expressed knowledge is also used to enhance the robustness of existing technologies for visual perception, navigation and manipulation.
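As a rough illustration of this idea, the sketch below models a tiny hand-written ontology with a class hierarchy and common-sense defaults, and shows how a default can fill in for missing sensor data. All concept names and the dictionary-based representation are invented for illustration; a real system would use a proper ontology language and reasoner.

```python
# Illustrative sketch (hypothetical data, not the group's actual system):
# a minimal ontology as a dict with an is_a hierarchy and common-sense
# defaults, used to infer an object's likely location when sensing fails.

ONTOLOGY = {
    "Mug":       {"is_a": "Cup"},
    "Cup":       {"is_a": "Container", "default_location": "kitchen_shelf"},
    "Container": {"is_a": "Object"},
    "Object":    {"is_a": None},
}

def lookup(concept, prop):
    """Walk up the is_a hierarchy until a value for prop is found."""
    while concept is not None:
        entry = ONTOLOGY.get(concept, {})
        if prop in entry:
            return entry[prop]
        concept = entry.get("is_a")
    return None

def infer_location(obj_class, observed):
    """Prefer sensor observations; fall back to the common-sense default."""
    return observed.get(obj_class) or lookup(obj_class, "default_location")

print(infer_location("Mug", {}))               # no sensor data: default is used
print(infer_location("Mug", {"Mug": "table"})) # sensor data takes precedence
```

The key point is the fallback: when perception yields nothing, the inherited default keeps the robot able to act instead of stalling.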
The capability of a robot to navigate an environment is determined by its mapping ability. Going beyond conventional mapping techniques, we work on enriching environment maps with semantic information. This allows robots to treat maps not merely as localization tools but as networks of 3D entities annotated with semantic and relational information. Such data allows the robot to reason about and infer navigation primitives from object-level instructions.
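A minimal sketch of what such a semantic map might look like: entities carry a type, a pose, and relations to other entities, so an object-level instruction like "go to the fridge" can be resolved to a metric navigation goal. The entity names, poses, and relation keys here are purely illustrative assumptions.

```python
# Illustrative sketch (hypothetical data): an environment map enriched with
# semantic annotations, modelled as a network of entities with poses and
# spatial relations rather than a plain occupancy grid.

SEMANTIC_MAP = {
    "fridge":  {"type": "Appliance", "pose": (4.2, 1.0, 0.0), "in_room": "kitchen"},
    "table":   {"type": "Furniture", "pose": (2.0, 3.5, 0.0), "in_room": "living_room"},
    "kitchen": {"type": "Room",      "pose": (4.0, 1.5, 0.0)},
}

def goal_for(target):
    """Resolve an object-level instruction target to a metric goal pose."""
    entity = SEMANTIC_MAP.get(target)
    return entity["pose"] if entity else None

def room_of(target):
    """Infer which room an entity is in from its semantic relations."""
    return SEMANTIC_MAP.get(target, {}).get("in_room")

print(goal_for("fridge"))  # metric pose the navigation stack can consume
print(room_of("fridge"))   # relational knowledge usable for reasoning
```

In a real system these entities would be anchored in 3D geometry and fed to a navigation planner; the point is that the map answers semantic queries, not just "is this cell occupied".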
Since these robot systems are intended to operate in human-inhabited environments, they can benefit from communicating with humans. When ambiguities cannot be resolved with the available knowledge and sensor data, the robot needs to ask humans for the missing information. To facilitate this, we also work on human-robot dialogue mechanisms that couple natural language processing techniques with common sense knowledge.
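One way to picture this fallback to dialogue: when a command such as "bring the cup" matches several candidates in the robot's world model, the robot asks a clarifying question instead of guessing. The function and attribute names below are invented for this sketch and stand in for a full language-understanding pipeline.

```python
# Illustrative sketch (hypothetical names): resolving a referent from an
# instruction against the world model, falling back to a clarifying
# question when the reference is ambiguous or unresolvable.

def resolve_referent(noun, world_model):
    """Return ('act', object) if unique, else ('ask', question)."""
    candidates = [obj for obj in world_model if obj["type"] == noun]
    if len(candidates) == 1:
        return ("act", candidates[0])
    if not candidates:
        return ("ask", f"I cannot find a {noun}. Where should I look?")
    colours = ", ".join(obj["colour"] for obj in candidates)
    return ("ask", f"I see {len(candidates)} of them ({colours}). Which one do you mean?")

world = [
    {"type": "cup", "colour": "red"},
    {"type": "cup", "colour": "blue"},
]
action, payload = resolve_referent("cup", world)
print(action, payload)  # ambiguous: the robot asks rather than guesses
```

The design choice worth noting is that dialogue is a recovery strategy: the robot exhausts its own knowledge and perception first, and queries the human only for what it genuinely cannot infer.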