Abstract
Cognitive science findings show that humans are able to create simulated mental environments based on their episodic memory and use such environments for prospecting, planning, and learning. Such capabilities could enhance current robotic systems, allowing them to predict the outcome of a plan before actually performing the action in the real world. They would also allow robots to use this simulated world to learn new tasks and improve existing ones using Reinforcement Learning approaches. In this work, we propose a semantic modeling framework that is able to express intrinsic semantic knowledge in order to better represent robots, places, and objects, while also being a memory-efficient alternative to classic mapping solutions. We show that such data can be used to automatically generate a complete mental simulation, allowing robots to simulate themselves and other modeled agents in known environments. These simulations allow robots to perform autonomous learning and planning without the need for human-tailored models.
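The "triplet ontological semantic model" named in the title suggests knowledge stored as subject-predicate-object facts. The paper's actual schema is not shown here, so the following is only a rough sketch of that general idea, with hypothetical entity and predicate names:

```python
# Minimal sketch of a subject-predicate-object ("triplet") knowledge base,
# loosely illustrating the kind of semantic model the abstract describes.
# All names (robot1, located_in, etc.) are hypothetical examples.

triples = set()

def add(subject, predicate, obj):
    """Record one semantic fact as a (subject, predicate, object) triple."""
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    ]

# Example facts about a robot, a place, and an object
add("robot1", "is_a", "MobileRobot")
add("robot1", "located_in", "kitchen")
add("cup1", "is_a", "Cup")
add("cup1", "located_in", "kitchen")

# Pattern query: everything located in the kitchen
kitchen_contents = query(predicate="located_in", obj="kitchen")
```

Such a store is compact compared to metric maps, which matches the memory-efficiency claim; a simulator could, in principle, instantiate the referenced entities from these facts.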
| Original language | English |
|---|---|
| Pages (from-to) | 65-73 |
| Number of pages | 9 |
| Journal | CEUR Workshop Proceedings |
| Volume | 2487 |
| State | Published - 2019 |
| Event | Joint 1st International Workshop on the Semantic Descriptor, Semantic Modeling and Mapping for Humanlike Perception and Navigation of Mobile Robots toward Large Scale Long-Term Autonomy and the 3rd International Workshop on the Applications of Knowledge Representation and Semantic Technologies in Robotics, SDMM 2019 and AnSWeR 2019 - Macau, China Duration: 8 Nov 2019 → … |
Title: Mental simulation for autonomous learning and planning based on triplet ontological semantic model