Learning on real robots from experience and simple user feedback

  1. Quintía Vidal, Pablo
  2. Iglesias Rodríguez, Roberto
  3. Rodríguez González, Miguel Ángel
  4. Regueiro, Carlos V.
Journal:
JoPha: Journal of Physical Agents

ISSN: 1888-0258

Year of publication: 2013

Issue title: Special issue on advances on physical agents

Volume: 7

Issue: 1

Pages: 8

Type: Article

DOI: 10.14198/JOPHA.2013.7.1.08


Abstract

In this article we describe a novel algorithm that allows fast and continuous learning on a physical robot working in a real environment. The learning process is never stopped, and new knowledge gained from robot-environment interactions can be incorporated into the controller at any time. Our algorithm lets a human observer control the reward given to the robot, hence avoiding the burden of defining a reward function. Despite this highly non-deterministic reinforcement, the experimental results described in this paper show that the learning processes, which never stop, achieve fast adaptation to the diversity of situations the robot encounters while moving in several environments.
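
The abstract only summarizes the approach; the paper's actual algorithm is not reproduced in this record. As a rough illustration of the general idea (a never-ending learning loop in which the reward comes from a human observer instead of a hand-crafted reward function), the following minimal Python sketch uses a plain tabular Q-learning update. The action set, the learning parameters, and the human_feedback, get_state and execute hooks are all assumptions made for illustration, not the authors' method.

# Illustrative sketch only: not the algorithm described in the paper.
# It shows continuous, human-rewarded learning with a simple tabular
# Q-learning update; actions, parameters and hooks are assumptions.
import random
from collections import defaultdict

ACTIONS = ["forward", "turn_left", "turn_right"]   # assumed discrete action set
ALPHA, GAMMA, EPSILON = 0.3, 0.9, 0.1              # assumed learning parameters

q_table = defaultdict(float)                       # Q(s, a), updated online


def choose_action(state):
    """Epsilon-greedy policy over the current Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])


def human_feedback():
    """Stand-in for the observer's signal; here a console prompt (-1, 0 or 1)."""
    key = input("feedback [-1/0/1]: ").strip()
    return float(key) if key in ("-1", "0", "1") else 0.0


def learning_loop(get_state, execute):
    """Never-ending loop: act, collect the human reward, update, repeat."""
    state = get_state()
    while True:                                    # learning is never stopped
        action = choose_action(state)
        execute(action)                            # move the (real or simulated) robot
        reward = human_feedback()                  # sparse, noisy human reinforcement
        next_state = get_state()
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (
            reward + GAMMA * best_next - q_table[(state, action)]
        )
        state = next_state

Because the update runs inside the control loop itself, any feedback the observer gives is folded into the controller immediately, which is the property the abstract emphasizes (learning that is never stopped and adapts on the fly).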