Quarterly (winter, spring, summer, fall)
128 pp. per issue
7 x 10, illustrated
ISSN: 1064-5462
E-ISSN: 1530-9185
2014 impact factor: 1.39

Artificial Life

Winter 2004, Vol. 10, No. 1, Pages 65-81
(doi: 10.1162/106454604322875913)
© 2004 Massachusetts Institute of Technology
Learning Obstacle Avoidance with an Operant Behavior Model
Abstract

Artificial intelligence researchers have been attracted by the idea of having robots learn how to accomplish a task rather than being programmed to do so explicitly. Reinforcement learning has been proposed as an appealing framework for controlling mobile agents. Robot learning research, like research on biological systems, faces many similar problems in achieving high flexibility across a variety of tasks. In this work, the control of a vehicle in an avoidance task by a previously developed operant learning model (a form of animal learning) is studied. An environment is simulated in which a mobile robot with proximity sensors must minimize the punishment received for colliding with obstacles. The results were compared with those of the Q-learning algorithm, and the proposed model showed better performance. In this way, a new artificial intelligence agent inspired by research in neurobiology, psychology, and ethology is proposed.
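The abstract names Q-learning as the baseline the operant model is compared against. As a rough illustration of that baseline, the sketch below shows tabular Q-learning on a discretized obstacle-avoidance task. The simulation details are not given in the abstract, so the sensor encoding, action set, learning-rate values, and all identifiers here are assumptions, not the paper's actual setup.

```python
import random

# Hypothetical action set and hyperparameters (not from the paper).
ACTIONS = ["left", "forward", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def discretize(sensors):
    # Map each proximity reading to near (1) / far (0) to keep the table small.
    return tuple(1 if s < 0.3 else 0 for s in sensors)

class QLearner:
    """Tabular Q-learning: collisions are signaled by negative reward
    (punishment), which the agent learns to minimize."""

    def __init__(self):
        self.q = {}  # (state, action) -> estimated value

    def choose(self, state):
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update toward reward + discounted best next value.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

In this sketch a collision would deliver a reward of, say, -1, so state-action pairs that lead to obstacles accumulate negative value and the greedy policy steers away from them.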