Designing fuzzy controllers using evolutionary algorithms and reinforcement learning is an important subject in robot control. In this article, several methods for solving reinforcement fuzzy control problems are studied. All of these methods combine Fuzzy Q-Learning with an optimization algorithm: Ant Colony Optimization, Bee Colony Optimization, or Artificial Bee Colony optimization. Comparing these algorithms on the Truck Backer-Upper problem, a reinforcement fuzzy control problem, shows that the Artificial Bee Colony optimization algorithm is the most efficient when combined with Fuzzy Q-Learning.
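For context, Fuzzy Q-Learning (FQL) maintains a set of candidate consequents for every fuzzy rule and learns a q-value for each rule-consequent pair; the global control action and its Q-value are the firing-strength-weighted blends of the per-rule choices. The sketch below illustrates this generic FQL update in the spirit of [2, 3]; it is a minimal illustration under stated assumptions, not the authors' implementation, and the class name, candidate-action layout, and learning parameters are assumptions. The optimization algorithms compared in the paper (ACO, BCO, ABC) are typically layered on top of such a learner to choose or tune the rule consequents.

```python
# Minimal, generic sketch of Fuzzy Q-Learning (FQL) in the spirit of
# Glorennec and Jouffe [2, 3]. NOT the authors' exact method: the class name,
# candidate-action set, and learning parameters are illustrative assumptions.
import numpy as np

class FuzzyQLearning:
    def __init__(self, n_rules, candidate_actions, alpha=0.05, gamma=0.95, epsilon=0.1):
        self.actions = np.asarray(candidate_actions)          # discrete consequents shared by all rules
        self.q = np.zeros((n_rules, len(candidate_actions)))  # q-value per (rule, candidate action)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, firing_strengths):
        """Pick one candidate consequent per rule (epsilon-greedy), then blend them."""
        w = firing_strengths / (firing_strengths.sum() + 1e-12)   # normalized rule activations
        greedy = self.q.argmax(axis=1)
        explore = np.random.rand(len(w)) < self.epsilon
        chosen = np.where(explore, np.random.randint(self.q.shape[1], size=len(w)), greedy)
        u = np.dot(w, self.actions[chosen])                       # defuzzified global control action
        q_sa = np.dot(w, self.q[np.arange(len(w)), chosen])       # Q-value of the taken global action
        return u, chosen, w, q_sa

    def update(self, chosen, w, q_sa, reward, next_firing_strengths):
        """TD update: distribute the error over the rules that fired."""
        w_next = next_firing_strengths / (next_firing_strengths.sum() + 1e-12)
        q_next = np.dot(w_next, self.q.max(axis=1))               # greedy value of the next state
        td_error = reward + self.gamma * q_next - q_sa
        self.q[np.arange(len(w)), chosen] += self.alpha * td_error * w
```

In a Truck Backer-Upper setting, the firing strengths would come from fuzzy membership functions over the truck's position and angle, and each call to act/update corresponds to one control step; an ACO-, BCO-, or ABC-style search can then be used to explore alternative consequent assignments across episodes.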
Published in: American Journal of Software Engineering and Applications (Volume 5, Issue 3-1). This article belongs to the Special Issue Advances in Computer Science and Information Technology in Developing Countries.
DOI: 10.11648/j.ajsea.s.2016050301.16
Page(s): 25-29
Creative Commons: This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright: Copyright © The Author(s), 2017. Published by Science Publishing Group
Keywords: Mobile Robot, Fuzzy Q-Learning, Ant Colony Optimization-Fuzzy Q-Learning, Bee Colony Optimization-Fuzzy Q-Learning, Artificial Bee Colony-Fuzzy Q-Learning
[1] H. R. Berenji, “Fuzzy Q-learning for generalization of reinforcement learning,” IEEE Int. Conf. Fuzzy Systems, 1996.
[2] P. Y. Glorennec, “Fuzzy Q-learning and dynamic fuzzy Q-learning,” IEEE Int. Conf. Fuzzy Systems, Orlando, 1994.
[3] P. Y. Glorennec, L. Jouffe, “Fuzzy Q-learning,” IEEE Int. Conf. Fuzzy Systems, 1997.
[4] L. Jouffe, “Fuzzy inference system learning by reinforcement methods,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., Vol. 28 (3), pp. 338–355, 1998.
[5] C. F. Juang, “Ant Colony Optimization Incorporated With Fuzzy Q-Learning for Reinforcement Fuzzy Control,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 39, May 2009.
[6] L. P. Wong, M. Y. H. Low, C. S. Chong, “A Bee Colony Optimization Algorithm for Traveling Salesman Problem,” Second Asia International Conference on Modelling & Simulation, IEEE, 2008.
[7] L. P. Wong, M. Y. H. Low, C. S. Chong, “Bee Colony Optimization with Local Search for Traveling Salesman Problem,” 2008.
[8] M. S. Kiran, H. Iscan, M. Gunduz, “The analysis of discrete artificial bee colony algorithm with neighborhood operator on traveling salesman problem,” Neural Computing and Applications, 2013.
[9] W. L. Xiang, M. Q. An, “An efficient and robust artificial bee colony algorithm for numerical optimization,” Computers and Operations Research, pp. 1256–1265, 2013.
[10] S. Saeed, A. Niknafs, “Artificial Bee Colony-Fuzzy Q Learning for Reinforcement Fuzzy Control (Truck Backer-Upper Control Problem),” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, Vol. 24, No. 1, pp. 123-136, 2016.
APA Style
Sima Saeed, Aliakbar Niknafs. (2017). Optimization Algorithms Incorporated Fuzzy Q-Learning for Solving Mobile Robot Control Problems. American Journal of Software Engineering and Applications, 5(3-1), 25-29. https://doi.org/10.11648/j.ajsea.s.2016050301.16
ACS Style
Sima Saeed; Aliakbar Niknafs. Optimization Algorithms Incorporated Fuzzy Q-Learning for Solving Mobile Robot Control Problems. Am. J. Softw. Eng. Appl. 2017, 5(3-1), 25-29. doi: 10.11648/j.ajsea.s.2016050301.16