000 | 01697naa a2200193 4500 | ||
---|---|---|---|
008 | 160926b xxu||||| |||| 00| 0 eng d | ||
100 | _aYogeswaran, Mohan | ||
245 | _aReinforcement Learning: Exploration-Exploitation Dilemma in Multi-Agent Foraging Task | ||
260 | _a _b _c | ||
300 | _a49 (3) Jul-Sep 2012, 223-236p. | ||
520 | _aThe exploration-exploitation dilemma has been an unresolved issue within the framework of multi-agent reinforcement learning. The agents have to explore in order to improve the state, which potentially yields higher rewards in the future, or exploit the state that yields the highest reward based on the existing knowledge. Pure exploration degrades the agent's learning but increases the flexibility of the agent to adapt in a dynamic environment. On the other hand, pure exploitation drives the agent's learning process to locally optimal solutions. Various learning policies have been studied to address this issue. This paper presents critical experimental results on a number of learning policies reported in the open literature. The learning policies, namely greedy, ε-greedy, Boltzmann Distribution (BD), Simulated Annealing (SA), Probability Matching (PM) and Optimistic Initial Values (OIV), are implemented to study their performance on a modelled multi-agent foraging task. Based on the numerical results obtained, the performance of the learning policies is discussed. | ||
650 | _aReinforcement Learning | ||
650 | _aQ Learning | ||
650 | _aLearning Policies | ||
700 | _aExploration-Exploitation Dilemma | ||
773 | 0 | _d _oB-2508 _tBV- Opsearch (Jan - Dec 2012) | |
906 | _aGeneral Management | ||
942 | _2ddc _c8 | ||
999 | _c90357 _d90357 | ||
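
As an illustrative aside on two of the learning policies named in the 520 abstract above, the following is a minimal Python sketch of ε-greedy and Boltzmann (softmax) action selection over a table of Q-values. The function and parameter names (`epsilon_greedy`, `boltzmann`, `q_values`, `epsilon`, `temperature`) are assumptions chosen for illustration, not taken from the paper.

```python
import math
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Illustrative sketch: with probability epsilon pick a uniformly
    random action (explore); otherwise pick the action with the
    highest Q-value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def boltzmann(q_values, temperature=1.0):
    """Illustrative sketch: sample an action with probability
    proportional to exp(Q(a) / T). A high temperature gives
    near-uniform exploration; a low temperature approaches
    greedy exploitation."""
    prefs = [math.exp(q / temperature) for q in q_values]
    return random.choices(range(len(q_values)), weights=prefs, k=1)[0]

# Hypothetical usage with three actions:
q = [0.2, 0.5, 0.1]
a1 = epsilon_greedy(q, epsilon=0.1)   # usually returns 1 (argmax)
a2 = boltzmann(q, temperature=0.5)    # stochastic, biased toward 1
```

The paper's Simulated Annealing policy can be viewed as Boltzmann selection with a temperature that decays over the learning run, so the agent shifts gradually from exploration to exploitation.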