

Exploiting Expertise of Non-Expert and Diverse Agents in Social Bandit Learning: A Free Energy Approach
  • Erfan Mirzaei
  • Alireza Tavakoli
  • Seyed Pooya Shariatpanahi
  • Reshad Hosseini
  • Majid Nili Ahmadabadi

Corresponding Author: [email protected]


Abstract

Personalized AI-based services involve a population of individual reinforcement learning agents. However, most reinforcement learning algorithms focus on individual learning and fail to leverage the social learning capabilities commonly exhibited by humans and animals. Social learning integrates individual experience with observations of others' behavior, presenting opportunities for improved learning outcomes. In this study, we focus on a social bandit learning scenario in which a social agent observes other agents' actions without knowledge of their rewards. The agents independently pursue their own rewards without any explicit motivation to teach one another. We propose a free energy-based social bandit learning algorithm over policy space, in which the social agent evaluates others' expertise levels without resorting to any oracle or social norms. Accordingly, the social agent integrates its direct experience in the environment with others' estimated policies. The theoretical convergence of our algorithm to the optimal policy is proven. Empirical evaluations validate the superiority of our social learning method over alternative approaches in diverse scenarios. Our algorithm strategically identifies the relevant agents, even in the presence of random or sub-optimal agents, and skillfully exploits their behavioral information. Beyond societies that include expert agents, our algorithm also significantly enhances individual learning performance in the presence of relevant but non-expert agents, where most related methods fail. Importantly, it maintains logarithmic regret.
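To make the setting concrete, the following is a minimal sketch of the general idea described above: a bandit agent that blends its own value estimates with the action frequencies it observes from a peer, weighted by an assumed expertise weight, through a softmax (Boltzmann-style) policy. This is an illustrative toy, not the paper's actual free energy algorithm; the function name `social_policy` and the parameters `beta` (inverse temperature) and `w` (peer expertise weight) are hypothetical choices for this example.

```python
import math

def social_policy(q, peer_counts, beta=5.0, w=0.5):
    """Illustrative sketch (not the paper's algorithm): combine the
    agent's own action-value estimates `q` with a peer's observed
    action counts into one softmax policy.

    q           -- list of the agent's value estimates, one per arm
    peer_counts -- how often the peer was seen choosing each arm
    beta        -- inverse temperature of the softmax (assumed)
    w           -- assumed expertise weight given to the peer
    """
    total = sum(peer_counts) or 1
    # Estimate the peer's policy from its observed action frequencies.
    peer_policy = [c / total for c in peer_counts]
    # Score each arm: own value term plus a log-weighted peer term
    # (small constant avoids log(0) for never-observed arms).
    scores = [beta * q[a] + w * math.log(peer_policy[a] + 1e-6)
              for a in range(len(q))]
    # Numerically stable softmax over the combined scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

With `q = [0.2, 0.8]` and `peer_counts = [1, 9]`, both the agent's own estimates and the peer's behavior favor the second arm, so the blended policy concentrates there; lowering `w` toward zero recovers a purely individual softmax policy.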
24 Mar 2024: Submitted to TechRxiv
30 Mar 2024: Published in TechRxiv