DYNAMIC SELECTION OF P-NORM IN LINEAR ADAPTIVE FILTERING VIA ONLINE KERNEL-BASED REINFORCEMENT LEARNING
  • Minh Vu,
  • Yuki Akiyama,
  • Konstantinos Slavakis
Minh Vu
Tokyo Institute of Technology

Corresponding Author: [email protected]


Abstract

This study addresses the problem of dynamically selecting, at each time instance, the “optimal” p-norm to combat outliers in linear adaptive filtering, without any knowledge of the potentially time-varying probability density function of the outliers. To this end, an online and data-driven framework is designed via kernel-based reinforcement learning (KBRL). Novel Bellman mappings on reproducing kernel Hilbert spaces (RKHSs) are introduced that require no knowledge of the transition probabilities of Markov decision processes and are nonexpansive with respect to the underlying Hilbertian norm. Finally, an approximate policy-iteration framework is offered via the introduction of a finite-dimensional affine superset of the fixed-point set of the proposed Bellman mappings. The well-known “curse of dimensionality” in RKHSs is addressed by building a basis of vectors via an approximate-linear-dependency criterion. Numerical tests on synthetic data demonstrate that the proposed framework always selects the “optimal” p-norm for the outlier scenario at hand, while outperforming several non-RL and KBRL schemes.
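For readers unfamiliar with the two standard ingredients the abstract names, the following minimal Python sketch illustrates (i) a least-mean-p-power (LMP) update, i.e., a linear adaptive filter driven by a user-chosen p-norm of the estimation error, and (ii) the approximate-linear-dependency (ALD) admission test commonly used to keep a kernel dictionary small. This is background material only, not the paper's algorithm; the function names, step size `mu`, and threshold `nu` are illustrative choices.

```python
import numpy as np

def lmp_update(w, x, d, mu=0.01, p=2.0):
    """One least-mean-p-power (LMP) step: stochastic gradient descent
    on |e|^p, where e = d - w^T x. p = 2 recovers LMS; p = 1 gives the
    sign-error LMS, which is more robust to heavy-tailed outliers.
    (Illustrative background; in the paper, an RL agent picks p online.)"""
    e = d - w @ x  # a-priori estimation error
    # gradient of |e|^p w.r.t. w is -p|e|^(p-1) sign(e) x; p is absorbed into mu
    return w + mu * np.abs(e) ** (p - 1.0) * np.sign(e) * x, e

def ald_test(K_inv, k_vec, k_xx, nu=1e-2):
    """Approximate-linear-dependency (ALD) test: admit a new feature
    phi(x) into the kernel dictionary only if the squared residual of
    its projection onto the span of the current dictionary exceeds nu.
    K_inv : inverse Gram matrix of the current dictionary
    k_vec : kernel values k(x, d_i) against the dictionary atoms
    k_xx  : kernel value k(x, x)"""
    a = K_inv @ k_vec        # least-squares projection coefficients
    delta = k_xx - k_vec @ a  # squared projection residual
    return delta > nu, a
```

In the paper's setting, a learned policy would choose p for each `lmp_update` call, and an ALD-style test would bound the growth of the RKHS representation underlying that policy.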
---
© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.