TechRxiv

ONLINE AND LIGHTWEIGHT KERNEL-BASED APPROXIMATE POLICY ITERATION FOR DYNAMIC P-NORM LINEAR ADAPTIVE FILTERING

preprint
posted on 2022-10-26, 05:35, authored by Yuki Akiyama, Minh Vu, Konstantinos Slavakis

This paper introduces a solution to the problem of dynamically (online) selecting the "optimal" p-norm to combat outliers in linear adaptive filtering, without any knowledge of the probability density function of the outliers. The proposed online and data-driven framework is built on kernel-based reinforcement learning (KBRL). To this end, novel Bellman mappings on reproducing kernel Hilbert spaces (RKHSs) are introduced. These mappings require no knowledge of the transition probabilities of the underlying Markov decision processes and are nonexpansive with respect to the underlying Hilbertian norm. The fixed-point sets of the proposed Bellman mappings are used to build an approximate policy-iteration (API) framework for the problem at hand. To address the "curse of dimensionality" in RKHSs, random Fourier features are employed to bound the computational complexity of the API. Numerical tests on synthetic data for several outlier scenarios demonstrate the superior performance of the proposed API framework over several non-RL and KBRL schemes.
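As background for the filtering problem the abstract describes, the sketch below shows a stochastic-gradient least-mean-p-power (LMP) update, i.e., the classical p-norm linear adaptive filter whose exponent p the paper's API framework would select online. The data model, step size `mu`, and heavy-tailed noise here are illustrative assumptions for a minimal demo, not the authors' algorithm or experimental setup.

```python
import numpy as np

def lmp_update(w, x, d, p=1.5, mu=0.01):
    """One stochastic-gradient step of the least-mean-p-power (LMP) filter,
    which minimizes E|d - w^T x|^p.  For p = 2 this reduces to plain LMS."""
    e = d - w @ x                          # a-priori estimation error
    return w + mu * p * np.abs(e) ** (p - 1) * np.sign(e) * x

# Illustrative run: identify a linear system under heavy-tailed (outlier) noise.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(5)
w = np.zeros(5)
for _ in range(2000):
    x = rng.standard_normal(5)
    noise = rng.standard_t(df=1.5)         # heavy-tailed noise -> outliers
    d = w_true @ x + 0.1 * noise
    w = lmp_update(w, x, d, p=1.2)         # p < 2 tempers the outlier impact
print(np.linalg.norm(w - w_true))          # should shrink toward zero
```

The point of selecting p online is visible here: p = 2 (LMS) reacts strongly to the rare large errors, while smaller p down-weights them, and the best choice depends on the unknown outlier statistics.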

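Likewise, a minimal sketch of the random Fourier feature (RFF) approximation of a Gaussian kernel, the standard device (Rahimi and Recht) that the abstract invokes to bound computational complexity in the RKHS. The feature dimension `D` and bandwidth `sigma` are illustrative choices, not values from the paper.

```python
import numpy as np

def rff_map(X, D=100, sigma=1.0, rng=None):
    """Map the rows of X to D random Fourier features so that z(x).z(y)
    approximates the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2))."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))   # samples of the kernel's spectral density
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Sanity check: feature inner products vs. the exact Gaussian kernel (sigma = 1).
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 4))
Z = rff_map(X, D=5000, rng=rng)
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
print(np.max(np.abs(Z @ Z.T - K_exact)))             # small for large D
```

Working with the fixed-dimensional features Z instead of kernel evaluations keeps per-step cost O(D) rather than growing with the number of observed samples, which is the "curse of dimensionality" the abstract refers to.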

----

 © 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. 

History

Email Address of Submitting Author

akiyama.y.am@m.titech.ac.jp

Submitting Author's Institution

Tokyo Institute of Technology

Submitting Author's Country

  • Japan