Two equivalent multi-sensor Kalman filters with variable delays and intermittent measurements

Abstract—Kalman filtering of measurement data from multiple sensors with time-varying delays and missing measurements is considered in this work. Two existing approaches to Kalman filtering with delays are extended by removing some of their assumptions, so that the resulting filtering methods are equivalent and can be compared. The computational loads of the two methods are compared in terms of the average number of floating point operations required by each method for different system dimensionalities and delay upper bounds. The results show that neither method dominates: which one is cheaper depends on the comparison conditions.

This work aims to extend the first two Kalman filtering approaches mentioned above into equivalent methods such that a comparison between them is made possible. The extensions allow for tackling intermittent measurements from multiple sensors subject to time-varying delays. For the first approach, the filtering method in [9], which is found to be a more general formulation among the existing results, is extended to the case of multi-sensor measurements while modifying the underlying notation. The result is presented as a self-contained optimal filtering algorithm. The formulation of the second approach is also extended by adding the capability to handle missing measurements and simultaneous or out-of-order arrival of measurement samples from previous time steps. Then, the computational loads of the two methods are compared for different values of the delay upper bound and the system's dimensionality by evaluating the average count of floating point operations (flops) required in each case. The results show that each method can outperform the other under a subset of conditions.
Notation: The set of integers {a, a+1, ..., b} is denoted by {a..b}. For an ordered set S, a set of elements M_i for i ∈ S is denoted as {M_i}_{i∈S}; in particular, {M_i}_{i=a}^b stands for {M_i}_{i∈{a..b}}. Vertical and diagonal concatenations of the matrices {M_i}_{i=a}^b are denoted by cat{M_i}_{i=a}^b and diag{M_i}_{i=a}^b, respectively. The expected value of a random matrix V is denoted by E{V}. A matrix with zero elements is simply denoted by 0 if its dimensions can be inferred from the containing formula. An empty matrix is denoted by 0_0, which is allowed to be concatenated with an arbitrary matrix M as cat{M, 0_0} = M.

II. PROBLEM STATEMENT
The problem under consideration in this work is to optimally estimate the state of a dynamical system described by

x_{k+1} = A_k x_k + B_k u_k + w_k, (1a)
y_k^i = H_k^i x_k + v_k^i, i ∈ {1..m}, (1b)

in which k is the discrete time step, x_k ∈ R^{n_x} is the state vector, u_k ∈ R^{n_u} is the input vector, y_k^i ∈ R^{n_i} is the ith measured output vector for i ∈ {1..m} with n_1 + ... + n_m = n_y, and w_k ∈ R^{n_w} and v_k^i ∈ R^{n_i} are uncorrelated, zero-mean white Gaussian noise vectors with covariance matrices R_k^i = E{v_k^i (v_k^i)^T} and Q_k = E{w_k w_k^T}. The ith output sample y_k^i is received by the estimator after a time-varying delay defined as

d_k^i = (arrival time step of y_k^i) − k. (2)

It is assumed that the measurement samples are time-stamped such that the delay for each data sample can be calculated from its time-stamp by the estimator. Missing measurements are also assumed to be possible, for example due to multirate sampling or communication errors. A missing measurement can be considered as a sample received with infinite delay. If a measurement sample is not missed, it is assumed to be received after a delay bounded by d_max such that

d_k^i ∈ {0..d_max} ∪ {∞}. (3)

The optimal estimation of the state x_k is given by

x̂_{k|k} = E{x_k | I_k}, (4a)

in which I_k is the information set available to the estimator at the time step k,

I_k = {y_h^i | h + d_h^i ≤ k, i ∈ {1..m}}. (4b)
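To make the delay model concrete, the following NumPy sketch (all sizes and the random arrival pattern are illustrative, not from the paper) draws a bounded delay for each (step, sensor) pair, marks some samples as missing, and groups the samples by the step at which they arrive at the estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
d_max, m, T = 3, 2, 20  # delay bound, sensor count, horizon (illustrative)

# Draw a delay for each (step, sensor) pair; None marks a missing sample,
# which is conceptually a sample received with infinite delay.
delays = [[int(rng.integers(0, d_max + 1)) if rng.random() > 0.2 else None
           for _ in range(m)] for _ in range(T)]

# arrivals[t] lists the pairs (k, i): sensor i's sample taken at step k
# is received (per its time-stamp) at step t = k + d.
arrivals = [[] for _ in range(T)]
for k in range(T):
    for i in range(m):
        d = delays[k][i]
        if d is not None and k + d < T:
            arrivals[k + d].append((k, i))
```

A sample drawn at step k with delay d is processed at step k + d; missing samples never appear in any arrival list, matching the infinite-delay convention.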
If there are no delays or missing measurements (i.e. d_k^i = 0 for all i and k), then the optimal estimation x̂_{k|k} is calculated by means of the ordinary Kalman filter in the appendix. In the presence of delays or missing measurements, the ordinary Kalman filter cannot be applied directly. In the following two sections, two equivalent approaches are presented for extending Kalman filtering to measurement data from multiple sensors with variable delays and missing measurements according to (3).
Remark 1: The problem formulation in this section is capable of handling multirate sampling systems. For this purpose, one only needs to assume that d_k^i = ∞ if no sample is taken from the ith sensor at the time step k.

III. BACKWARD RENOVATION KALMAN FILTER
In the first method, if y_k^i is received at the time step t, then the state estimator has to repeat the recursive calculations of the filter from k to t. To formulate this method, the following notation is introduced for the variables ϕ ∈ {y, v, H, R}:

φ̃_k^t = cat{φ_k^i}_{i∈S_k^t} for ϕ ∈ {y, v, H}, R̃_k^t = diag{R_k^i}_{i∈S_k^t}, S_k^t = {i ∈ {1..m} | k + d_k^i ≤ t}. (5)

The set of measurements that are sampled at k and received until t is thus concatenated into a vector ỹ_k^t defined as

ỹ_k^t = cat{y_k^i}_{i∈S_k^t}, (6)

which can be written as

ỹ_k^t = H̃_k^t x_k + ṽ_k^t, (7)

in which ṽ_k^t is a new zero-mean noise vector with covariance

R̃_k^t = E{ṽ_k^t (ṽ_k^t)^T} = diag{R_k^i}_{i∈S_k^t}. (8)

Defining the information set I_k^t as

I_k^t = {ỹ_h^t}_{h=0}^k, (9)

then we have I_k^k = I_k according to (4b) and (5) for ϕ = y.
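The stacking of the measurement channels received by time t into a single vector ỹ_k^t with stacked output matrix H̃_k^t and block-diagonal covariance R̃_k^t can be sketched as follows (a minimal NumPy illustration; the function and variable names are ours, not from the paper):

```python
import numpy as np

def block_diag(mats):
    """Diagonal concatenation diag{M_i} of a list of 2-D arrays."""
    rows = sum(m.shape[0] for m in mats)
    cols = sum(m.shape[1] for m in mats)
    out = np.zeros((rows, cols))
    r = c = 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

def stack_received(y, H, R, S):
    """Form y_tilde = cat{y^i}, H_tilde = cat{H^i}, R_tilde = diag{R^i}
    over the received index set S; if S is empty, the results are empty
    matrices, matching the 0_0 convention of the notation section."""
    n_x = H[0].shape[1]
    ys = [y[i] for i in sorted(S)]
    Hs = [H[i] for i in sorted(S)]
    Rs = [R[i] for i in sorted(S)]
    y_t = np.vstack(ys) if ys else np.zeros((0, 1))
    H_t = np.vstack(Hs) if Hs else np.zeros((0, n_x))
    R_t = block_diag(Rs) if Rs else np.zeros((0, 0))
    return y_t, H_t, R_t
```

When no sample from step k has arrived yet, the empty matrices make the filter update degenerate gracefully to the prediction step.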
The optimal estimation of the state x_k and the corresponding error covariance given the information set I_h^t, for time steps t and h satisfying t ≥ k ≥ h, are written as

x̂_{k|h}^t = E{x_k | I_h^t}, (10a)
P_{k|h}^t = E{(x_k − x̂_{k|h}^t)(x_k − x̂_{k|h}^t)^T | I_h^t}. (10b)

Since I_k^k = I_k, the optimal estimation in (4) satisfies

x̂_{k|k} = x̂_{k|k}^k, P_{k|k} = P_{k|k}^k. (11)

For a given time step t, the ordinary Kalman filter equations in the appendix can be applied to write

x̂_{k|k−1}^t = A_{k−1} x̂_{k−1|k−1}^t + B_{k−1} u_{k−1}, (12a)
P_{k|k−1}^t = A_{k−1} P_{k−1|k−1}^t A_{k−1}^T + Q_{k−1}, (12b)
K_k^t = P_{k|k−1}^t (H̃_k^t)^T (H̃_k^t P_{k|k−1}^t (H̃_k^t)^T + R̃_k^t)^{−1}, (12c)
x̂_{k|k}^t = x̂_{k|k−1}^t + K_k^t (ỹ_k^t − H̃_k^t x̂_{k|k−1}^t), (12d)
P_{k|k}^t = (I − K_k^t H̃_k^t) P_{k|k−1}^t. (12e)

By the definition in (5), the variables φ̃_k^t for ϕ ∈ {y, H, R} remain constant with respect to t if t ≥ k + d_max. Hence, the equations (12) are the same for all t ≥ k + d_max, such that

x̂_{k|k}^t = x̂_{k|k}^{k+d_max}, P_{k|k}^t = P_{k|k}^{k+d_max}, t ≥ k + d_max. (13)

Therefore, at every time step t the estimator only needs to recalculate x̂_{k|k}^t and P_{k|k}^t for k ∈ {t − d_max..t} by repeating the recursive filtering calculations in (12). For this purpose, the estimator must be equipped with data buffers of length d_max to store x̂_{k|k}^t and P_{k|k}^t for k ∈ {t − d_max..t}. The estimator also needs to keep track of the variables in (5) for ϕ ∈ {y, H, R}, which are required for repeating the recursive calculations. For this purpose, the set J_t, which indexes the samples received at t, is defined as

J_t = {(j, i) | d_{t−j}^i = j}, (14)

which according to (3) satisfies

J_t ⊆ {0..d_max} × {1..m}. (15)

The equations in this section can be converted to Algorithm 1, which receives the newly arrived information at every time step t, given by {y_{t−j}^i, H_{t−j}^i, R_{t−j}^i}_{(j,i)∈J_t}, and calculates x̂_{t|t} in (4). For this purpose, the algorithm stores x̂_{k|k}^t and P_{k|k}^t for k ∈ {t − ℓ..t} in buffers x̄, P̄ with finite length ℓ ∈ {0..d_max}. The additional buffers ȳ, H̄, R̄ are also used to store the received samples of y_k^i and the corresponding H_k^i and R_k^i for k ∈ {t − d_max..t} and i ∈ {1..m}. These values are used for repeating the recursive filter calculations between lines 8 and 13 of the algorithm. The non-stored auxiliary variables are denoted using non-italic names; in particular, x and P stand for x̂_{k|k−1}^t and P_{k|k−1}^t. The value of ℓ starts from zero and grows step by step up to d_max, after which the buffer lengths remain constant.
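The re-filtering idea — when a sample for step k0 arrives at time t, rerun the ordinary Kalman recursion from k0 to t over the buffered measurements — can be sketched as below. This is a minimal illustration of the buffering scheme, not the paper's Algorithm 1; the names and the buffer layout (plain dicts keyed by time step) are ours:

```python
import numpy as np

def kf_step(x, P, A, Q, H, y, R):
    """One ordinary Kalman step: predict with (A, Q), then update with
    the stacked received measurements (H, y, R); H may have zero rows,
    in which case the update is skipped (nothing was received)."""
    x = A @ x
    P = A @ P @ A.T + Q
    if H.shape[0] > 0:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (y - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def rerun_from(k0, t, x_buf, P_buf, meas_buf, A, Q):
    """After a sample for step k0 arrives at time t, repeat the filter
    recursion from k0 to t using the buffered measurements, overwriting
    the buffered estimates along the way."""
    x, P = x_buf[k0 - 1], P_buf[k0 - 1]
    for k in range(k0, t + 1):
        y, H, R = meas_buf[k]
        x, P = kf_step(x, P, A, Q, H, y, R)
        x_buf[k], P_buf[k] = x, P
    return x, P
```

In a full implementation the dicts would be ring buffers of length bounded by d_max, with entries older than t − d_max discarded, since by the constancy of the stacked measurements those estimates can no longer change.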
At t = 0, the buffers x̄ and P̄ must be initialized with the initial state estimate and its covariance, and the measurement buffers ȳ, H̄, R̄ must be initialized as empty.

IV. AUGMENTED SYSTEM KALMAN FILTER

In the second method for obtaining the estimation of the state in (4), the system (1) is augmented as

ξ_{k+1} = Ā_k ξ_k + B̄_k u_k + w̄_k, (17a)
ξ_k = cat{x_{k−j}}_{j=0}^{d_max}, (17b)

with the matrix coefficients

Ā_k = [A_k 0 ... 0 0; I 0 ... 0 0; 0 I ... 0 0; ...; 0 0 ... I 0], B̄_k = cat{B_k, 0, ..., 0}, w̄_k = cat{w_k, 0, ..., 0}, (18)

and the covariance matrix for w̄_k given by

Q̄_k = E{w̄_k w̄_k^T} = diag{Q_k, 0, ..., 0}. (19)

The set of samples that are received by the estimator at t can be concatenated into a vector y_k^s defined as

y_k^s = cat{y_{k−j}^i}_{(j,i)∈J_k}, (20)

with J_k in (14), which can be written as

y_k^s = H_k^s ξ_k + v_k^s, (21a)

in which v_k^s is a zero-mean noise vector with covariance

R_k^s = E{v_k^s (v_k^s)^T} = diag{R_{k−j}^i}_{(j,i)∈J_k}. (21b)

The system (17a) with the output y_k^s in (21a) is a delay-free system, and the ordinary Kalman filter in the appendix can be applied to estimate ξ_k in (17b), which includes x_k. As a result, the estimation x̂_{k|k} in (4) is obtained as

x̂_{k|k} = [I 0 ... 0] ξ̂_{k|k}. (22)

V. NUMERICAL EXAMPLE

In this section the proposed filtering methods are applied to an example system and their computational loads are compared. To this end, the time-discretized state-space model for a chain of n_x integrators is considered as

x_{k+1} = A x_k + B u_k + w_k, y_k = C x_k + v_k, (23)

in which A, B, and C result from discretizing a chain of n_x integrators with sampling period h = 0.05, x_k ∈ R^{n_x} is the vector of integrator outputs, u_k ∈ R is the input, and w_k ∈ R^{n_x} and v_k ∈ R are uncorrelated Gaussian white noises with E{w_k w_k^T} = 0.1 I and E{v_k^2} = 0.1. To stabilize the system, a linear quadratic (LQR) state-feedback controller u_k = K x_k is applied which minimizes the quadratic cost function J = Σ_{k=0}^∞ (x_k^T x_k + u_k^2). The backward renovation filter (BRF) in Section III and the augmented system filter (AGF) in Section IV are equivalent and produce the same result, since they both generate the state estimation in (4). The simulation results for the state x_k of the system (23) and the corresponding estimation x̂_{k|k} using either of the two filters are plotted in Fig. 1a through Fig. 1c.
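Returning to the augmentation in Section IV, building the augmented coefficient matrices for the stacked state ξ_k = cat{x_k, ..., x_{k−d_max}} can be sketched as follows (a NumPy illustration of the shift structure under our own block layout; the paper's exact coefficient arrangement may differ):

```python
import numpy as np

def augment(A, B, Q, d_max):
    """Build augmented matrices for xi_k = [x_k; x_{k-1}; ...; x_{k-d_max}]:
    the top block row propagates x_k with (A, B, Q), and the lower block
    rows shift each past state down one slot."""
    n = A.shape[0]
    N = n * (d_max + 1)
    A_bar = np.zeros((N, N))
    A_bar[:n, :n] = A                  # x_{k+1} = A x_k + B u_k + w_k
    A_bar[n:, :N - n] = np.eye(N - n)  # shift: slot j holds x_{k-j}
    B_bar = np.zeros((N, B.shape[1]))
    B_bar[:n, :] = B
    Q_bar = np.zeros((N, N))
    Q_bar[:n, :n] = Q                  # noise enters the top block only
    return A_bar, B_bar, Q_bar
```

After filtering the augmented state, the current estimate is recovered by taking the top block, e.g. `xi_hat[:n]`, which corresponds to the selection matrix [I 0 ... 0].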

VI. CONCLUSION
Two existing approaches to Kalman filtering with delays have been extended to equivalent methods for optimal estimation in the presence of multiple effects. These effects include time-varying delays in multiple measurement channels, missing measurements (e.g. due to packet losses or multirate sampling), and out-of-order arrival of data. The extension of the first approach, which is based on recalculation of the past filtering variables, has been presented as an algorithm using finite-length memory buffers. The second approach, which is based on system augmentation, has also been extended to tackle missing measurements and the other effects. The computational loads of the resulting equivalent filtering methods were compared in terms of the flop counts for filtering integrator chains of various lengths. The comparison results suggest that the first method performs better for long delays, while the second method can be more efficient for larger systems.

APPENDIX A
KALMAN FILTER FOR LINEAR SYSTEMS WITH VARYING OUTPUT DIMENSIONALITY

Consider a dynamical system described by

x_{k+1} = A_k x_k + B_k u_k + w_k, (24a)
y_k = H_k x_k + v_k, (24b)

with x_k ∈ R^{n_x}, u_k ∈ R^{n_u}, and y_k ∈ R^{n_k} where n_k varies with k, and uncorrelated, zero-mean white Gaussian noise vectors w_k ∈ R^{n_w} and v_k ∈ R^{n_k} with R_k = E{v_k v_k^T} and Q_k = E{w_k w_k^T}. The optimal estimation of x_k is defined as

x̂_{k|k} = E{x_k | I_k}, (25)

with I_k = {y_j | j ≤ k}. Given x̂_{0|0} and P_{0|0}, it is straightforward to show that the estimation x̂_{k|k} can be calculated recursively according to

x̂_{k|k−1} = A_{k−1} x̂_{k−1|k−1} + B_{k−1} u_{k−1}, (26a)
P_{k|k−1} = A_{k−1} P_{k−1|k−1} A_{k−1}^T + Q_{k−1}, (26b)
K_k = P_{k|k−1} H_k^T (H_k P_{k|k−1} H_k^T + R_k)^{−1}, (26c)
x̂_{k|k} = x̂_{k|k−1} + K_k (y_k − H_k x̂_{k|k−1}), (26d)
P_{k|k} = (I − K_k H_k) P_{k|k−1}. (26e)

If n_k = 0 for some k, then H_k and K_k in (26) become 0 × n_x and n_x × 0 empty matrices respectively, and the product K_k H_k in (26e) is defined to be an n_x × n_x zero matrix.
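A minimal NumPy sketch of one recursion of this filter follows, relying on NumPy's empty-matrix arithmetic so that the n_k = 0 case reduces to pure prediction exactly as the empty-matrix convention above prescribes (process noise is assumed here to enter the state directly):

```python
import numpy as np

def kf_recursion(x, P, A, B, u, Q, H, y, R):
    """One recursion of the appendix filter. When the output dimension
    n_k is zero, H is a (0, n_x) empty matrix; then K has shape (n_x, 0),
    K @ H is the n_x-by-n_x zero matrix, and the update leaves the
    prediction unchanged."""
    n_x = len(x)
    # prediction step
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    # update step; with n_k = 0 every term below is empty or zero
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (y - H @ x)
    P = (np.eye(n_x) - K @ H) @ P
    return x, P
```

Note that `np.linalg.inv` of a 0 × 0 matrix returns a 0 × 0 matrix, so no special-casing of empty measurements is needed.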