 Research article
 Open Access
Universal nonlinear filtering using Feynman path integrals II: the continuous-continuous model with additive noise
PMC Physics A volume 3, Article number: 2 (2009)
Abstract
In this paper, the Feynman path integral formulation of the continuous-continuous filtering problem, a fundamental problem of applied science, is investigated for the case when the noise in the signal and measurement model is Gaussian and additive. It is shown that it leads to an independent and self-contained analysis and solution of the problem. A consequence of this analysis is the configuration space Feynman path integral formula for the conditional probability density that manifests the underlying physics of the problem. A corollary of the path integral formula is the Yau algorithm that has been shown to have excellent numerical properties. The Feynman path integral formulation is shown to lead to practical and implementable algorithms. In particular, the solution of the Yau partial differential equation is reduced to one of function computation and integration.
PACS Codes:02.50.Ey, 02.50.Fz, 05.10.Gg, 89.90.+n, 93E10, 93E11
1 Introduction
1.1 Motivation
The fundamental dynamical laws of physics, both classical and quantum mechanical, are described in terms of variables continuous in time. The continuous nature of the dynamical variables has been verified at all length scales probed so far, even though the relevant dynamical variables, and the fundamental laws of physics, are very different in the microscopic and macroscopic realms. In practical situations, one often deals with macroscopic objects whose state is phenomenologically well-described by classical deterministic laws modified by external disturbances that can be modelled as random noise, or Langevin equations. Even when there is no underlying fundamental dynamical law, the Langevin equation provides an effective description of the state variables in many applications. It is therefore natural to consider the problem of the evolution of a state of a signal of interest described by a Langevin equation called the state process.
When the state model noise is Gaussian (or more generally multiplicatively Gaussian) the state process is a Markov process. Since the process is stochastic, the state process is completely characterized by a probability density function. The Fokker-Planck-Kolmogorov forward equation (FPKfe) describes the evolution of this probability density function (or equivalently, the transition probability density function) and is the complete solution of the state evolution problem.
However, in many applications the signal, or state variables, cannot be directly observed. Instead, what is measured is a nonlinearly related stochastic process called the measurement process. The measurement process can often be modelled by yet another continuous stochastic dynamical system called the measurement model. In other words, the observations, or measurements, are discrete-time samples drawn from a different Langevin equation called the measurement process.
The conditional probability density function of the state variables, given the observations, is the complete solution of the filtering problem. This is because it contains all the probabilistic information about the state process that is in the measurements and in the initial condition [1]. This is the Bayesian approach, i.e., the a priori initial data about the signal process contained in the initial probability distribution of the state is incorporated into the solution. Given the conditional probability density, optimality may be defined under various criteria. Usually, the conditional mean, which is the least mean-squares estimate, is studied due to its richness in results and mathematical elegance. The solution of the optimal nonlinear filtering problem is termed universal if the initial distribution can be arbitrary.
1.2 Fundamental Stochastic Filtering Results
When the state and measurement processes are linear, the linear filtering problem was solved by Kalman and Bucy [2, 3]. The celebrated Kalman filter has been successfully applied to a large number of problems in many different areas.
Nevertheless, the Kalman filter suffers from some major limitations. The Kalman filter is not optimal even for the linear filtering case if the initial distribution is not Gaussian. It may still be optimal for a linear system under certain criteria, such as minimum mean square error, but not under a general criterion. In other words, the Kalman filter is not a universal optimal filter, even when the filtering problem is linear. Secondly, the Kalman filter cannot be an optimal solution for the general nonlinear filtering problem since it assumes that the signal and measurement models are linear. The extended Kalman filter (EKF), obtained by applying the Kalman filter to a linearized model, cannot be a reliable solution, in general. Thirdly, even when the EKF estimates the state well in some cases, it gives no reliable indication of the accuracy of the state estimate, i.e., the conditional variance is unreliable. Finally, the Kalman filter assumes that the conditional probability distribution is Gaussian, which is a very restrictive assumption; for instance, it rules out the possibility of a multimodal conditional probability distribution.
The continuous-continuous nonlinear filtering problem (i.e., continuous-time state and measurement stochastic processes) was studied in [4–6] and [7]. This led to a stochastic differential equation, called the Kushner equation, for the conditional probability density in the continuous-continuous filtering problem. It was noted in [8, 9], and [10] that the stochastic differential equation satisfied by the unnormalized conditional probability density, called the Duncan-Mortensen-Zakai (DMZ) equation, is linear and hence considerably simpler than the Kushner equation. The robust DMZ equation, a partial differential equation (PDE) that follows from the DMZ equation via a gauge transformation, was derived in [11] and [12].
A disadvantage of the robust DMZ equation is that its coefficients depend on the measurements. Thus, one does not know the PDE to solve prior to the measurements. As a result, real-time solution is impossible. A fundamental advance in tackling the general nonlinear filtering problem was made by S.-T. Yau and Stephen Yau. In [13], it was proved that the robust DMZ equation is equivalent to a partial differential equation that is independent of the measurements, which is referred to as the Yau Equation (YYe) in this paper. Specifically, the measurements enter only as the initial condition at each measurement step. Thus, no online solution of a PDE is needed; all PDE computations can be done offline.
However, numerical solution of partial differential equations presents several challenges. A naïve discretization may not be convergent, i.e., the approximation error may not vanish as the grid size is reduced. Alternatively, when the discretization spacing is decreased, it may tend to a different equation, i.e., be inconsistent. Furthermore, the numerical method may be unstable. Finally, since the solution of the YYe is a probability density, it must be positive, which may not be guaranteed by the discretization.
A different approach to solving the PDE was taken in [14] and [15]. An explicit expression for the fundamental solution of the YYe as an ordinary integral was derived. It was shown that the formal solution to the YYe may be written down as an ordinary, but somewhat complicated, multidimensional integral, with an infinite series as the integrand. In addition, an estimate of the time needed for the solution to converge to the true solution was presented.
1.3 Outline of the Paper
In this paper, the (Euclidean) Feynman path integral (FPI) formulation is employed to tackle the continuous-continuous nonlinear filtering problem. Specifically, phrasing the stochastic filtering problem in a language common in physics, the solution of the stochastic filtering problem is presented. In particular, no other result in filtering theory (such as the DMZ equation, the robust DMZ equation, etc.) is used. The path integral formulation leads to a path integral formula for the transition probability density for the general additive noise case. A corollary of the FPI formalism is the path integral formula for the fundamental solution of the YYe and the Yau algorithm, a fundamental result of nonlinear filtering theory. It is noted that this paper provides a detailed derivation of results that were used in [16].
The following point needs to be emphasized to readers familiar with the discussion of standard filtering theory: the FPI is different from the Feynman-Kac path integral. In filtering theory literature, it is the Feynman-Kac formalism that is often used. The Feynman-Kac formulation is a rigorous formulation and has led to several rigorous results in filtering theory. However, in spite of considerable effort it has not been proven to be directly useful in the development of reliable practical algorithms with desirable numerical properties. It also obscures the physics of the problem.
In contrast, it is shown that the FPI leads to formulas that are eminently suitable for numerical implementation. It also provides a simple and clear physical picture. Many path integral manipulations have no counterpart in the Feynman-Kac approach. Finally, the theoretical insights provided by the FPI are highly valuable, as evidenced by numerous examples in modern theoretical physics (see, for instance, [17]), and shall be employed in subsequent papers.
The outline of this paper is as follows. In the following section, the filtering problem is reformulated in a language common in physics. In Section 3, the path integral formula for the transition probability density is derived for the general additive noise case. The Yau algorithm is then derived in the following section. In Sections 5 and 6 some conceptual remarks and numerical examples are presented. The conclusions are presented in Section 7. In Appendix 1, aspects of continuouscontinuous filtering are reviewed.
For more details on the path integral methods, see any modern text on quantum field theory, such as [17], and especially [18] which discusses application of FPI to the study of stochastic processes.
2 The continuous filtering problem: a physical reformulation
In this section, the filtering problem is stated in a language commonly used in theoretical physics.
2.1 Langevin Equation
Consider an ensemble of systems with state variables described by the Langevin equation:
Here, x(t) ∈ ℝ^{n}, the drift f(x(t), t) ∈ ℝ^{n}, the diffusion vielbein e(x(t), t) ∈ ℝ^{n×p}, and ν(t) is δ-correlated with covariance matrix Q(t) ∈ ℝ^{p×p}. When the diffusion vielbein is independent of the state x(t), the noise is termed additive. It is the additive noise case that is studied here, since it enables the use of functional methods common in quantum field theory.
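To make the state model concrete, a minimal sketch of simulating such an additive-noise Langevin equation with the Euler-Maruyama scheme is shown below for a scalar state. The function name and the linear drift f(x, t) = -x are illustrative choices, not part of the paper's model.

```python
import math
import random

def euler_maruyama(f, e, x0, t0, t1, steps, rng=None):
    """Simulate the scalar additive-noise Langevin equation
    dx = f(x, t) dt + e dW with the Euler-Maruyama scheme."""
    rng = rng or random.Random(0)
    dt = (t1 - t0) / steps
    x, t = x0, t0
    path = [x]
    for _ in range(steps):
        # additive noise: the diffusion vielbein e does not depend on x
        x = x + f(x, t) * dt + e * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        path.append(x)
    return path

# Illustration (hypothetical model): linear drift f(x, t) = -x, unit noise.
path = euler_maruyama(lambda x, t: -x, 1.0, x0=1.0, t0=0.0, t1=1.0, steps=100)
```

Repeating such runs over many seeds produces the ensemble over which the averages ⟨·⟩ below are taken.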
Due to the random noise, each system leads to a different vector x(t) that depends on time. Although only one realization of the stochastic process is ever observed, it is meaningful to speak about an ensemble average. For fixed times t = t_{i}, i = 1, 2, ..., r, the probability density of finding the random vector x(t) in the (n-dimensional) interval x_{i} ≤ x(t_{i}) ≤ x_{i} + dx_{i} (1 ≤ i ≤ r) is given by
where x_{i} is an n-dimensional column vector and ⟨·⟩ denotes averaging with respect to the signal model noise ν(t). The complete information on the random vector x(t) is contained in the infinite hierarchy of such probability densities. The quantity of interest here is the conditional probability density
Now the process described by the Langevin equation with δ-correlated Langevin force is a Markov process, i.e., the conditional probability density depends only on the value at the immediately preceding time:
p(t_{n}, x_{n} | t_{n−1}, x_{n−1}; ...; t_{1}, x_{1}) = p(t_{n}, x_{n} | t_{n−1}, x_{n−1}).
It can be shown that the transition probability density satisfies the Fokker-Planck-Kolmogorov forward equation (FPKfe) (see, for instance, [19])
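The displayed equation does not survive in this version of the text. For reference, the standard textbook form of the FPKfe, in the notation of this section, is

```latex
\frac{\partial p}{\partial t}(t, x)
  = -\sum_{i=1}^{n} \frac{\partial}{\partial x_i}\!\left[ f_i(x, t)\, p(t, x) \right]
    + \frac{1}{2} \sum_{i,j=1}^{n} \frac{\partial^2}{\partial x_i \partial x_j}
      \!\left[ \left( e Q e^{T} \right)_{ij}(t)\, p(t, x) \right]
```

This is the well-known general form, stated here for readability rather than reconstructed from the missing display.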
Finally, the Gaussian noise process ν(t) can be represented by the following path integral measure
where ν(t) is a real vector for each t. This leads to a configuration space FPI formula for the fundamental solution of the FPKfe (see, for instance, [18]). The path integral formula for the fundamental solution of the FPKfe is applied to the continuous-discrete filtering problem with additive (state model) noise in [20, 21].
2.2 The Continuous-Continuous Filtering Problem
Similarly, the continuouscontinuous model can be written as follows:
Here, y(t) ∈ ℝ^{m}, the measurement model drift h(x(t), t) ∈ ℝ^{m}, the diffusion vielbein n(x(t), t) ∈ ℝ^{m×q}, and μ(t) is δ-correlated with covariance matrix W(t) ∈ ℝ^{q×q}.
Thus, in continuous-continuous filtering, the continuous-time measurement stochastic process needs to be incorporated as well. Consider another ensemble of systems with state variables whose time evolution is governed by the measurement process. The measurement noise means that each system in the ensemble leads to a different time-dependent vector y(t). Thus, even though only one realization of the measurement stochastic process is observed, it is still meaningful to talk about an ensemble average of the measurement process (in addition to one over the state process). Thus, the quantity of interest in continuous-continuous filtering is the conditional probability density
where ⟨·⟩_{μ} denotes averaging with respect to the measurement noise μ(t). A crucial difference between the state and measurement stochastic processes is that, unlike the state, the measurement samples are known.
Note that the conditional transition probability density is the complete solution to the continuous-continuous filtering problem, since if the initial distribution is u(t_{i−1}, x' | Y_{i−1}), where Y_{i−1} is the set of all measurements prior to t_{i−1}, then the evolved conditional probability distribution is
u(t, x | Y_{i}) = ∫ P(t, x; y_{i} | t_{i−1}, x'; y_{i−1}) u(t_{i−1}, x' | Y_{i−1}) d^{n}x'.
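On a grid, this evolution step is just a discretized integral: a matrix-vector product followed by renormalization. The following is a minimal one-dimensional sketch; the function and variable names are illustrative, not from the paper.

```python
def propagate(P, u_prev, dx):
    """One step of u_i(x) = integral of P(x | x') u_{i-1}(x') dx'
    on a uniform 1-D grid.  P[j][k] approximates the transition
    density P(x_j | x'_k)."""
    n = len(u_prev)
    u_new = [dx * sum(P[j][k] * u_prev[k] for k in range(n)) for j in range(n)]
    total = sum(u_new) * dx   # the density is only known up to normalization
    return [v / total for v in u_new]

# Toy check: a delta-like kernel (identity / dx) leaves the density unchanged.
dx = 0.5
u = [0.4, 0.8, 0.8]          # normalized: sum(u) * dx == 1
P = [[(1.0 / dx if j == k else 0.0) for k in range(3)] for j in range(3)]
u_next = propagate(P, u, dx)
```

The same structure, with P computed from the path integral formulas below, is what makes the approach implementable.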
In the following sections, the path integral formulas for P(t_{2}, x_{2}; y_{2} | t_{1}, x_{1}; y_{1}) are derived. In Section 4 it shall be shown that this leads to the Yau algorithm. It shall be shown that the YYe plays the same role here that the FPKfe does in continuous-discrete filtering.
3 Path integral formula for the conditional transition probability density
The path integral formula for the conditional transition probability density shall now be derived using functional methods. Note that implicit in the use of these formal functional methods is the use of the Feynman convention, or symmetric discretization for the drift.
As noted in Section 2, the transition probability density is computed by averaging over the signal and measurement ensembles, i.e.,
From the assumptions of the signal and measurement noise processes, it is evident that
The Jacobian J follows from the functional derivative of the Langevin equation:
Hence,
where $\mathcal{N}$ is an irrelevant constant, or,
The Jacobian J_{y} is trivial (as the measurement model drift is y-independent) and can be absorbed into the measure.
It is noteworthy that J is not trivial. In quantum field theory, nontrivial Jacobians usually imply that there is an anomaly, as in the case of chiral anomalies in gauge theories. However, there is no reason for an anomaly here; after all, this is not even a quantum field theoretical system. The puzzle is resolved by noting that path integral anomalies in quantum field theory arise from the "multiplicative" part in the change of variables (i.e., ψ (x) → U(x) ψ (x)). In contrast, the nontrivial Jacobian term here arises from the additive term; the multiplicative term does not contribute to the Jacobian, in accordance with expectations.
Thus, so far,
Using the Fourier integral version of the delta function
Integrating over ν(t, t_{0}), and μ(t, t_{0}) leads to
Integrating over λ(t, t_{0}) and κ(t, t_{0}), it is clear that
where the action is given by
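The displayed action does not survive in this version of the text. Under the additive-noise assumptions and the symmetric (Feynman) discretization noted above, the standard Onsager-Machlup form of such an action, with the ½∇·f term supplied by the nontrivial Jacobian J discussed earlier, would read (an assumed reconstruction, not the original display):

```latex
S(t_0, t) = \int_{t_0}^{t} ds\, \Big\{
    \tfrac{1}{2}\left[\dot{x} - f(x, s)\right]^{T} \left(eQe^{T}\right)^{-1} \left[\dot{x} - f(x, s)\right]
  + \tfrac{1}{2}\, \nabla \cdot f(x, s)
  + \tfrac{1}{2}\left[\dot{y} - h(x, s)\right]^{T} \left(nWn^{T}\right)^{-1} \left[\dot{y} - h(x, s)\right]
  \Big\}
```

The measurement term carries no divergence contribution because, as noted above, the Jacobian J_y is trivial.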
4 Derivation of the Yau algorithm
Observe that the path integral formula derived in the previous section is over both the state and measurement variables. In this section, we shall show that in some cases it is possible to reduce the result to a path integral over the state variables only. This has the advantage of being precomputable since it is independent of measurements. It shall be shown to lead to the Yau algorithm.
4.1 Sampled Continuous Measurements
Suppose measurements are available at times t_{i−1} and t_{i}, and that the conditional transition probability density at time t_{i} is desired. Further, assume that there are no measurements available between t_{i−1} and t_{i}. The general path integral formula (Equation 19) cannot be simplified unless some additional assumptions are made.
First, consider the case where e(t)Q(t)e^{T}(t) and n(t)W(t)n^{T}(t) are ħ_{ν}I_{n×n} and ħ_{μ}I_{m×m}, respectively, and h(x(t)) is not explicitly time dependent. Then, the contribution to the action due to the measurement process is
The quantity of interest is the state and we seek to integrate over the measurement variables.
Now the first term is independent of the state variables. The second term can be added to the action term that is independent of y(t). It remains to investigate the contribution of the third term:
There are two issues in this evaluation. Firstly, this can be evaluated via the usual integration by parts, but it is important to note that it is valid only for symmetric discretization. Secondly, since the measurements are sampled, there are two possible interpretations when t_{i} − t_{i−1} = ϵ, where ϵ is an infinitesimal:
This leads to two possibilities:
Finally, the residual Gaussian path integral over y(t) can be ignored as it is independent of the state. Therefore, the path integral formula simplifies to
where
and
Secondly, if the conditions above are relaxed to allow explicit time dependence of the drift term in the state model, i.e., f(x(t), t), and C = n(t)W(t)n^{T}(t), then the path integral formula becomes
where
and the action is given by
Finally, consider the case when e(t)Q(t)e^{T}(t) is time-independent and invertible, but otherwise arbitrary, and C = n(t)W(t)n^{T}(t). Note that C is a constant, symmetric matrix and assumed to be invertible. The path integral formula is given by Equations 28 and 29, with the action S(t_{i−1}, t_{i}) given by
4.2 The Yau Algorithm
In Section 2 it was noted that if v_{i−1}(t_{i−1}, x_{i−1}) is the conditional probability density at time t_{i−1}, then the conditional probability density at time t_{i} is given by
v_{i}(t_{i}, x_{i}) = ∫ P(t_{i}, x_{i}; y_{i} | t_{i−1}, x_{i−1}; y_{i−1}) v_{i−1}(t_{i−1}, x_{i−1}) d^{n}x_{i−1}.
For simplicity, let us first consider the case where e(t)Q(t)e^{T}(t) and n(t)W(t)n^{T}(t) are ħ_{ν}I_{n×n} and ħ_{μ}I_{m×m}, respectively, and h(x(t)) is not explicitly time dependent. Then
When t_{i} − t_{i−1} is small
and Equation 33 becomes
Following the method originally used by Feynman, the partial differential equation satisfied by $\tilde{P}$(t, x | t_{0}, x_{0}) may be derived (see, for instance, [20]). In particular, $\tilde{P}$(t_{i}, x_{i} | t_{i−1}, x_{i−1}) is the fundamental solution of the Yau Equation (YYe):
This implies that v_{ i }(t_{ i }, x) is the solution at t_{ i }of
This is precisely the Yau algorithm.
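The predict/correct structure of the algorithm can be sketched on a grid as follows. This is a minimal one-dimensional reading of the steps above, under explicit assumptions: the correction factor exp(h(x)·Δy/ħ_μ) is one plausible form of the measurement-dependent initial condition for this simplest case, and the measurement-independent YYe evolution is represented by a precomputed transition matrix. All names are illustrative.

```python
import math

def yau_step(P_tilde, u_prev, grid, dx, h, dy, hbar_mu):
    """One measurement step of the Yau algorithm (sketch).

    Correction: the new measurement enters only through the initial
    condition, here taken as multiplication by exp(h(x) * dy / hbar_mu)
    with dy = y_i - y_{i-1} (an assumed form for the simplest case).
    Prediction: evolve with the precomputed, measurement-independent
    YYe fundamental solution, represented on the grid by P_tilde.
    """
    corrected = [u * math.exp(h(x) * dy / hbar_mu)
                 for u, x in zip(u_prev, grid)]
    n = len(grid)
    return [dx * sum(P_tilde[j][k] * corrected[k] for k in range(n))
            for j in range(n)]

# Sanity check: zero measurement drift and a delta-like kernel leave u fixed.
dx = 1.0
grid = [-1.0, 0.0, 1.0]
P_id = [[(1.0 / dx if j == k else 0.0) for k in range(3)] for j in range(3)]
u = [0.2, 0.6, 0.2]
u_next = yau_step(P_id, u, grid, dx, h=lambda x: 0.0, dy=0.1, hbar_mu=1.0)
```

The key point the sketch illustrates is that P_tilde is independent of the measurements and can therefore be computed entirely offline.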
Likewise, it is straightforward to see that for the general case studied in Section 4.1, the Yau algorithm is extended to this case as follows: v_{ i }(t_{ i }, x_{ i }) is the solution at t_{ i }of the PDE
This is a straightforward generalization of the YYe to the state model with explicit time dependence.
5 Some remarks
Following are some remarks on the FPI solution of the filtering problem:

Note that the FPI formulation has given a complete and selfcontained solution of the continuouscontinuous filtering problem. For instance, the DMZ equation or its variants were not used as an input. On the contrary, the FPI formula naturally led us to the YYe and the Yau algorithm.

Note also that the DMZ equation (and variants) cannot be solved reliably in real-time, as the various approximations assume that the drift is bounded and require the solution of a stochastic PDE that depends on the measurements. In contrast, the general FPI formula presented in Section 3 can potentially be used for an efficient and reliable real-time solution. This is because the measurement time interval is usually small, so that the simplest approximation of the path integral (termed the Dirac-Feynman approximation, see Section 6) is adequate; in quantum mechanics and quantum field theory, by contrast, one is interested in the large-time case.

Unlike the Yau algorithm, the PI formula is valid even for the general time-dependent case with a large measurement time interval. In other words, one can compute the conditional transition probability density using the conventional methods (see, for instance, [22]).

The YYe can be viewed as a local expression of the path integral formula. That is, a path integral is a global object, while the PDE is a local one.

In this paper, the signal noise and measurement noise are assumed to be additive. This is a stronger condition than the orthogonality of the diffusion vielbein assumed in [13]. The solution for the general case has been presented in [16].

It is noted that other filtering algorithms can also be implemented using the FPI formulas with obvious changes. Usually, they require the solution of the FPKfe, which corresponds to the case h(x) = 0. Note that the FPKfe arises naturally in the solution of the continuous-discrete filtering problem [20]. Of course, as noted in Appendix 1, this would be unnecessary since the Yau algorithm has the best numerical properties. What is interesting to note is that the path integral formula naturally leads to the algorithm with the best numerical properties.

The Yau algorithm also has the form of a "prediction" and "correction" part, as in continuous-discrete filtering [20]. Specifically, the prediction part is the solution of the YYe, whereas the correction part is the multiplicative factor in the initial condition. However, it is crucial to note that the prediction part contains the measurement model drift. In contrast, the measurement model plays no role in the prediction part in continuous-discrete filtering.
6 Examples
6.1 The Dirac-Feynman Approximation
From the discussion in the previous sections, it is evident that computing the path integral requires computing the Lagrangian, L, defined by S = ∫ dt L(x, $\dot{x}$, t). The simplest (and crudest) approximation (for the case e(t)Q(t)e^{T}(t) = ħ_{ν}I_{n×n} and n(t)W(t)n^{T}(t) = ħ_{μ}I_{m×m}) is to use the following approximation that is valid for an infinitesimal time step:
This is the Dirac-Feynman (DF) approximation. The algorithm that follows from applying the DF approximation to the Yau algorithm is the Dirac-Feynman-Yau (DFY) algorithm.
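A minimal sketch of building a DF-approximated transition matrix on a one-dimensional grid follows. The Lagrangian written in the comment is an assumed form consistent with the simplest case in the text (isotropic noise, ½∇·f from the Jacobian, and the ½h²/ħ_μ measurement term); the midpoint evaluation reflects the symmetric discretization the text requires. All names are illustrative.

```python
import math

def df_transition(grid, dt, f, df_dx, h, hbar_nu, hbar_mu):
    """Dirac-Feynman approximation of the transition density on a 1-D
    grid (sketch): P[j][k] ~ exp(-dt * L(x_mid, x_dot)) with the
    symmetric-discretization midpoint x_mid = (x_j + x_k) / 2 and
    velocity x_dot = (x_j - x_k) / dt.  The Lagrangian used here,
        L = (x_dot - f)^2 / (2 hbar_nu) + f'/2 + h^2 / (2 hbar_mu),
    is an assumed form for the simplest case discussed in the text."""
    P = []
    for xj in grid:
        row = []
        for xk in grid:
            xm = 0.5 * (xj + xk)
            xdot = (xj - xk) / dt
            L = ((xdot - f(xm)) ** 2 / (2.0 * hbar_nu)
                 + 0.5 * df_dx(xm)
                 + h(xm) ** 2 / (2.0 * hbar_mu))
            row.append(math.exp(-dt * L))
        P.append(row)
    return P

# Trivial model (f = h = 0): the matrix is peaked on the diagonal.
P = df_transition([-1.0, 0.0, 1.0], 0.1,
                  lambda x: 0.0, lambda x: 0.0, lambda x: 0.0, 1.0, 1.0)
```

Note that each matrix element is a single function evaluation, which is why the text describes the method as reducing the PDE solution to function computation and integration.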
6.2 Example 1
As an example, consider the following continuous-continuous filtering model that has been studied in [23] (and [24])
The Lagrangian for this model is given by
The parameters chosen were as in the reference. Specifically, a = 1.2, b = 3, σ_{x} = 0.3, spatial grid spacing Δx = 0.01 and extent [−1.5, 1.5], temporal grid spacing Δt = 0.01 with 200 time steps.
In the first set, the measurement noise was set as σ_{y} = 0.05. Figure 1 shows a sample of state and measurement processes. The conditional mean, computed using the DFY algorithm, is plotted in Figure 2. Since there was negligible difference in performance between the pre-measurement and post-measurement forms, only the former was employed. Also plotted are 2σ limits. The conditional mean and variance were computed from the computed conditional probability density. The fact that the target was mostly within the 2σ limits of the conditional mean shows that the tracking performance of this algorithm is good.
In the next set, the measurement noise was set as σ_{ y }= 0.0125. For this "small noise" case, most of the algorithms studied in [23] failed. In Figure 3 is plotted a sample of signal and measurement processes. The conditional mean and the 2σ limits computed using the DFY algorithm for this instance is plotted in Figure 4.
It is seen that good tracking performance is maintained for the small noise case even when the crudest path integral approximation is used in the Yau algorithm.
6.3 Example 2: Cubic Sensor Problem
The cubic sensor problem is defined by the following signal and measurement model:
It is a well-studied nonlinear filtering problem because it is one of the simplest examples of a filtering problem that is not finite-dimensional (see, for instance, [25] and references therein).
For the simulation of the cubic sensor problem, the following model parameters were chosen (as in [25])
The EKF is a suboptimal filter which approximates the conditional probability by a Gaussian. For the cubic sensor problem the EKF is given by
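The EKF equations referenced here do not survive in this version of the text. A minimal Euler-discretized sketch of a continuous-time EKF for this model is given below; the scalar form, names, and discretization are assumptions, with the linearization H = dh/dx = 3x² being the defining EKF step for the cubic sensor.

```python
def ekf_cubic(y_increments, dt, q, r, x0=0.0, p0=1.0):
    """Continuous-time EKF for dx = dv, dy = x^3 dt + dw (sketch),
    Euler-discretized; q and r are the state and measurement noise
    intensities (assumed scalar).  H = 3 x^2 is the linearization."""
    x, p = x0, p0
    estimates = []
    for dy in y_increments:
        hx = x ** 3
        H = 3.0 * x ** 2
        gain = p * H / r
        x = x + gain * (dy - hx * dt)          # measurement update
        p = p + (q - (H * p) ** 2 / r) * dt    # Riccati variance update
        estimates.append(x)
    return estimates

# With the estimate started at zero, H and hence the gain vanish
# identically, so the EKF estimate never moves.
est_zero = ekf_cubic([0.1] * 10, dt=0.01, q=1.0, r=1.0, x0=0.0)
est_mov = ekf_cubic([0.1] * 10, dt=0.01, q=1.0, r=1.0, x0=0.5)
```

The frozen estimate for x̂₀ = 0 is a concrete instance of the EKF failure discussed below: the first two moments carry no information here.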
The Yau Lagrangian for the cubic sensor problem is
The DF approximation of the Yau Lagrangian is
Figure 5 shows the performance of the DFY algorithm. Specifically, the conditional mean, along with the two standard deviation bounds computed from the conditional probability density, is plotted. Observe that the EKF fails completely in this case. As noted in [25], this is because the EKF considers only the first two moments (which vanish here); it is the fourth central moment that plays a crucial role in this example (for the chosen initial condition). Also, note that the state is within the 2σ region for most of the time. This shows that, unlike the EKF, the path integral filter has a reliable error analysis.
After an initial period, the performance of the path integral approximation is seen to be excellent and comparable to that obtained using PDE methods in [25]. However, the crucial point is that the path integral solution is equally simple for the higher-dimensional case with more complicated models (e.g., colored noise), whereas a PDE solution would be significantly harder, if not impossible, to implement in real-time.
6.4 Comments
It is remarkable to note that very good performance is obtained using the crudest approximation. Of course, when the time step is large, it will fail (unlike the path integral formula itself). However, the practical situation is that the time steps are often small. Therefore, the DF approximation may be adequate in most cases.
The implementation of this method is trivial. The contrast with other methods, such as those studied in [23], is striking. For instance, many of those methods require offline computation of complicated partial differential equations with uncertain numerical properties.
The results obtained in this paper used a single time step. More accuracy can be obtained quite simply using multiple time steps. Also, the computation of the transition probability density can be done offline, but the online computation was not an onerous burden for the examples studied here.
It is important to note also that the transition probability density matrix (or tensor in the general case) is sparse (sparsity determined by ħ_{ μ }, ħ_{ ν }). This is of great importance in higher dimensional filtering problems because

Sparse matrix storage requirements are small,

The relevant transition probability density matrix elements can be computed based on the conditional density in the previous step, and

Sparse matrix computations are very fast.
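The three points above can be made concrete with a small sketch: since the DF matrix elements decay exponentially in the squared grid distance, thresholding leaves only a band of significant entries. The representation below (per-row dictionaries) is an illustrative stand-in for a proper sparse matrix library.

```python
def sparsify(P, tol=1e-12):
    """Keep only matrix elements above tol; returns per-row {col: value}.
    DF transition matrix elements decay like exp(-(x_j - x_k)^2 / ...),
    so most entries are negligible and storage drops accordingly."""
    return [{k: v for k, v in enumerate(row) if v > tol} for row in P]

def sparse_apply(P_sparse, u, dx):
    """u_new[j] = dx * sum_k P[j][k] * u[k], skipping the dropped zeros."""
    return [dx * sum(v * u[k] for k, v in row.items()) for row in P_sparse]

# Toy dense matrix with one negligible and one exactly-zero entry.
P = [[1.0, 1e-20],
     [0.0, 2.0]]
S = sparsify(P)
u_out = sparse_apply(S, [1.0, 1.0], 1.0)
```

In higher dimensions the same thresholding applies to the transition tensor, and only entries near the support of the previous conditional density need ever be computed.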
Note that unlike some other approximation techniques studied in [23], the conditional probability density is obviously always positive (provided, of course, that the initial distribution is positive).
Finally, a comment on measures of performance. Note that a good tracker is one that furnishes not only a good estimate of a state but also provides a reliable measure of the quality of the estimate. For the linear, Gaussian case, the conditional mean and the variance are Gaussian and the Kalman estimates are optimal and provide a complete description. However, for the general nonlinear case, such a concise description is not possible. For instance, the conditional probability density may be highly skewed. It may be multimodal, in which case the conditional mean is not a very meaningful quantity. A more general measure is to indicate domains of "significant" probability mass; a good filtering solution is then one that guarantees that the state is in the region of significant probability mass with a very high confidence. For the purposes of the paper, the conditional variance was chosen.
7 Conclusion
In this paper, the formal path integral solution to the continuous-continuous nonlinear filtering problem has been presented. The solution is universal, i.e., the initial distribution may be arbitrary. Since the path integral measure is manifestly positive, positivity is maintained if the initial distribution is positive.
A path integral formulation has several advantages. It is well known that Feynman path integrals have led to theoretical insights in other areas including quantum mechanics, quantum field theory and even pure mathematics. It is demonstrated in this paper that it is possible to express the fundamental solution of the YYe in terms of Feynman path integrals. Finally, Feynman path integrals are very suitable for numerical implementation. Practical path integral filtering techniques, especially for solving large dimensional problems, will be presented in subsequent papers.
Appendix 1: Continuous-continuous filtering and the Yau equation
In this section, the main results of (continuous-continuous) nonlinear filtering theory are summarized. For the general case (i.e., not the finite-dimensional filter case) and from a practical point of view, the most important results are the YYe and the Yau algorithm.
A.1 The Continuous-Continuous Model
The signal and observation model in continuous-continuous filtering is the following:
Here x, v, y, and w are ℝ^{n}-, ℝ^{p}-, ℝ^{m}-, and ℝ^{q}-valued stochastic processes, respectively, and e(x(t), t) ∈ ℝ^{n×p} and n(t) ∈ ℝ^{m×q}. These are defined in the Itô sense. The Brownian processes v and w are assumed to be independent with p × p and q × q covariance matrices Q(t) and W(t), respectively. We denote n(t)W(t)n^{T}(t) by R(t), an m × m matrix. Also, f is referred to as the drift, e as the diffusion vielbein, and eQe^{T} as the diffusion matrix.
In this section, some of the relevant work on continuous-continuous filtering is summarized. Hence, it is assumed that n = p, m = q, f and h are vector-valued C^{∞} smooth functions, e(x, t) is an orthogonal matrix-valued C^{∞} smooth function, Q(t) is the n × n identity matrix, and n(t) and R(t) are m × m identity matrices. No explicit time dependence is assumed in the model.
A.2 The DMZ Stochastic Differential Equation
The unnormalized conditional probability density, σ (t, x) of the state given the observations {Y (s): 0 ≤ s ≤ t} satisfies the DMZ equation:
Here
where ${\mathcal{L}}_{i}$ is the zero-degree differential operator of multiplication by h_{i}(x), i = 1, ..., m, σ_{0} is the probability density at the initial time t_{0}, and
The DMZ equation is to be interpreted in the Stratonovich sense. Note that
Hence,
and
A.3 The Robust DMZ Partial Differential Equation
Following [11] and [12], introduce a new unnormalized density
Under this transformation, the DMZ SDE is transformed into the following time-varying PDE
This is called the robust DMZ equation. Here Δ is the Laplacian. The solution of a PDE when the initial condition is a delta function is called the fundamental solution.
A.4 The Yau Equation
Recently, it was proved that the real-time solution of the general nonlinear filtering problem can be obtained reliably [13, 26]. Let $\mathcal{P}$ = {τ_{0} < τ_{1} < ⋯ < τ_{k} = τ} be a partition of the time interval [τ_{0}, τ], and let the norm of the partition ${\mathcal{P}}_{k}$ be defined as |${\mathcal{P}}_{k}$| = sup_{1≤i≤k}{τ_{i} − τ_{i−1}}. If u_{l}(t, x) satisfies the equation
in the time interval τ_{l−1} ≤ t ≤ τ_{l}, then the function ${\tilde{u}}_{l}$(t, x) defined as
satisfies the parabolic partial differential equation
in the same time interval. The converse of the statement is also true. In [27], it was also noted that it is sufficient to use the previous observation, i.e., u_{ l }(t, x) satisfies Equation 57 if and only if ${\tilde{u}}_{l}$(t, x) defined as
satisfies Equation 59 in the time interval τ_{l−1} ≤ t ≤ τ_{l}. We refer to Equations 58 (60) and Equation 59 as the post-measurement (pre-measurement) forms of the YYe.
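The displayed definitions (Equations 58 and 60) do not survive in this version of the text. In the standard form of [13], the transformation is the exponential gauge factor evaluated at the frozen measurement; the post-measurement form would read (stated as an assumption for readability, with the pre-measurement form replacing y(τ_{l}) by y(τ_{l−1})):

```latex
\tilde{u}_l(t, x) \;=\; \exp\!\left( h^{T}(x)\, y(\tau_l) \right) u_l(t, x)
```

The exponential factor re-absorbs the frozen measurement, which is what renders the resulting parabolic PDE measurement-independent.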
Observe that Equation 57 is obtained by setting y(t) to y(τ_{ l }) in Equation 56. It was proved that the solution of Equation 57 approximates very well the solution of the robust DMZ equation (Equation 56), i.e., it converges to u(t, x) in both pointwise sense and L^{2} sense. Thus, solving Equation 56 is equivalent to solving Equation 59. Finally,
Thus, the solution of the YYe (as ${\mathcal{P}}_{k}$ → 0) is the desired unnormalized conditional probability density.
Observe that when h(x) = 0, it is simply the FPKfe. However, unlike the FPKfe, the YYe does not satisfy the current conservation condition, i.e., the right-hand side is not a total divergence. This means that it does not conserve probability. This fundamental difference is traced to the fact that the FPKfe evolves the normalized probability density (and preserves the normalization), while the YYe evolves the unnormalized conditional probability density. Therefore, this distinction is made between the two equations in this paper.
A.5 The Yau Algorithm
We may summarize the real-time algorithm of Yau, based on both the pre- and post-measurement forms of the YYe, as follows. Suppose measurements are available at times
τ_{0} < τ_{1} < τ_{2} < ⋯ < τ_{k} = τ.
We seek the solution u_{i}(t, x) of the robust DMZ equation. Let the initial distribution be u(τ_{0}, x) = σ_{0}(x). Then, evolve the initial distribution to the first measurement instant, τ_{1}, using the YYe:
The solution of Equation 63 at time τ_{1} is ${\tilde{u}}_{1}$(τ_{1}, x). Note that u_{1}(τ_{1}, x) is given by
Next, solve the YYe to the next measurement instant τ_{2} with initial condition ${\tilde{u}}_{2}$(τ_{1}, x), i.e.,
to obtain ${\tilde{u}}_{2}$(τ_{2}, x). In fact, for i ≥ 2, u_{ i }(τ_{ i }, x) can be computed from ${\tilde{u}}_{i}$(τ_{ i }, x), where ${\tilde{u}}_{i}$(t, x) satisfies the equation