# Universal nonlinear filtering using Feynman path integrals II: the continuous-continuous model with additive noise

## Abstract

In this paper, the Feynman path integral formulation of the continuous-continuous filtering problem, a fundamental problem of applied science, is investigated for the case when the noise in the signal and measurement model is Gaussian and additive. It is shown that it leads to an independent and self-contained analysis and solution of the problem. A consequence of this analysis is the configuration space Feynman path integral formula for the conditional probability density that manifests the underlying physics of the problem. A corollary of the path integral formula is the Yau algorithm that has been shown to have excellent numerical properties. The Feynman path integral formulation is shown to lead to practical and implementable algorithms. In particular, the solution of the Yau partial differential equation is reduced to one of function computation and integration.

PACS codes: 02.50.Ey, 02.50.Fz, 05.10.Gg, 89.90.+n; MSC codes: 93E10, 93E11

## 1 Introduction

### 1.1 Motivation

The fundamental dynamical laws of physics, both classical and quantum mechanical, are described in terms of variables continuous in time. The continuous nature of the dynamical variables has been verified at all length scales probed so far, even though the relevant dynamical variables, and the fundamental laws of physics, are very different in the microscopic and macroscopic realms. In practical situations, one often deals with macroscopic objects whose state is phenomenologically well-described by classical deterministic laws modified by external disturbances that can be modelled as random noise, or Langevin equations. Even when there is no underlying fundamental dynamical law, the Langevin equation provides an effective description of the state variables in many applications. It is therefore natural to consider the problem of the evolution of a state of a signal of interest described by a Langevin equation called the state process.

When the state model noise is Gaussian (or, more generally, multiplicative Gaussian), the state process is a Markov process. Since the process is stochastic, the state process is completely characterized by a probability density function. The Fokker-Planck-Kolmogorov forward equation (FPKfe) describes the evolution of this probability density function (or, equivalently, the transition probability density function) and is the complete solution of the state evolution problem.

However, in many applications the signal, or state variables, cannot be directly observed. Instead, what is measured is a nonlinearly related stochastic process called the measurement process. The measurement process can often be modelled by yet another continuous stochastic dynamical system called the measurement model. In other words, the observations, or measurements, are discrete-time samples drawn from a different Langevin equation called the measurement process.

The conditional probability density function of the state variables, given the observations, is the complete solution of the filtering problem, because it contains all the probabilistic information about the state process that is in the measurements and in the initial condition. This is the Bayesian approach: the a priori information about the signal process contained in the initial probability distribution of the state is incorporated into the solution. Given the conditional probability density, optimality may be defined under various criteria. Usually, the conditional mean, which is the least mean-squares estimate, is studied due to its richness in results and mathematical elegance. The solution of the optimal nonlinear filtering problem is termed universal if the initial distribution can be arbitrary.

### 1.2 Fundamental Stochastic Filtering Results

When the state and measurement processes are linear, the linear filtering problem was solved by Kalman and Bucy [2, 3]. The celebrated Kalman filter has been successfully applied to a large number of problems in many different areas.

Nevertheless, the Kalman filter suffers from some major limitations. The Kalman filter is not optimal even for the linear filtering case if the initial distribution is not Gaussian. It may still be optimal for a linear system under certain criteria, such as minimum mean square error, but not a general criterion. In other words, the Kalman filter is not a universal optimal filter, even when the filtering problem is linear. Secondly, the Kalman filter cannot be an optimal solution for the general nonlinear filtering problem since it assumes that the signal and measurement models are linear. The extended Kalman filter (EKF), obtained by applying the Kalman filter to a linearized model, cannot be a reliable solution, in general. Thirdly, even when the EKF estimates the state well in some cases, it gives no reliable indication of the accuracy of the state estimate, i.e., the conditional variance is unreliable. Finally, the Kalman filter assumes that the conditional probability distribution is Gaussian, which is a very restrictive assumption; for instance, it rules out the possibility of a multi-modal conditional probability distribution.

The continuous-continuous nonlinear filtering problem (i.e., continuous-time state and measurement stochastic processes) was studied in  and . This led to a stochastic differential equation, called the Kushner equation, for the conditional probability density in the continuous-continuous filtering problem. It was noted in [8, 9], and  that the stochastic differential equation satisfied by the unnormalized conditional probability density, called the Duncan-Mortensen-Zakai (DMZ) equation, is linear and hence considerably simpler than the Kushner equation. The robust DMZ equation, a partial differential equation (PDE) that follows from the DMZ equation via a gauge transformation, was derived in  and .

A disadvantage of the robust DMZ equation is that its coefficients depend on the measurements. Thus, one does not know the PDE to solve prior to obtaining the measurements, so a real-time solution is impossible. A fundamental advance in tackling the general nonlinear filtering problem was made by S-T. Yau and Stephen Yau. In , it was proved that the robust DMZ equation is equivalent to a partial differential equation that is independent of the measurements, referred to in this paper as the Yau Equation (YYe). Specifically, the measurements enter only as the initial condition at each measurement step. Thus, no on-line solution of a PDE is needed; all PDE computations can be done off-line.

However, numerical solution of partial differential equations presents several challenges. A naïve discretization may not be convergent, i.e., the approximation error may not vanish as the grid size is reduced. Alternatively, when the discretization spacing is decreased, it may tend to a different equation, i.e., be inconsistent. Furthermore, the numerical method may be unstable. Finally, since the solution of the YYe is a probability density, it must be positive which may not be guaranteed by the discretization.
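These failure modes are easy to exhibit. The sketch below (Python; a 1-D heat equation is used as a hypothetical stand-in for the YYe, and all names are illustrative) shows a naive forward-time, centered-space discretization blowing up as soon as the stability bound $D\,\Delta t/\Delta x^2 \le 1/2$ is violated:

```python
import numpy as np

def explicit_step(p, dx, dt, D):
    """One forward-time, centered-space (FTCS) step of p_t = D p_xx on a
    periodic grid; the scheme is stable only when D * dt / dx**2 <= 0.5."""
    lap = (np.roll(p, 1) - 2.0 * p + np.roll(p, -1)) / dx**2
    return p + dt * D * lap

def evolve(r, n_steps=100, n=64):
    """Evolve a density-like profile with grid ratio r = D * dt / dx**2."""
    dx, D = 1.0, 1.0
    dt = r * dx**2 / D
    p = 1.0 + 0.01 * (-1.0) ** np.arange(n)  # seed the highest-frequency mode
    for _ in range(n_steps):
        p = explicit_step(p, dx, dt, D)
    return p
```

With r = 0.25 the perturbation decays, but with r = 1.0 the Nyquist mode is multiplied by |1 - 4r| = 3 every step: the computed "density" grows without bound and turns negative, illustrating both the instability and the loss of positivity mentioned above.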

A different approach to solving the PDE was taken in  and . An explicit expression for the fundamental solution of the YYe as an ordinary integral was derived. It was shown that the formal solution to the YYe may be written down as an ordinary, but somewhat complicated, multi-dimensional integral, with an infinite series as the integrand. In addition, an estimate of the time needed for the solution to converge to the true solution was presented.

### 1.3 Outline of the Paper

In this paper, the (Euclidean) Feynman path integral (FPI) formulation is employed to tackle the continuous-continuous nonlinear filtering problem. Specifically, phrasing the stochastic filtering problem in a language common in physics, the solution of the stochastic filtering problem is presented. In particular, no other result in filtering theory (such as the DMZ equation, the robust DMZ equation, etc.) is used. The path integral formulation leads to a path integral formula for the transition probability density for the general additive noise case. A corollary of the FPI formalism is the path integral formula for the fundamental solution of the YYe and the Yau algorithm – a fundamental result of nonlinear filtering theory. It is noted that this paper provides a detailed derivation of results that were used in .

The following point needs to be emphasized for readers familiar with standard discussions of filtering theory: the FPI is different from the Feynman-Kac path integral. In the filtering theory literature, it is the Feynman-Kac formalism that is often used. The Feynman-Kac formulation is rigorous and has led to several rigorous results in filtering theory. However, in spite of considerable effort, it has not been proven directly useful in the development of reliable practical algorithms with desirable numerical properties. It also obscures the physics of the problem.

In contrast, it is shown that the FPI leads to formulas that are eminently suitable for numerical implementation. It also provides a simple and clear physical picture. Many path integral manipulations have no counterpart in the Feynman-Kac approach. Finally, the theoretical insights provided by the FPI are highly valuable, as evidenced by numerous examples in modern theoretical physics (see, for instance, ), and shall be employed in subsequent papers.

The outline of this paper is as follows. In the following section, the filtering problem is reformulated in a language common in physics. In Section 3, the path integral formula for the transition probability density is derived for the general additive noise case. The Yau algorithm is then derived in the following section. In Sections 5 and 6 some conceptual remarks and numerical examples are presented. The conclusions are presented in Section 7. In Appendix 1, aspects of continuous-continuous filtering are reviewed.

For more details on the path integral methods, see any modern text on quantum field theory, such as , and especially  which discusses application of FPI to the study of stochastic processes.

## 2 The continuous filtering problem: a physical reformulation

In this section, the filtering problem is stated in a language commonly used in theoretical physics.

### 2.1 Langevin Equation

Consider an ensemble of systems with state variables described by the Langevin equation:

$\dot{x}(t) = f(x(t), t) + e(x(t), t)\,\nu(t), \qquad x(0) = x_0, \qquad Q(t) \in \mathbb{R}^{p \times p}.$
(1)

Here, $x(t) \in \mathbb{R}^n$, the drift $f(x(t), t) \in \mathbb{R}^n$, the diffusion vielbein $e(x(t), t) \in \mathbb{R}^{n \times p}$, and $\nu(t)$ is $\delta$-correlated noise with covariance matrix $Q(t) \in \mathbb{R}^{p \times p}$. When the diffusion vielbein is independent of the state $x(t)$, the noise is termed additive. It is the additive noise case that is studied here, since it enables the use of functional methods common in quantum field theory.
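For intuition, a sample path of Equation 1 with additive noise can be generated with a simple Euler-Maruyama step. The following is a minimal sketch (the function and parameter names are illustrative, not from the paper), with constant $e$ and $Q$:

```python
import numpy as np

def euler_maruyama(f, e, Q, x0, t0, t1, n_steps, rng):
    """Sample a path of  dx/dt = f(x, t) + e nu(t),  <nu(t) nu(t')^T> = Q delta(t - t'),
    in the additive-noise case (e and Q constant matrices)."""
    x = np.asarray(x0, dtype=float)
    dt = (t1 - t0) / n_steps
    L = e @ np.linalg.cholesky(Q)   # the increment L dW has covariance (e Q e^T) dt
    for k in range(n_steps):
        t = t0 + k * dt
        dW = rng.normal(size=L.shape[1]) * np.sqrt(dt)
        x = x + f(x, t) * dt + L @ dW
    return x
```

For the scalar Ornstein-Uhlenbeck model $f(x, t) = -x$ with $e = Q = 1$, an ensemble of such paths relaxes to the stationary density with variance $1/2$, which provides a quick numerical check.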

Due to the random noise, each system traces out a different time-dependent vector $x(t)$. Although only one realization of the stochastic process is ever observed, it is meaningful to speak of an ensemble average. For fixed times $t = t_i$, $i = 1, 2, \ldots, r$, the probability density of finding the random vector $x(t)$ in the ($n$-dimensional) interval $x_i \le x(t_i) \le x_i + dx_i$ $(1 \le i \le r)$ is given by

$W_r(t_r, x_r; \cdots; t_1, x_1) = \left\langle \prod_{i=1}^{r} \delta^n\!\left(x(t_i) - x_i\right) \right\rangle,$
(2)

where $x_i$ is an $n$-dimensional column vector and $\langle \cdot \rangle$ denotes averaging with respect to the signal model noise $\nu(t)$. The complete information on the random vector $x(t)$ is contained in the infinite hierarchy of such probability densities. The quantity of interest here is the conditional probability density

$p(t_r, x_r | t_{r-1}, x_{r-1}; \ldots; t_1, x_1) = \left.\left\langle \delta^n\!\left(x(t_r) - x_r\right) \right\rangle\right|_{x(t_{r-1}) = x_{r-1}, \ldots, x(t_1) = x_1} = \frac{W_r(t_r, x_r; \ldots; t_1, x_1)}{\int W_r(t_r, x_r; \ldots; t_1, x_1)\,\{d^n x_r\}}.$
(3)

Now the process described by the Langevin equation with a δ-correlated Langevin force is a Markov process, i.e., the conditional probability density depends only on the value at the immediately preceding time:

$p(t_r, x_r | t_{r-1}, x_{r-1}; \ldots; t_1, x_1) = p(t_r, x_r | t_{r-1}, x_{r-1}).$

(4)

It can be shown that the transition probability density satisfies the Fokker-Planck-Kolmogorov forward equation (FPKfe) (see, for instance, )

$\frac{\partial p}{\partial t}(t, x) = -\sum_{i=1}^{n} \frac{\partial}{\partial x_i}\left[f_i(x, t)\,p(t, x)\right] + \frac{1}{2}\sum_{i,j=1}^{n}\sum_{a=1}^{p} \frac{\partial^2}{\partial x_i \partial x_j}\left[\left(e(x, t)\,Q(t)\right)_{ia}\, e^{T}_{aj}(x, t)\,p(t, x)\right] \equiv \mathcal{L}\,p(t, x).$
(5)

Finally, the Gaussian noise process ν(t) can be represented by the following path integral measure

$[d\rho(\nu(t))] = [\mathcal{D}\nu(t)]\,\exp\left[-\frac{1}{2}\sum_{a,b=1}^{p}\int \nu_a(t)\left[Q^{-1}(t)\right]_{ab}\nu_b(t)\,dt\right], \qquad \nu(t) \in \mathbb{R}^{p},$
(6)

where $\nu(t)$ is a real $p$-dimensional vector for each $t$. This leads to a configuration space FPI formula for the fundamental solution of the FPKfe (see, for instance, ). The path integral formula for the fundamental solution of the FPKfe is applied to the continuous-discrete filtering problem with additive (state model) noise in [20, 21].
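As a concrete (hypothetical) illustration of the FPKfe, the 1-D equation $p_t = -(f p)_x + D\,p_{xx}$ with constant $D = \frac{1}{2} e Q e^{T}$ can be stepped explicitly on a grid; for the Ornstein-Uhlenbeck drift $f(x) = -x$ the variance obeys $\mathrm{var}(t) = \frac{1}{2} + \left(\mathrm{var}(0) - \frac{1}{2}\right) e^{-2t}$, which the sketch below reproduces. All names and grid parameters here are illustrative choices, not from the paper:

```python
import numpy as np

def fpkfe_step(p, x, f_vals, D, dt):
    """One explicit step of the 1-D FPKfe  p_t = -(f p)_x + D p_xx,
    with constant D = (1/2) e Q e^T.  A sketch only: a production solver
    would use a provably stable, positivity-preserving scheme."""
    dx = x[1] - x[0]
    dpdt = -np.gradient(f_vals * p, dx) + D * np.gradient(np.gradient(p, dx), dx)
    p = np.clip(p + dt * dpdt, 0.0, None)   # keep the density non-negative
    return p / (p.sum() * dx)               # renormalize
```

The time step must respect the diffusive stability bound discussed in the introduction; the check below evolves a Gaussian of variance 0.2 to t = 1 and compares against the analytic Ornstein-Uhlenbeck variance.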

### 2.2 The Continuous-Continuous Filtering Problem

Similarly, the continuous-continuous model can be written as follows:

$\left\{\begin{array}{llll} \dot{x}(t) &= f(x(t), t) + e(x(t), t)\,\nu(t), & x(0) = x_0, & Q(t) \in \mathbb{R}^{p \times p}, \\ \dot{y}(t) &= h(x(t), t) + n(x(t), t)\,\mu(t), & y(0) = y_0, & W(t) \in \mathbb{R}^{q \times q}. \end{array}\right.$
(7)

Here, $y(t) \in \mathbb{R}^m$, the measurement model drift $h(x(t), t) \in \mathbb{R}^m$, the diffusion vielbein $n(x(t), t) \in \mathbb{R}^{m \times q}$, and $\mu(t)$ is $\delta$-correlated noise with covariance matrix $W(t) \in \mathbb{R}^{q \times q}$.

In continuous-continuous filtering, the continuous-time measurement stochastic process must be incorporated as well. Consider another ensemble of systems with state variables whose time evolution is governed by the measurement process. Because of the measurement noise, each system in the ensemble leads to a different time-dependent vector y(t). Thus, even though only one realization of the measurement stochastic process is observed, it is still meaningful to speak of an ensemble average over the measurement process (in addition to one over the state process). The quantity of interest in continuous-continuous filtering is therefore the conditional probability density

$P(t_2, x_2; y_2 | t_1, x_1; y_1) = \left.\left\langle\left\langle \delta^n\!\left(x(t_2) - x_2\right)\,\delta^m\!\left(y(t_2) - y_2\right) \right\rangle\right\rangle_{\mu}\right|_{x(t_1) = x_1,\; y(t_1) = y_1},$
(8)

where $\langle\langle \cdot \rangle\rangle_{\mu}$ denotes averaging with respect to the measurement noise $\mu(t)$. A crucial difference between the state and measurement stochastic processes is that, unlike the state, the measurement samples are known.

Note that the conditional transition probability density is the complete solution to the continuous-continuous filtering problem: if the initial distribution is $u(t_{i-1}, x' | Y_{i-1})$, where $Y_{i-1}$ denotes the set of all measurements prior to $t_{i-1}$, then the evolved conditional probability distribution is

$u(t, x | Y_i) = \int P(t, x; y_i | t_{i-1}, x'; y_{i-1})\, u(t_{i-1}, x' | Y_{i-1})\, \{d^n x'\}.$

(9)
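On a grid, this update is a matrix-vector product. A minimal 1-D sketch follows (the exact Ornstein-Uhlenbeck transition density is used purely as a hypothetical stand-in for $P$, ignoring the measurement dependence for the moment; all names are illustrative):

```python
import numpy as np

def predict(u_prev, x, dt):
    """Grid version of  u(x) = ∫ P(x | x') u_prev(x') dx'
    for the hypothetical model dx/dt = -x + nu(t), <nu nu> = delta(t - t'):
    P is then Gaussian with mean x' e^{-dt} and variance (1 - e^{-2 dt})/2."""
    dx = x[1] - x[0]
    m = x * np.exp(-dt)                     # kernel mean, one per source point x'
    s2 = 0.5 * (1.0 - np.exp(-2.0 * dt))    # kernel variance
    P = np.exp(-(x[:, None] - m[None, :]) ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    u = P @ u_prev * dx
    return u / (u.sum() * dx)
```

For this linear model the predicted moments are known in closed form (mean scaled by $e^{-dt}$, variance relaxing toward $1/2$), which makes the grid computation easy to validate.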

In the following sections, the path integral formulas for $P(t_2, x_2; y_2 | t_1, x_1; y_1)$ are derived. In Section 4 it is shown that this leads to the Yau algorithm, and that the YYe plays the same role here that the FPKfe does in continuous-discrete filtering.

## 3 Path integral formula for the conditional transition probability density

The path integral formula for the conditional transition probability density shall now be derived using functional methods. Note that implicit in the use of these formal functional methods is the use of the Feynman convention, or symmetric discretization, for the drift.

As noted in Section 2, the transition probability density is computed by averaging over the signal and measurement ensembles, i.e.,

$\begin{array}{lll} P(t, x; y | t_0, x_0; y_0) &=& \displaystyle\int [d\rho(\nu(t, t_0))]\,[d\rho(\mu(t, t_0))] \\ && \times\,[\mathcal{D}\dot{x}(t)]\, J\, \delta^n\!\left[\dot{x}(t) - f(x(t), t) - e(t)\nu(t)\right] \\ && \times\,[\mathcal{D}\dot{y}(t)]\, J_y\, \delta^m\!\left[\dot{y}(t) - h(x(t), t) - n(t)\mu(t)\right] \\ && \times\,\delta^n\!\left[x(t) - x\right]\big|_{x(t_0) = x_0}\; \delta^m\!\left[y(t) - y\right]\big|_{y(t_0) = y_0}. \end{array}$
(10)

From the assumptions of the signal and measurement noise processes, it is evident that

$[d\rho(\nu(t, t_0))] = [\mathcal{D}\nu(t, t_0)]\exp\left(-\frac{1}{2}\sum_{a,b=1}^{p}\int_{t_0}^{t}\nu_a(t)\left[Q^{-1}(t)\right]_{ab}\nu_b(t)\,dt\right),$
(11)
$[d\rho(\mu(t, t_0))] = [\mathcal{D}\mu(t, t_0)]\exp\left(-\frac{1}{2}\sum_{c,d=1}^{q}\int_{t_0}^{t}\mu_c(t)\left[W^{-1}(t)\right]_{cd}\mu_d(t)\,dt\right).$
(12)

The Jacobian J follows from the functional derivative of the Langevin equation:

$\begin{array}{lll} \displaystyle\frac{\delta}{\delta x_j(t')}\left[\dot{x}_i(t) - f_i(x(t), t) - \sum_{a=1}^{p} e_{ia}(t)\nu_a(t)\right] &=& \displaystyle\left[\delta_{ij}\frac{d}{dt} - \frac{\partial f_i}{\partial x_j}(x(t), t)\right]\delta(t - t') \\ &=& \displaystyle -\frac{d}{dt'}\left[\delta_{ij}\delta(t - t') - \theta(t - t')\frac{\partial f_i}{\partial x_j}(x(t), t)\right]. \end{array}$
(13)

Hence,

$J = \det\left(-\frac{d}{dt'}\right)\det\left(\delta_{ij}\delta(t - t') - \theta(t - t')\frac{\partial f_i}{\partial x_j}(x(t), t)\right) = \mathcal{N}\det\left(\delta_{ij}\delta(t - t') - \theta(t - t')\frac{\partial f_i}{\partial x_j}(x(t), t)\right),$
(14)

where $\mathcal{N}$ is an irrelevant constant. Using $\ln\det = \operatorname{Tr}\ln$ and expanding the logarithm, only the first-order term survives (products of step functions with opposite arguments vanish), and with the symmetric-discretization value $\theta(0) = \frac{1}{2}$ one obtains

$\ln J = \ln\det\left[\delta_{ij}\delta(t - t') - \theta(t - t')\frac{\partial f_i}{\partial x_j}(x(t), t)\right] = -\frac{1}{2}\sum_{i=1}^{n}\int \frac{\partial f_i}{\partial x_i}(x(t), t)\,dt.$
(15)

The Jacobian $J_y$ is trivial (as the measurement model drift is independent of $y$) and can be absorbed into the measure.

It is noteworthy that $J$ is not trivial. In quantum field theory, nontrivial Jacobians usually signal an anomaly, as in the case of chiral anomalies in gauge theories. However, there is no reason for an anomaly here; after all, this is not even a quantum field theoretical system. The puzzle is resolved by noting that path integral anomalies in quantum field theory arise from the "multiplicative" part of a change of variables (i.e., $\psi(x) \to U(x)\psi(x)$). In contrast, the nontrivial Jacobian here arises from the additive term; the multiplicative term does not contribute to the Jacobian, in accordance with expectations.

Thus, so far,

$\begin{array}{lll} P(t, x; y | t_0, x_0; y_0) &=& \displaystyle\int_{y(t_0) = y_0}^{y(t) = y}\int_{x(t_0) = x_0}^{x(t) = x}[\mathcal{D}x(t)][\mathcal{D}y(t)]\,[d\rho(\nu(t, t_0))]\,[d\rho(\mu(t, t_0))] \\ && \times\,\delta^n\!\left(\dot{x}_i(t) - f_i(x(t), t) - \sum_{a=1}^{p} e_{ia}(t)\nu_a(t)\right)\exp\left(-\frac{1}{2}\sum_{i=1}^{n}\int_{t_0}^{t}\frac{\partial f_i}{\partial x_i}(x(t), t)\,dt\right) \\ && \times\,\delta^m\!\left(\dot{y}_k(t) - h_k(x(t), t) - \sum_{c=1}^{q} n_{kc}(t)\mu_c(t)\right). \end{array}$
(16)

Using the Fourier integral representation of the delta function,

$\begin{array}{lll} P(t, x; y | t_0, x_0; y_0) &=& \displaystyle\int_{y(t_0) = y_0}^{y(t) = y}\int_{x(t_0) = x_0}^{x(t) = x}[\mathcal{D}x(t)][\mathcal{D}y(t)][\mathcal{D}\nu(t, t_0)][\mathcal{D}\mu(t, t_0)] \\ && \times\,[\mathcal{D}\lambda(t, t_0)]\exp\left(i\sum_{i=1}^{n}\int_{t_0}^{t}\lambda_i(t)\left[\dot{x}_i(t) - f_i(x(t), t) - \sum_{a=1}^{p} e_{ia}(t)\nu_a(t)\right]dt\right) \\ && \times\,[\mathcal{D}\kappa(t, t_0)]\exp\left(i\sum_{k=1}^{m}\int_{t_0}^{t}\kappa_k(t)\left[\dot{y}_k(t) - h_k(x(t), t) - \sum_{c=1}^{q} n_{kc}(t)\mu_c(t)\right]dt\right) \\ && \times\,\exp\left(-\frac{1}{2}\sum_{a,b=1}^{p}\int_{t_0}^{t}\nu_a(t)\left(Q^{-1}(t)\right)_{ab}\nu_b(t)\,dt\right)\exp\left(-\frac{1}{2}\sum_{i=1}^{n}\int_{t_0}^{t}\frac{\partial f_i}{\partial x_i}(x(t), t)\,dt\right) \\ && \times\,\exp\left(-\frac{1}{2}\sum_{c,d=1}^{q}\int_{t_0}^{t}\mu_c(t)\left(W^{-1}(t)\right)_{cd}\mu_d(t)\,dt\right). \end{array}$
(17)

Integrating over $\nu(t, t_0)$ and $\mu(t, t_0)$ leads to

$\begin{array}{l} P(t, x; y | t_0, x_0; y_0) = \displaystyle\int_{y(t_0) = y_0}^{y(t) = y}\int_{x(t_0) = x_0}^{x(t) = x}[\mathcal{D}x(t)][\mathcal{D}y(t)][\mathcal{D}\lambda(t, t_0)][\mathcal{D}\kappa(t, t_0)]\exp\left(-\frac{1}{2}\sum_{i=1}^{n}\int_{t_0}^{t}\frac{\partial f_i}{\partial x_i}(x(t), t)\,dt\right) \\ \quad\times\,\exp\left(-\displaystyle\int_{t_0}^{t}\left[\frac{1}{2}\sum_{i,j=1}^{n}\sum_{a,b=1}^{p}\lambda_i(t)\,e_{ia}(t)Q_{ab}(t)e^{T}_{bj}(t)\,\lambda_j(t) - i\sum_{i=1}^{n}\lambda_i(t)\left(\dot{x}_i(t) - f_i(x(t), t)\right)\right]dt\right) \\ \quad\times\,\exp\left(-\displaystyle\int_{t_0}^{t}\left[\frac{1}{2}\sum_{k,l=1}^{m}\sum_{c,d=1}^{q}\kappa_k(t)\,n_{kc}(t)W_{cd}(t)n^{T}_{dl}(t)\,\kappa_l(t) - i\sum_{k=1}^{m}\kappa_k(t)\left(\dot{y}_k(t) - h_k(x(t), t)\right)\right]dt\right). \end{array}$
(18)

Integrating over $\lambda(t, t_0)$ and $\kappa(t, t_0)$, it is clear that

$P(t, x; y | t_0, x_0; y_0) = \int_{y(t_0) = y_0}^{y(t) = y}\int_{x(t_0) = x_0}^{x(t) = x}[\mathcal{D}x(t)][\mathcal{D}y(t)]\,\exp\left[-S(t_0, t)\right],$
(19)

where the action is given by

$\begin{array}{l} S(t_0, t) = \displaystyle\frac{1}{2}\sum_{i,j=1}^{n}\sum_{a,b=1}^{p}\int_{t_0}^{t}\left[\dot{x}_i(t) - f_i(x(t), t)\right]\left(e_{ia}(t)Q_{ab}(t)e^{T}_{bj}(t)\right)^{-1}\left[\dot{x}_j(t) - f_j(x(t), t)\right]dt \\ \quad + \displaystyle\frac{1}{2}\int_{t_0}^{t}\left[\sum_{k,l=1}^{m}\sum_{c,d=1}^{q}\left[\dot{y}_k(t) - h_k(x(t), t)\right]\left(n_{kc}(t)W_{cd}(t)n^{T}_{dl}(t)\right)^{-1}\left[\dot{y}_l(t) - h_l(x(t), t)\right] + \sum_{i=1}^{n}\frac{\partial f_i}{\partial x_i}(x(t), t)\right]dt. \end{array}$
(20)
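For numerical evaluation, the action above is discretized in time with the symmetric (midpoint) prescription required by the Feynman convention. A small sketch follows (all names are hypothetical; the effective covariances are taken constant, and their inverses are passed in directly):

```python
import numpy as np

def discretized_action(xs, ys, f, h, div_f, Q_inv, W_inv, dt):
    """Midpoint discretization of the action S(t0, t):
    S = (1/2) sum_k dt [ (dx/dt - f(xm))^T Q_inv (dx/dt - f(xm))
                       + (dy/dt - h(xm))^T W_inv (dy/dt - h(xm)) + div f(xm) ],
    where xm is the midpoint of each step, Q_inv stands for (e Q e^T)^{-1}
    and W_inv for (n W n^T)^{-1} (constant matrices in this sketch)."""
    S = 0.0
    for k in range(len(xs) - 1):
        xm = 0.5 * (xs[k + 1] + xs[k])
        rx = (xs[k + 1] - xs[k]) / dt - f(xm)
        ry = (ys[k + 1] - ys[k]) / dt - h(xm)
        S += 0.5 * dt * (rx @ Q_inv @ rx + ry @ W_inv @ ry + div_f(xm))
    return S
```

As a sanity check, a noiseless path satisfying $\dot{x} = f$ and $\dot{y} = h$ exactly contributes only the divergence term, so for $f(x) = -x$ over a time interval $T$ the action reduces to $-T/2$.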

## 4 Derivation of the Yau algorithm

Observe that the path integral formula derived in the previous section is over both the state and measurement variables. In this section, we shall show that in some cases it is possible to reduce the result to a path integral over the state variables only. This has the advantage of being pre-computable since it is independent of measurements. It shall be shown to lead to the Yau algorithm.

### 4.1 Sampled Continuous Measurements

Suppose measurements are available at times $t_{i-1}$ and $t_i$, and that the conditional transition probability density at time $t_i$ is desired. Further, assume that no measurements are available between $t_{i-1}$ and $t_i$. The general path integral formula (Equation 19) cannot be simplified unless additional assumptions are made.

First, consider the case where $e(t)Q(t)e^{T}(t) = \hbar_\nu I_{n \times n}$, $n(t)W(t)n^{T}(t) = \hbar_\mu I_{m \times m}$, and $h(x(t))$ is not explicitly time dependent. Then, the contribution of the measurement process to the exponent of the path integral is

$-\frac{1}{2\hbar_\mu}\sum_{k=1}^{m}\int_{t_{i-1}}^{t_i} dt\left[\dot{y}_k(t) - h_k(x(t))\right]^2 = -\frac{1}{2\hbar_\mu}\sum_{k=1}^{m}\int_{t_{i-1}}^{t_i} dt\left[\dot{y}_k^2(t) + h_k^2(x(t)) - 2\,h_k(x(t))\,\dot{y}_k(t)\right].$
(21)

The quantity of interest is the state, and we seek to integrate over the measurement variables.

Now the first term is independent of the state variables. The second term can be added to the action term that is independent of y(t). It remains to investigate the contribution of the third term:

$\frac{1}{\hbar_\mu}\sum_{k=1}^{m}\int_{t_{i-1}}^{t_i} h_k(x(t))\,\dot{y}_k(t)\,dt.$
(22)

There are two issues in this evaluation. First, the integral can be evaluated via the usual integration by parts, but this is valid only for the symmetric discretization. Second, since the measurements are sampled, there are two possible interpretations when $t_i - t_{i-1} = \epsilon$, where $\epsilon$ is infinitesimal:

$\frac{1}{\hbar_\mu}\sum_{k=1}^{m}\int_{t_{i-1}}^{t_i} h_k(x(t))\,\dot{y}_k(t)\,dt = \begin{cases} \dfrac{1}{\hbar_\mu}\displaystyle\sum_{k=1}^{m}\int_{t_{i-1}}^{t_i} h_k(x(t))\,\dot{y}_k(t)\,dt, \\[1ex] \dfrac{1}{\hbar_\mu}\displaystyle\sum_{k=1}^{m}\int_{t_{i-1}-\epsilon}^{t_i-\epsilon} h_k(x(t))\,\dot{y}_k(t)\,dt. \end{cases}$
(23)

$\frac{1}{\hbar_\mu}\sum_{k=1}^{m}\int_{t_{i-1}}^{t_i} h_k(x(t))\,\dot{y}_k(t)\,dt = \begin{cases} \dfrac{1}{\hbar_\mu}\displaystyle\sum_{k=1}^{m} h_k\!\left(\tfrac{1}{2}\left[x(t_i) + x(t_{i-1})\right]\right)\left[y_k(t_i) - y_k(t_{i-1})\right], \\[1ex] \dfrac{1}{\hbar_\mu}\displaystyle\sum_{k=1}^{m} h_k\!\left(\tfrac{1}{2}\left[x(t_{i-1}) + x(t_{i-2})\right]\right)\left[y_k(t_{i-1}) - y_k(t_{i-2})\right]. \end{cases}$
(24)

Finally, the residual Gaussian path integral over y(t) can be ignored as it is independent of the state. Therefore, the path integral formula simplifies to

$P(t_i, x_i; y_i | t_{i-1}, x_{i-1}; y_{i-1}) = \tilde{P}(t_i, x_i | t_{i-1}, x_{i-1}) \times \begin{cases} \exp\left(\dfrac{1}{\hbar_\mu}\displaystyle\sum_{k=1}^{m} h_k\!\left(\tfrac{1}{2}\left[x(t_i) + x(t_{i-1})\right]\right)\left[y_k(t_i) - y_k(t_{i-1})\right]\right), \\[1ex] \exp\left(\dfrac{1}{\hbar_\mu}\displaystyle\sum_{k=1}^{m} h_k\!\left(\tfrac{1}{2}\left[x(t_{i-1}) + x(t_{i-2})\right]\right)\left[y_k(t_{i-1}) - y_k(t_{i-2})\right]\right), \end{cases}$
(25)

where

$\tilde{P}(t_i, x_i | t_{i-1}, x_{i-1}) = \int_{x(t_{i-1}) = x_{i-1}}^{x(t_i) = x_i} [\mathcal{D}x(t)]\,\exp\left(-S(t_{i-1}, t_i)\right),$
(26)

and

$S(t_{i-1}, t_i) = \frac{1}{2}\int_{t_{i-1}}^{t_i} dt\left[\frac{1}{\hbar_\nu}\sum_{i=1}^{n}\left[\dot{x}_i(t) - f_i(x(t))\right]^2 + \sum_{i=1}^{n}\frac{\partial f_i}{\partial x_i}(x(t)) + \frac{1}{\hbar_\mu}\sum_{k=1}^{m} h_k^2(x(t))\right].$
(27)
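In either discretization, on a state grid the exponential factor simply multiplies the predicted density pointwise. A minimal sketch of this measurement update (1-D; the names `h`, `dy`, and `hbar_mu` are placeholders, not from the paper):

```python
import numpy as np

def measurement_update(u, x, h, dy, hbar_mu):
    """Multiply the density by the measurement factor exp((1/hbar_mu) h(x) dy)
    pointwise on the grid and renormalize (sketch of the scalar-measurement case)."""
    dx = x[1] - x[0]
    u = u * np.exp(h(x) * dy / hbar_mu)
    return u / (u.sum() * dx)
```

For a Gaussian prior $N(0, 1)$ with $h(x) = x$, $\hbar_\mu = 1$, and increment $\Delta y = 1/2$, completing the square shows the updated density is exactly $N(1/2, 1)$, which the check below verifies.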

Secondly, if the conditions above are relaxed to allow explicit time dependence of the drift term in the state model, i.e., $f(x(t), t)$, and a general constant measurement noise matrix $C = n(t)W(t)n^{T}(t)$, then the path integral formula becomes

$P(t_i, x_i; y_i | t_{i-1}, x_{i-1}; y_{i-1}) = \tilde{P}(t_i, x_i | t_{i-1}, x_{i-1}) \times \begin{cases} \exp\left(\displaystyle\sum_{k,l=1}^{m} h_k\!\left(\tfrac{1}{2}\left[x(t_i) + x(t_{i-1})\right]\right)\left(C^{-1}\right)_{kl}\left[y_l(t_i) - y_l(t_{i-1})\right]\right), \\[1ex] \exp\left(\displaystyle\sum_{k,l=1}^{m} h_k\!\left(\tfrac{1}{2}\left[x(t_{i-1}) + x(t_{i-2})\right]\right)\left(C^{-1}\right)_{kl}\left[y_l(t_{i-1}) - y_l(t_{i-2})\right]\right), \end{cases}$
(28)

where

$\tilde{P}(t_i, x_i | t_{i-1}, x_{i-1}) = \int_{x(t_{i-1}) = x_{i-1}}^{x(t_i) = x_i} [\mathcal{D}x(t)]\,\exp\left(-S(t_{i-1}, t_i)\right),$
(29)

and the action is given by

$S(t_{i-1}, t_i) = \frac{1}{2}\int_{t_{i-1}}^{t_i} dt\left[\frac{1}{\hbar_\nu}\sum_{i=1}^{n}\left[\dot{x}_i(t) - f_i(x(t), t)\right]^2 + \sum_{i=1}^{n}\frac{\partial f_i}{\partial x_i}(x(t), t) + \sum_{k,l=1}^{m} h_k(x(t))\left(C^{-1}\right)_{kl} h_l(x(t))\right].$
(30)

Finally, consider the case when e(t)Q(t)eT(t) is time-independent and invertible, but otherwise arbitrary, and C = n(t)W(t)nT(t). Note that C is a constant, symmetric matrix and is assumed to be invertible. The path integral formula is given by Equations 28 and 29, with the action S(ti-1, t i ) given by

$\begin{array}{c}S\left({t}_{i-1},{t}_{i}\right)=\frac{1}{2}{\int }_{{t}_{i-1}}^{{t}_{i}}dt\sum _{i,j=1}^{n}\sum _{a,b=1}^{p}\left[{\stackrel{˙}{x}}_{i}\left(t\right)-{f}_{i}\left(x\left(t\right),t\right)\right]{\left[{e}_{ia}\left(t\right){Q}_{ab}\left(t\right){e}_{bj}^{T}\left(t\right)\right]}^{-1}\left[{\stackrel{˙}{x}}_{j}\left(t\right)-{f}_{j}\left(x\left(t\right),t\right)\right]\\ +\frac{1}{2}{\int }_{{t}_{i-1}}^{{t}_{i}}dt\left[\sum _{k,l=1}^{m}{h}_{k}\left(x\left(t\right)\right){\left({C}^{-1}\right)}_{kl}{h}_{l}\left(x\left(t\right)\right)+\sum _{i=1}^{n}\frac{\partial {f}_{i}}{\partial {x}_{i}}\left(x\left(t\right),t\right)\right].\end{array}$
(31)

### 4.2 The Yau Algorithm

In Section 2 it was noted that if vi-1 (ti-1, xi-1) is the conditional probability density at time ti-1, then the conditional probability density at time t i is given by

${v}_{i}\left({t}_{i},{x}_{i}\right)=\int P\left({t}_{i},{x}_{i};{y}_{i}|{t}_{i-1},{x}_{i-1};{y}_{i-1}\right){v}_{i-1}\left({t}_{i-1},{x}_{i-1}\right)\left\{{d}^{n}{x}_{i-1}\right\}.$
(32)

For simplicity, let us first consider the case where e(t)Q(t)eT(t) = ħ ν In×n, n(t)W(t)nT(t) = ħ μ Im×m, and h(x(t)) is not explicitly time dependent. Then

$\begin{array}{l}{v}_{i}\left({t}_{i},{x}_{i}\right)=\\ \left\{\begin{array}{l}\int \stackrel{˜}{P}\left({t}_{i},{x}_{i}|{t}_{i-1},{x}_{i-1}\right)\mathrm{exp}\left(\frac{1}{{\hslash }_{\mu }}{\sum }_{k=1}^{m}{h}_{k}\left(\frac{1}{2}\left[x\left({t}_{i}\right)+x\left({t}_{i-1}\right)\right]\right)\left[{y}_{k}\left({t}_{i}\right)-{y}_{k}\left({t}_{i-1}\right)\right]\right){v}_{i-1}\left({t}_{i-1},{x}_{i-1}\right)\left\{{d}^{n}{x}_{i-1}\right\},\hfill \\ \int \stackrel{˜}{P}\left({t}_{i},{x}_{i}|{t}_{i-1},{x}_{i-1}\right)\mathrm{exp}\left(\frac{1}{{\hslash }_{\mu }}{\sum }_{k=1}^{m}{h}_{k}\left(\frac{1}{2}\left[x\left({t}_{i-1}\right)+x\left({t}_{i-2}\right)\right]\right)\left[{y}_{k}\left({t}_{i-1}\right)-{y}_{k}\left({t}_{i-2}\right)\right]\right){v}_{i-1}\left({t}_{i-1},{x}_{i-1}\right)\left\{{d}^{n}{x}_{i-1}\right\}.\hfill \end{array}\end{array}$
(33)

When ϵ = t i - ti-1 is small

${h}_{k}\left(\frac{1}{2}\left[x\left({t}_{i}\right)+x\left({t}_{i-1}\right)\right]\right)\approx {h}_{k}\left(x\left({t}_{i-1}\right)\right)+O\left(\sqrt{ϵ}\right),$
(34)

and Equation 33 becomes

${v}_{i}\left({t}_{i},{x}_{i}\right)=\left\{\begin{array}{l}\int \stackrel{˜}{P}\left({t}_{i},{x}_{i}|{t}_{i-1},{x}_{i-1}\right)\mathrm{exp}\left(\frac{1}{{\hslash }_{\mu }}{\sum }_{k=1}^{m}{h}_{k}\left(x\left({t}_{i-1}\right)\right)\left[{y}_{k}\left({t}_{i}\right)-{y}_{k}\left({t}_{i-1}\right)\right]\right){v}_{i-1}\left({t}_{i-1},{x}_{i-1}\right)\left\{{d}^{n}{x}_{i-1}\right\},\hfill \\ \int \stackrel{˜}{P}\left({t}_{i},{x}_{i}|{t}_{i-1},{x}_{i-1}\right)\mathrm{exp}\left(\frac{1}{{\hslash }_{\mu }}{\sum }_{k=1}^{m}{h}_{k}\left(x\left({t}_{i-1}\right)\right)\left[{y}_{k}\left({t}_{i-1}\right)-{y}_{k}\left({t}_{i-2}\right)\right]\right){v}_{i-1}\left({t}_{i-1},{x}_{i-1}\right)\left\{{d}^{n}{x}_{i-1}\right\}.\hfill \end{array}$
(35)

Following the method originally used by Feynman, the partial differential equation satisfied by $\stackrel{˜}{P}$ (t, x|t0, x0) may be derived (see, for instance, ). In particular, $\stackrel{˜}{P}$ (t i , x i |ti-1, xi-1) is the fundamental solution of the Yau Equation (YYe):

$\left\{\begin{array}{l}\frac{\partial \stackrel{\sim }{P}}{\partial t}\left(t,x|{t}_{0},{x}_{0}\right)=\frac{{\hslash }_{\nu }}{2}\sum _{i=1}^{n}\frac{{\partial }^{2}\stackrel{\sim }{P}}{\partial {x}_{i}^{2}}\left(t,x|{t}_{0},{x}_{0}\right)-\sum _{i=1}^{n}\frac{\partial }{\partial {x}_{i}}\left[{f}_{i}\left(x\right)\stackrel{\sim }{P}\left(t,x|{t}_{0},{x}_{0}\right)\right]-\frac{1}{2{\hslash }_{\mu }}\sum _{k=1}^{m}{h}_{k}^{2}\left(x\right)\stackrel{\sim }{P}\left(t,x|{t}_{0},{x}_{0}\right),\hfill \\ \stackrel{\sim }{P}\left({t}_{0},x|{t}_{0},{x}_{0}\right)={\delta }^{n}\left(x-{x}_{0}\right).\hfill \end{array}$
(36)

This implies that (setting ħ ν = ħ μ = 1) v i (t i , x) is the solution at t i of

$\left\{\begin{array}{l}\frac{\partial {v}_{i}}{\partial t}\left(t,x\right)=\frac{1}{2}\sum _{l=1}^{n}\frac{{\partial }^{2}{v}_{i}}{\partial {x}_{l}^{2}}\left(t,x\right)-\sum _{l=1}^{n}{f}_{l}\left(x\right)\frac{\partial {v}_{i}}{\partial {x}_{l}}\left(t,x\right)-\left(\sum _{l=1}^{n}\frac{\partial {f}_{l}}{\partial {x}_{l}}\left(x\right)+\frac{1}{2}\sum _{l=1}^{m}{h}_{l}^{2}\left(x\right)\right){v}_{i}\left(t,x\right),\hfill \\ {v}_{i}\left({t}_{i-1},x\right)={v}_{i-1}\left({t}_{i-1},x\right)\left\{\begin{array}{l}\mathrm{exp}\left({\sum }_{j=1}^{m}{h}_{j}\left(x\right)\left[{y}_{j}\left({t}_{i}\right)-{y}_{j}\left({t}_{i-1}\right)\right]\right),\hfill \\ \mathrm{exp}\left({\sum }_{j=1}^{m}{h}_{j}\left(x\right)\left[{y}_{j}\left({t}_{i-1}\right)-{y}_{j}\left({t}_{i-2}\right)\right]\right).\hfill \end{array}\hfill \end{array}$
(37)

This is precisely the Yau algorithm.
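As a concrete illustration, one prediction-correction cycle of this algorithm can be sketched on a one-dimensional grid with explicit finite differences for the PDE of Equation 37. The drift f, measurement function h, measurement increment dy, and grid below are illustrative assumptions rather than the paper's examples, and units with ħ ν = ħ μ = 1 are used:

```python
import numpy as np

def yau_step(v, x, f, h, dy, dt, n_sub=50):
    """One prediction-correction cycle of the Yau algorithm (Eq. 37)."""
    dx = x[1] - x[0]
    # Correction: multiply by exp(h(x) * [y(t_i) - y(t_{i-1})])
    v = v * np.exp(h(x) * dy)
    # Prediction: explicit Euler for v_t = v_xx/2 - f v_x - (f' + h^2/2) v
    fx = f(x)
    fp = np.gradient(fx, dx)
    tau = dt / n_sub
    for _ in range(n_sub):
        v_xx = (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx**2
        v_x = (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * dx)
        v = v + tau * (0.5 * v_xx - fx * v_x - (fp + 0.5 * h(x)**2) * v)
        v[0] = v[-1] = 0.0   # crude absorbing boundary on the finite grid
    return v

x = np.linspace(-3.0, 3.0, 301)
v0 = np.exp(-x**2)           # assumed initial unnormalized density
v1 = yau_step(v0, x, f=lambda u: -u, h=lambda u: u, dy=0.05, dt=0.01)
```

The sub-stepping keeps the explicit scheme stable (τ/Δx² ≤ ½); any stable PDE solver could replace it without changing the structure of the algorithm.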

Likewise, it is straightforward to see that the Yau algorithm extends to the general case studied in Section 4.1 as follows: v i (t i , x i ) is the solution at t i of the PDE

$\left\{\begin{array}{c}\frac{\partial {v}_{i}}{\partial t}\left(t,x\right)=\frac{1}{2}\sum _{l,j=1}^{n}\sum _{a,b=1}^{p}\left({e}_{la}\left(t\right){Q}_{ab}\left(t\right){e}_{bj}^{T}\left(t\right)\right)\frac{{\partial }^{2}{v}_{i}}{\partial {x}_{l}\partial {x}_{j}}\left(t,x\right)\\ -\sum _{l=1}^{n}\frac{\partial }{\partial {x}_{l}}\left[{f}_{l}\left(x,t\right){v}_{i}\left(t,x\right)\right]-\frac{1}{2}\sum _{k,l=1}^{m}{h}_{k}\left(x\right){\left({C}^{-1}\right)}_{kl}{h}_{l}\left(x\right){v}_{i}\left(t,x\right),\\ {v}_{i}\left({t}_{i-1},x\right)={v}_{i-1}\left({t}_{i-1},x\right)\left\{\begin{array}{l}\mathrm{exp}\left({\sum }_{j,k=1}^{m}{h}_{j}\left(x\right){\left({C}^{-1}\right)}_{jk}\left[{y}_{k}\left({t}_{i}\right)-{y}_{k}\left({t}_{i-1}\right)\right]\right),\hfill \\ \mathrm{exp}\left({\sum }_{j,k=1}^{m}{h}_{j}\left(x\right){\left({C}^{-1}\right)}_{jk}\left[{y}_{k}\left({t}_{i-1}\right)-{y}_{k}\left({t}_{i-2}\right)\right]\right).\hfill \end{array}\end{array}$
(38)

This is a straightforward generalization of the YYe to the state model with explicit time dependence.

## 5 Some remarks

Following are some remarks on the FPI solution of the filtering problem:

• Note that the FPI formulation has given a complete and self-contained solution of the continuous-continuous filtering problem. For instance, the DMZ equation or its variants were not used as an input. On the contrary, the FPI formula naturally led us to the YYe and the Yau algorithm.

• Note also that the DMZ equation (and variants) cannot be solved reliably in real-time, as the various approximations assume that the drift is bounded and require the solution of a stochastic PDE that depends on the measurements. In contrast, the general FPI formula presented in Section 3 can potentially be used for an efficient and reliable real-time solution. This is because the measurement time interval is usually small, so that the simplest approximation of the path integral (termed the Dirac-Feynman approximation, see Section 6) is adequate. By comparison, in quantum mechanics and quantum field theory one is interested in the large time case.

• Unlike the Yau algorithm, the PI formula is valid even for the general time-dependent case with large measurement time intervals. In other words, one can compute the conditional transition probability density using the conventional methods (see, for instance, ).

• The YYe can be viewed as a local expression of the path integral formula. That is, a path integral is a global object, while the PDE is a local one.

• In this paper, the signal noise and measurement noise are assumed to be additive. This is a stronger condition than the orthogonality of the diffusion vielbein assumed in . The solution for the general case has been presented in .

• It is noted that other algorithms can also be implemented using the FPI formulas with obvious changes. Usually, they require the solution of the FPKfe, which corresponds to the case h(x) = 0. Note that the FPKfe arises naturally in the solution of the continuous-discrete filtering problem . Of course, as noted in Appendix 1, this would be unnecessary since the Yau algorithm has the best numerical properties. What is interesting to note is that the path integral formula naturally leads to the algorithm with the best numerical properties.

• The Yau algorithm also has the form of the "prediction" and "correction" parts, as in continuous-discrete filtering . Specifically, the prediction part is the solution of the YYe, whereas the correction part is the multiplicative factor in the initial condition. However, it is crucial to note that the prediction part contains the measurement model drift. In contrast, the measurement model plays no role in the prediction part in continuous-discrete filtering.

## 6 Examples

### 6.1 The Dirac-Feynman Approximation

From the discussion in the previous sections, it is evident that computing the path integral requires computing the Lagrangian, L, defined by S = ∫ dt L(x, $\stackrel{˙}{x}$, t). The simplest (and crudest) approximation (for the case e(t)Q(t)eT(t) = ħ ν In×n and n(t)W(t)nT(t) = ħ μ Im×m) is the following, which is valid for an infinitesimal time step:

$\begin{array}{l}\stackrel{˜}{P}\left({t}^{″},{x}^{″}|{t}^{\prime },{x}^{\prime }\right)=\hfill \\ \mathrm{exp}\left(-\frac{{t}^{″}-{t}^{\prime }}{2}\left\{\frac{1}{{\hslash }_{\nu }}\sum _{i=1}^{n}{\left[\frac{{{x}^{″}}_{i}-{{x}^{\prime }}_{i}}{{t}^{″}-{t}^{\prime }}-{f}_{i}\left(\frac{{x}^{″}+{x}^{\prime }}{2}\right)\right]}^{2}+\sum _{i=1}^{n}\frac{\partial {f}_{i}}{\partial {x}_{i}}\left(\frac{{x}^{″}+{x}^{\prime }}{2}\right)+\frac{1}{{\hslash }_{\mu }}\sum _{k=1}^{m}{h}_{k}^{2}\left(\frac{{x}^{″}+{x}^{\prime }}{2}\right)\right\}\right).\hfill \end{array}$
(39)

This is the Dirac-Feynman (DF) approximation. The algorithm that follows from applying the DF approximation to the Yau algorithm is the Dirac-Feynman-Yau (DFY) algorithm.
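A minimal sketch of the DF approximation for a scalar state (n = m = 1) is the following transcription of Equation 39, with the identifications ħ ν = σ x ² and ħ μ = σ y ² suggested by the additive-noise action. The grid and the use of the drift and measurement functions of Example 1 (Section 6.2 below) are illustrative choices:

```python
import numpy as np

def df_kernel(x2, x1, dt, f, fprime, h, hbar_nu, hbar_mu):
    """Unnormalized DF transition amplitude P~(t'', x''|t', x'), Eq. 39."""
    xm = 0.5 * (x2 + x1)                      # midpoint prescription
    lagrangian = ((x2 - x1) / dt - f(xm))**2 / hbar_nu \
        + fprime(xm) + h(xm)**2 / hbar_mu
    return np.exp(-0.5 * dt * lagrangian)

# Transition matrix over a grid (rows: x'', columns: x') by broadcasting,
# using the drift and measurement functions of Example 1 below.
x = np.linspace(-1.5, 1.5, 301)
P = df_kernel(x[:, None], x[None, :], dt=0.01,
              f=lambda u: 1.2 * np.cos(3.0 * u),
              fprime=lambda u: -3.6 * np.sin(3.0 * u),
              h=np.arctan, hbar_nu=0.3**2, hbar_mu=0.05**2)
```

Broadcasting `x[:, None]` against `x[None, :]` yields the full transition matrix over the grid in one vectorized call.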

### 6.2 Example 1

As an example, consider the following continuous-continuous filtering model that has been studied in  (and )

$\begin{array}{l}\stackrel{˙}{x}\left(t\right)=a\mathrm{cos}\left(bx\left(t\right)\right)+{\sigma }_{x}\nu \left(t\right),\hfill \\ \stackrel{˙}{y}\left(t\right)={\mathrm{tan}}^{-1}x\left(t\right)+{\sigma }_{y}\mu \left(t\right).\hfill \end{array}$
(40)

The Lagrangian for this model is given by

$\frac{1}{2{\sigma }_{x}^{2}}{\left(\stackrel{˙}{x}\left(t\right)-a\mathrm{cos}\left(bx\left(t\right)\right)\right)}^{2}-\frac{ab}{2}\mathrm{sin}\left(bx\left(t\right)\right)+\frac{1}{2{\sigma }_{y}^{2}}{\left({\mathrm{tan}}^{-1}x\left(t\right)\right)}^{2}.$
(41)

The parameters chosen were as in the reference. Specifically, a = 1.2, b = 3, σ x = 0.3, spatial grid spacing Δx = 0.01 and extent [-1.5, 1.5], temporal grid spacing Δt = 0.01 with 200 time steps.
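A sample of the state and measurement processes can be generated by an Euler-Maruyama discretization of Equation 40 with these parameters; the random seed is an arbitrary choice:

```python
import numpy as np

# Euler-Maruyama simulation of Eq. 40 with the parameters quoted above.
rng = np.random.default_rng(0)                # arbitrary seed
a, b, sigma_x, sigma_y = 1.2, 3.0, 0.3, 0.05
dt, nsteps = 0.01, 200
x = np.zeros(nsteps + 1)                      # state process
y = np.zeros(nsteps + 1)                      # measurement process, y(0) = 0
for k in range(nsteps):
    x[k + 1] = x[k] + a * np.cos(b * x[k]) * dt \
        + sigma_x * np.sqrt(dt) * rng.standard_normal()
    y[k + 1] = y[k] + np.arctan(x[k]) * dt \
        + sigma_y * np.sqrt(dt) * rng.standard_normal()
```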

In the first set, the measurement noise was set as σ y = 0.05. Figure 1 shows a sample of state and measurement processes. The conditional mean, computed using the DFY algorithm, is plotted in Figure 2. Since there was negligible difference in performance between the pre-measurement and post-measurement forms, only the former was employed. Also plotted are 2σ limits. The conditional mean and variance were computed from the computed conditional probability density. The fact that the target was mostly within the 2σ limits of the conditional mean shows that the tracking performance of this algorithm is good.

In the next set, the measurement noise was set as σ y = 0.0125. For this "small noise" case, most of the algorithms studied in  failed. In Figure 3 is plotted a sample of signal and measurement processes. The conditional mean and the 2σ limits computed using the DFY algorithm for this instance is plotted in Figure 4.

It is seen that good tracking performance is maintained for the small noise case even when the crudest path integral approximation is used in the Yau algorithm.

### 6.3 Example 2: Cubic Sensor Problem

The cubic sensor problem is defined by the following signal and measurement model:

$\begin{array}{l}\stackrel{˙}{x}\left(t\right)={\sigma }_{x}\nu \left(t\right),\hfill \\ \stackrel{˙}{y}\left(t\right)={x}^{3}\left(t\right)+{\sigma }_{y}\mu \left(t\right).\hfill \end{array}$
(42)

It is a well-studied nonlinear filtering problem because it is one of the simplest examples of a filtering problem that is not finite dimensional (see, for instance,  and references therein).

For the simulation of the cubic sensor problem, the following model parameters were chosen (as in )

$\begin{array}{c}\Delta t=0.001,\\ \Delta x=0.01,\\ {x}_{0}=N\left(0,0.01\right),\\ {\sigma }_{x}=1,\\ {\sigma }_{y}=1,\\ T=\left[0,20\right].\end{array}$
(43)

The EKF is a sub-optimal filter that approximates the conditional probability density by a Gaussian. For the cubic sensor problem, the EKF is given by

$\begin{array}{c}d\stackrel{^}{x}=3{\stackrel{^}{x}}^{2}P\left(dy-{\stackrel{^}{x}}^{3}dt\right),\\ dP=\left(1-9{\stackrel{^}{x}}^{4}{P}^{2}\right)dt.\end{array}$
(44)
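An Euler discretization of these EKF equations can be sketched as follows; the synthetic measurement stream, generated here for a zero truth state, is an illustrative assumption. Note that with x̂(0) = 0 the gain 3x̂²P vanishes identically, so the estimate never moves, which is a concrete symptom of the EKF failure discussed below:

```python
import numpy as np

def ekf_cubic(dy, dt, xhat0=0.0, p0=0.01):
    """Propagate the EKF mean and covariance of Eq. 44 through dy."""
    xhat, p = xhat0, p0
    means = np.empty(len(dy))
    for k, d in enumerate(dy):
        # Simultaneous update: both right-hand sides use the old values.
        xhat, p = (xhat + 3.0 * xhat**2 * p * (d - xhat**3 * dt),
                   p + (1.0 - 9.0 * xhat**4 * p**2) * dt)
        means[k] = xhat
    return means

# Synthetic increments dy = x^3 dt + dw for a zero truth state (assumption).
rng = np.random.default_rng(1)
dt = 0.001
dy = rng.standard_normal(1000) * np.sqrt(dt)
m = ekf_cubic(dy, dt)                         # stays identically zero
```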

The Yau Lagrangian for the cubic sensor problem is

$L=\frac{1}{2{\sigma }_{x}^{2}}\left({\stackrel{˙}{x}}^{2}\right)+\frac{1}{2{\sigma }_{y}^{2}}{x}^{6}.$
(45)

Applying the DF approximation to the Yau Lagrangian gives

$\stackrel{˜}{P}\left({t}^{″},{x}^{″}|{t}^{\prime },{x}^{\prime }\right)=\mathrm{exp}\left(-\frac{\left({t}^{″}-{t}^{\prime }\right)}{2{\sigma }_{x}^{2}}{\left(\frac{{x}^{″}-{x}^{\prime }}{{t}^{″}-{t}^{\prime }}\right)}^{2}-\left(\frac{{t}^{″}-{t}^{\prime }}{2{\sigma }_{y}^{2}}\right){\left(\frac{{x}^{″}+{x}^{\prime }}{2}\right)}^{6}\right).$
(46)
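On a grid, Equation 46 yields a dense transition matrix. A sketch with σ x = σ y = 1 as in Equation 43 follows; the grid extent is an illustrative choice:

```python
import numpy as np

def df_cubic_matrix(x, dt, sigma_x=1.0, sigma_y=1.0):
    """Eq. 46 evaluated on a grid: rows index x'', columns index x'."""
    x2, x1 = np.meshgrid(x, x, indexing="ij")
    kinetic = ((x2 - x1) / dt)**2 / sigma_x**2
    measurement = ((x2 + x1) / 2.0)**6 / sigma_y**2
    return np.exp(-0.5 * dt * (kinetic + measurement))

x = np.linspace(-2.0, 2.0, 201)               # grid extent is an assumption
P = df_cubic_matrix(x, dt=0.001)
```

A single DFY step in the sense of Equation 35 is then `v = (P @ (np.exp(x**3 * dy) * v)) * (x[1] - x[0])`, followed by renormalization if a normalized density is desired.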

Figure 5 shows the performance of the DFY algorithm. Specifically, the conditional mean and the two standard deviation bounds, both computed from the conditional probability density, are plotted. Observe that the EKF fails completely in this case. As noted in , this is because the EKF considers only the first two moments (which vanish here); it is the fourth central moment that plays a crucial role in this example (for the chosen initial condition). Also, note that the state is within the 2σ region for most of the time. This shows that, unlike the EKF, the path integral filter has a reliable error analysis.

After an initial period, the performance of the path integral approximation is seen to be excellent and comparable to that obtained using PDE methods in . However, the crucial point is that the path integral solution is equally simple for the higher-dimensional case with more complicated models (e.g., colored noise), whereas a PDE solution would be significantly harder, if not impossible, to implement in real-time.

It is remarkable that very good performance is obtained using the crudest approximation. Of course, when the time step is large, it will fail (unlike the path integral formula itself). However, in practice the time steps are often small. Therefore, the DF approximation may be adequate in most cases.

The implementation of this method is trivial. The contrast with other methods, such as those studied in , is striking. For instance, many of those methods require off-line computation of complicated partial differential equations with uncertain numerical properties.

The results obtained in this paper used a single time step. More accuracy can be obtained quite simply using multiple time steps. Also, the computation of the transition probability density can be done off-line, but the on-line computation was not an onerous burden for the examples studied here.

It is important to note also that the transition probability density matrix (or tensor in the general case) is sparse (sparsity determined by ħ μ , ħ ν ). This is of great importance in higher dimensional filtering problems because

• Sparse matrix storage requirements are small,

• The relevant transition probability density matrix elements can be computed based on the conditional density in the previous step, and

• Sparse matrix computations are very fast.
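The banded structure is easy to verify numerically: the DF kernel decays as a Gaussian in (x″ - x′) over one time step, so entries far from the diagonal are negligible. The grid, time step, and drop threshold below are illustrative choices; in practice the thresholded matrix would be stored in a sparse format such as scipy.sparse.csr_matrix:

```python
import numpy as np

# DF kernel for the cubic sensor (Eq. 46): Gaussian decay off the diagonal.
x = np.linspace(-2.0, 2.0, 401)
dt = 0.001
x2, x1 = np.meshgrid(x, x, indexing="ij")
P = np.exp(-0.5 * dt * (((x2 - x1) / dt)**2 + ((x2 + x1) / 2.0)**6))
P[P < 1e-12] = 0.0                     # drop negligible entries
fill = np.count_nonzero(P) / P.size    # fraction of entries actually stored
```

For these choices only a narrow band around the diagonal survives, so the storage and matrix-vector cost grow linearly, not quadratically, with the grid size.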

Note that unlike some other approximation techniques studied in , the conditional probability density is obviously always positive (provided, of course, that the initial distribution is positive).

Finally, a comment on measures of performance. Note that a good tracker is one that furnishes not only a good estimate of the state but also a reliable measure of the quality of that estimate. For the linear, Gaussian case, the conditional probability density is Gaussian, and the Kalman filter estimates of the conditional mean and variance are optimal and provide a complete description. However, for the general nonlinear case, such a concise description is not possible. For instance, the conditional probability density may be highly skewed. It may be multi-modal, in which case the conditional mean is not a very meaningful quantity. A more general measure is to indicate domains of "significant" probability mass; a good filtering solution is then one that guarantees that the state is in the region of significant probability mass with a very high confidence. For the purposes of this paper, the conditional variance was chosen.

## 7 Conclusion

In this paper, the formal path integral solution to the continuous-continuous nonlinear filtering problem has been presented. The solution is universal, i.e., the initial distribution may be arbitrary. Since the path integral measure is manifestly positive, positivity is maintained if the initial distribution is positive.

A path integral formulation has several advantages. It is well known that Feynman path integrals have led to theoretical insights in other areas including quantum mechanics, quantum field theory and even pure mathematics. It is demonstrated in this paper that it is possible to express the fundamental solution of the YYe in terms of Feynman path integrals. Finally, Feynman path integrals are very suitable for numerical implementation. Practical path integral filtering techniques, especially for solving large dimensional problems, will be presented in subsequent papers.

## Appendix 1 continuous-continuous filtering and the Yau equation

In this section, the main results of (continuous-continuous) nonlinear filtering theory are summarized. For the general case (e.g., not the finite-dimensional filter case) and from a practical point of view the most important results are the YYe and the Yau algorithm.

### A.1 The Continuous-Continuous Model

The signal and observation model in continuous-continuous filtering is the following:

$\left\{\begin{array}{ll}dx\left(t\right)=f\left(x\left(t\right),t\right)dt+e\left(x\left(t\right),t\right)dv\left(t\right),\hfill & x\left(0\right)={x}_{0},\hfill \\ dy\left(t\right)=h\left(x\left(t\right),t\right)dt+n\left(t\right)dw\left(t\right),\hfill & y\left(0\right)=0.\hfill \end{array}$
(47)

Here x, v, y, and w are ℝn-, ℝp-, ℝm-, and ℝq-valued stochastic processes, respectively, and e(x(t), t) ∈ ℝn×p and n(t) ∈ ℝm×q. These are defined in the Itô sense. The Brownian processes v and w are assumed to be independent with p × p and q × q covariance matrices Q(t) and W(t), respectively. We denote n(t)W(t)nT(t) by R(t), an m × m matrix. Also, f is referred to as the drift, e as the diffusion vielbein, and eQeT as the diffusion matrix.

In this section, some of the relevant work on continuous-continuous filtering is summarized. Hence, it is assumed that n = p, m = q, f and h are vector-valued C∞ smooth functions, e(x, t) is an orthogonal matrix-valued C∞ smooth function, Q(t) is the n × n identity matrix, and n(t) and R(t) are m × m identity matrices. No explicit time dependence is assumed in the model.

### A.2 The DMZ Stochastic Differential Equation

The unnormalized conditional probability density, σ(t, x), of the state given the observations {Y(s): 0 ≤ s ≤ t} satisfies the DMZ equation:

$\begin{array}{cc}d\sigma \left(t,x\right)={\mathcal{L}}_{0}\sigma \left(t,x\right)dt+\sum _{i=1}^{m}{\mathcal{L}}_{i}\sigma \left(t,x\right)d{y}_{i}\left(t\right),& \sigma \left(0,x\right)={\sigma }_{0}.\end{array}$
(48)

Here

$\begin{array}{c}{\mathcal{L}}_{0}=\frac{1}{2}\sum _{i=1}^{n}\frac{{\partial }^{2}}{\partial {x}_{i}^{2}}-\sum _{i=1}^{n}{f}_{i}\left(x\right)\frac{\partial }{\partial {x}_{i}}-\sum _{i=1}^{n}\frac{\partial {f}_{i}}{\partial {x}_{i}}\left(x\right)-\frac{1}{2}\sum _{i=1}^{m}{h}_{i}^{2}\left(x\right),\\ =-\sum _{i=1}^{n}\frac{\partial }{\partial {x}_{i}}\left({f}_{i}\left(x\right)\cdot \right)+\frac{1}{2}\sum _{i=1}^{n}\frac{{\partial }^{2}}{\partial {x}_{i}^{2}}-\frac{1}{2}\sum _{i=1}^{m}{h}_{i}^{2}\left(x\right),\\ =\frac{1}{2}\left(\sum _{i=1}^{n}{D}_{i}^{2}-\eta \left(x\right)\right),\end{array}$
(49)

where ${\mathcal{L}}_{i}$ is the zero-degree differential operator of multiplication by h i (x), i = 1, ..., m, σ0 is the probability density at the initial time t0, and

$\begin{array}{c}{D}_{i}\equiv \frac{\partial }{\partial {x}_{i}}-{f}_{i}\left(x\right),\\ \eta \left(x\right)\equiv \sum _{i=1}^{n}{f}_{i}^{2}\left(x\right)+\sum _{i=1}^{n}\frac{\partial {f}_{i}}{\partial {x}_{i}}\left(x\right)+\sum _{i=1}^{m}{h}_{i}^{2}\left(x\right).\end{array}$
(50)

The DMZ equation is to be interpreted in the Stratonovich sense. Note that

$\begin{array}{c}{D}_{i}=\frac{\partial }{\partial {x}_{i}}-{f}_{i}\left(x\right),\end{array}$
(51)
$\begin{array}{ccc}={e}^{{F}_{i}}\frac{\partial }{\partial {x}_{i}}{e}^{-{F}_{i}},& \text{where}& {F}_{i}={F}_{i}\left(x\right)={\int }_{0}^{x}{f}_{i}\left(t\right)dt.\end{array}$
(52)

Hence,

${D}_{i}^{2}={e}^{{F}_{i}}\frac{{\partial }^{2}}{\partial {x}_{i}^{2}}{e}^{-{F}_{i}},$
(53)

and

${\mathcal{L}}_{0}=\frac{1}{2}\left(\sum _{i=1}^{n}{e}^{{F}_{i}}\frac{{\partial }^{2}}{\partial {x}_{i}^{2}}{e}^{-{F}_{i}}-\eta \left(x\right)\right).$
(54)
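The operator identities underlying Equations 51-54 can be checked symbolically; the sample drift f(x) = sin x and the test function g are arbitrary illustrations:

```python
import sympy as sp

# Symbolic check of D = e^F (d/dx) e^{-F} = d/dx - f  (Eqs. 51-52)
# and D^2 = e^F (d^2/dx^2) e^{-F}  (Eq. 53), where F' = f.
x = sp.symbols("x")
g = sp.Function("g")(x)
f = sp.sin(x)                                     # arbitrary sample drift
F = sp.integrate(f, (x, 0, x))                    # F(x) = int_0^x f(t) dt

D_g = sp.exp(F) * sp.diff(sp.exp(-F) * g, x)      # e^F d/dx (e^{-F} g)
assert sp.simplify(D_g - (sp.diff(g, x) - f * g)) == 0

D2_g = sp.exp(F) * sp.diff(sp.exp(-F) * g, x, 2)  # e^F d^2/dx^2 (e^{-F} g)
direct = sp.diff(g, x, 2) - 2 * f * sp.diff(g, x) + (f**2 - sp.diff(f, x)) * g
assert sp.simplify(D2_g - direct) == 0            # (d/dx - f)^2 g expanded
```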

### A.3 The Robust DMZ Partial Differential Equation

Following  and , introduce a new unnormalized density

$u\left(t,x\right)=\mathrm{exp}\left(-\sum _{i=1}^{m}{h}_{i}\left(x\right){y}_{i}\left(t\right)\right)\sigma \left(t,x\right).$
(55)

Under this transformation, the DMZ SDE is transformed into the following time-varying PDE

$\left\{\begin{array}{l}\frac{\partial u}{\partial t}\left(t,x\right)=\frac{1}{2}\sum _{i=1}^{n}\frac{{\partial }^{2}u}{\partial {x}_{i}^{2}}\left(t,x\right)+\sum _{i=1}^{n}\left(-{f}_{i}\left(x\right)+\sum _{j=1}^{m}{y}_{j}\left(t\right)\frac{\partial {h}_{j}}{\partial {x}_{i}}\left(x\right)\right)\frac{\partial u}{\partial {x}_{i}}\left(t,x\right)\hfill \\ -\left(\sum _{i=1}^{n}\frac{\partial {f}_{i}}{\partial {x}_{i}}\left(x\right)+\frac{1}{2}\sum _{i=1}^{m}{h}_{i}^{2}\left(x\right)-\frac{1}{2}\sum _{i=1}^{m}{y}_{i}\left(t\right)\Delta {h}_{i}\left(x\right)+\sum _{i=1}^{m}\sum _{j=1}^{n}{y}_{i}\left(t\right){f}_{j}\left(x\right)\frac{\partial {h}_{i}}{\partial {x}_{j}}\left(x\right)\hfill \\ -\frac{1}{2}\sum _{i,j=1}^{m}\sum _{k=1}^{n}{y}_{i}\left(t\right){y}_{j}\left(t\right)\frac{\partial {h}_{i}}{\partial {x}_{k}}\left(x\right)\frac{\partial {h}_{j}}{\partial {x}_{k}}\left(x\right)\right)u\left(t,x\right),\hfill \\ u\left(0,x\right)={\sigma }_{0}\left(x\right).\hfill \end{array}$
(56)

This is called the robust DMZ equation. Here Δ is the Laplacian. The solution of a PDE when the initial condition is a delta function is called the fundamental solution.

### A.4 The Yau Equation

Recently, it was proved that the real-time solution of the general nonlinear filtering problem can be obtained reliably [13, 26]. Let ${\mathcal{P}}_{k}$ = {τ0 < τ1 < ⋯ < τ k = τ} be a partition of the time interval [τ0, τ], and let the norm of the partition be defined as |${\mathcal{P}}_{k}$| = sup1≤i≤k{|τ i - τi-1|}. If u l (t, x) satisfies the equation

$\left\{\begin{array}{l}\frac{\partial {u}_{l}}{\partial t}\left(t,x\right)=\frac{1}{2}\sum _{i=1}^{n}\frac{{\partial }^{2}{u}_{l}}{\partial {x}_{i}^{2}}\left(t,x\right)+\sum _{i=1}^{n}\left(-{f}_{i}\left(x\right)+\sum _{j=1}^{m}{y}_{j}\left({\tau }_{l}\right)\frac{\partial {h}_{j}}{\partial {x}_{i}}\left(x\right)\right)\frac{\partial {u}_{l}}{\partial {x}_{i}}\left(t,x\right)\hfill \\ -\left(\sum _{i=1}^{n}\frac{\partial {f}_{i}}{\partial {x}_{i}}\left(x\right)+\frac{1}{2}\sum _{i=1}^{m}{h}_{i}^{2}\left(x\right)-\frac{1}{2}\sum _{i=1}^{m}{y}_{i}\left({\tau }_{l}\right)\Delta {h}_{i}\left(x\right)+\sum _{i=1}^{m}\sum _{j=1}^{n}{y}_{i}\left({\tau }_{l}\right){f}_{j}\left(x\right)\frac{\partial {h}_{i}}{\partial {x}_{j}}\left(x\right)\\ -\frac{1}{2}\sum _{i,j=1}^{m}\sum _{k=1}^{n}{y}_{i}\left({\tau }_{l}\right){y}_{j}\left({\tau }_{l}\right)\frac{\partial {h}_{i}}{\partial {x}_{k}}\left(x\right)\frac{\partial {h}_{j}}{\partial {x}_{k}}\left(x\right)\right){u}_{l}\left(t,x\right),\\ {u}_{l}\left({\tau }_{l-1},x\right)={u}_{l-1}\left({\tau }_{l-1},x\right),\hfill \end{array}$
(57)

in the time interval τl-1 tτ l , then the function ${\stackrel{˜}{u}}_{l}$(t, x) defined as

${\stackrel{˜}{u}}_{l}\left(t,x\right)=\mathrm{exp}\left(\sum _{i=1}^{m}{y}_{i}\left({\tau }_{l}\right){h}_{i}\left(x\right)\right){u}_{l}\left(t,x\right)$
(58)

satisfies the parabolic partial differential equation

$\begin{array}{c}\frac{\partial {\stackrel{˜}{u}}_{l}}{\partial t}\left(t,x\right)=\frac{1}{2}\sum _{i=1}^{n}\frac{{\partial }^{2}{\stackrel{˜}{u}}_{l}}{\partial {x}_{i}^{2}}\left(t,x\right)-\sum _{i=1}^{n}{f}_{i}\left(x\right)\frac{\partial {\stackrel{˜}{u}}_{l}}{\partial {x}_{i}}\left(t,x\right)-\left(\sum _{i=1}^{n}\frac{\partial {f}_{i}}{\partial {x}_{i}}\left(x\right)+\frac{1}{2}\sum _{i=1}^{m}{h}_{i}^{2}\left(x\right)\right){\stackrel{˜}{u}}_{l}\left(t,x\right),\\ =-\sum _{i=1}^{n}\frac{\partial }{\partial {x}_{i}}\left[{f}_{i}\left(x\right){\stackrel{˜}{u}}_{l}\left(t,x\right)\right]+\frac{1}{2}\sum _{i=1}^{n}\frac{{\partial }^{2}{\stackrel{˜}{u}}_{l}}{\partial {x}_{i}^{2}}\left(t,x\right)-\frac{1}{2}\sum _{i=1}^{m}{h}_{i}^{2}\left(x\right){\stackrel{˜}{u}}_{l}\left(t,x\right),\end{array}$
(59)

in the same time interval. The converse of the statement is also true. In , it was also noted that it is sufficient to use the previous observation, i.e., u l (t, x) satisfies Equation 57 if and only if ${\stackrel{˜}{u}}_{l}$(t, x) defined as

${\stackrel{˜}{u}}_{l}\left(t,x\right)=\mathrm{exp}\left(\sum _{i=1}^{m}{y}_{i}\left({\tau }_{l-1}\right){h}_{i}\left(x\right)\right){u}_{l}\left(t,x\right)$
(60)

satisfies Equation 59 in the time interval τl-1 tτ l . We refer to Equations 58 (60) and Equation 59 as the post-measurement (pre-measurement) forms of the YYe.

Observe that Equation 57 is obtained by setting y(t) to y(τ l ) in Equation 56. It was proved that the solution of Equation 57 approximates the solution of the robust DMZ equation (Equation 56) very well, i.e., it converges to u(t, x) both pointwise and in the L2 sense. Thus, solving Equation 56 is equivalent to solving Equation 59. Finally,

$\sigma \left(\tau ,x\right)=\underset{|{\mathcal{P}}_{k}|\to 0}{\mathrm{lim}}{\stackrel{˜}{u}}_{k}\left({\tau }_{k},x\right).$
(61)

Thus, the solution of the YYe (as |${\mathcal{P}}_{k}$| → 0) is the desired unnormalized conditional probability density.

Observe that when h(x) = 0, the YYe is simply the FPKfe. However, unlike the FPKfe, the YYe does not satisfy the current conservation condition, i.e., the right-hand side is not a total divergence. This means that it does not conserve probability. This fundamental difference can be traced to the fact that the FPKfe evolves the normalized probability density (and preserves the normalization), while the YYe evolves the unnormalized conditional probability density. Therefore, this distinction is made between the two equations in this paper.

### A.5 The Yau Algorithm

We may summarize Yau's real-time algorithm, based on both the pre- and post-measurement forms of the YYe, as follows. Suppose measurements are available at times

τ0 < τ1 < τ2 < ⋯ < τ k = τ.
(62)

We seek the solution u i (t, x), which is the solution of the robust DMZ equation. Let the initial distribution be u(τ0, x) = σ0 (x). Then, evolve the initial distribution to the first measurement instant, τ1, using the YYe:

$\left\{\begin{array}{c}\frac{\partial {\stackrel{˜}{u}}_{1}}{\partial t}\left(t,x\right)=\frac{1}{2}\sum _{i=1}^{n}\frac{{\partial }^{2}{\stackrel{˜}{u}}_{1}}{\partial {x}_{i}^{2}}\left(t,x\right)-\sum _{i=1}^{n}{f}_{i}\left(x\right)\frac{\partial {\stackrel{˜}{u}}_{1}}{\partial {x}_{i}}\left(t,x\right)-\left(\sum _{i=1}^{n}\frac{\partial {f}_{i}}{\partial {x}_{i}}\left(x\right)+\frac{1}{2}\sum _{i=1}^{m}{h}_{i}^{2}\left(x\right)\right){\stackrel{˜}{u}}_{1}\left(t,x\right),\\ {\stackrel{˜}{u}}_{1}\left({\tau }_{0},x\right)={\sigma }_{0}\left(x\right)\left\{\begin{array}{ll}\mathrm{exp}\left({\sum }_{j=1}^{m}\left[{y}_{j}\left({\tau }_{0}\right)-{y}_{j}\left({\tau }_{-1}\right)\right]{h}_{j}\left(x\right)\right)\hfill & \left(\text{Pre-Measurement}\right)\hfill \\ \mathrm{exp}\left({\sum }_{j=1}^{m}\left[{y}_{j}\left({\tau }_{1}\right)-{y}_{j}\left({\tau }_{0}\right)\right]{h}_{j}\left(x\right)\right)\hfill & \left(\text{Post-Measurement}\right).\hfill \end{array}\end{array}$
(63)

The solution of Equation 63 at time τ1 is ${\stackrel{˜}{u}}_{1}$(τ1, x). Note that u1 (τ1, x) is given by

${u}_{1}\left({\tau }_{1},x\right)={\stackrel{˜}{u}}_{1}\left({\tau }_{1},x\right)\left\{\begin{array}{ll}\mathrm{exp}\left(-{\sum }_{i=1}^{m}{y}_{i}\left({\tau }_{0}\right){h}_{i}\left(x\right)\right)\hfill & \left(\text{Pre-Measurement}\right)\hfill \\ \mathrm{exp}\left(-{\sum }_{i=1}^{m}{y}_{i}\left({\tau }_{1}\right){h}_{i}\left(x\right)\right)\hfill & \left(\text{Post-Measurement}\right).\hfill \end{array}$
(64)

Next, solve the YYe to the next measurement instant τ2 with initial condition ${\stackrel{˜}{u}}_{2}$(τ1, x), i.e.,

$\left\{\begin{array}{c}\frac{\partial {\stackrel{˜}{u}}_{2}}{\partial t}\left(t,x\right)=\frac{1}{2}\sum _{i=1}^{n}\frac{{\partial }^{2}{\stackrel{˜}{u}}_{2}}{\partial {x}_{i}^{2}}\left(t,x\right)-\sum _{i=1}^{n}{f}_{i}\left(x\right)\frac{\partial {\stackrel{˜}{u}}_{2}}{\partial {x}_{i}}\left(t,x\right)-\left(\sum _{i=1}^{n}\frac{\partial {f}_{i}}{\partial {x}_{i}}\left(x\right)+\frac{1}{2}\sum _{i=1}^{m}{h}_{i}^{2}\left(x\right)\right){\stackrel{˜}{u}}_{2}\left(t,x\right),\\ {\stackrel{˜}{u}}_{2}\left({\tau }_{1},x\right)={\stackrel{˜}{u}}_{1}\left({\tau }_{1},x\right)\left\{\begin{array}{ll}\mathrm{exp}\left({\sum }_{j=1}^{m}\left({y}_{j}\left({\tau }_{1}\right)-{y}_{j}\left({\tau }_{0}\right)\right){h}_{j}\left(x\right)\right)\hfill & \left(\text{Pre-Measurement}\right)\hfill \\ \mathrm{exp}\left({\sum }_{j=1}^{m}\left({y}_{j}\left({\tau }_{2}\right)-{y}_{j}\left({\tau }_{1}\right)\right){h}_{j}\left(x\right)\right)\hfill & \left(\text{Post-Measurement}\right),\hfill \end{array}\end{array}$
(65)

to obtain ${\stackrel{˜}{u}}_{2}$(τ2, x). In fact, for i ≥ 2, u i (τ i , x) can be computed from ${\stackrel{˜}{u}}_{i}$(τ i , x), where ${\stackrel{˜}{u}}_{i}$(t, x) satisfies the equation

$\left\{\begin{array}{c}\frac{\partial {\stackrel{˜}{u}}_{i}}{\partial t}\left(t,x\right)=\frac{1}{2}\sum _{j=1}^{n}\frac{{\partial }^{2}{\stackrel{˜}{u}}_{i}}{\partial {x}_{j}^{2}}\left(t,x\right)-\sum _{j=1}^{n}{f}_{j}\left(x\right)\frac{\partial {\stackrel{˜}{u}}_{i}}{\partial {x}_{j}}\left(t,x\right)-\left(\sum _{j=1}^{n}\frac{\partial {f}_{j}}{\partial {x}_{j}}\left(x\right)+\frac{1}{2}\sum _{j=1}^{m}{h}_{j}^{2}\left(x\right)\right){\stackrel{˜}{u}}_{i}\left(t,x\right),\\ {\stackrel{˜}{u}}_{i}\left({\tau }_{i-1},x\right)={\stackrel{˜}{u}}_{i-1}\left({\tau }_{i-1},x\right)\left\{\begin{array}{ll}\mathrm{exp}\left({\sum }_{j=1}^{m}\left({y}_{j}\left({\tau }_{i-1}\right)-{y}_{j}\left({\tau }_{i-2}\right)\right){h}_{j}\left(x\right)\right)\hfill & \left(\text{Pre-Measurement}\right)\hfill \\ \mathrm{exp}\left({\sum }_{j=1}^{m}\left({y}_{j}\left({\tau }_{i}\right)-{y}_{j}\left({\tau }_{i-1}\right)\right){h}_{j}\left(x\right)\right)\hfill & \left(\text{Post-Measurement}\right).\hfill \end{array}\end{array}$
(66)

The initial condition in Equation 66 follows from noting that (since u i (τi-1, x) = ui-1 (τi-1, x))

$\begin{array}{c}{\stackrel{\sim }{u}}_{i}\left({\tau }_{i-1},x\right)={u}_{i}\left({\tau }_{i-1},x\right)\mathrm{exp}\left(\sum _{j=1}^{m}{y}_{j}\left({\tau }_{i}\right){h}_{j}\left(x\right)\right),\\ =\mathrm{exp}\left(-\sum _{j=1}^{m}{y}_{j}\left({\tau }_{i-1}\right){h}_{j}\left(x\right)\right){\stackrel{\sim }{u}}_{i-1}\left({\tau }_{i-1},x\right)\mathrm{exp}\left(\sum _{j=1}^{m}{y}_{j}\left({\tau }_{i}\right){h}_{j}\left(x\right)\right),\\ =\mathrm{exp}\left(\sum _{j=1}^{m}\left[{y}_{j}\left({\tau }_{i}\right)-{y}_{j}\left({\tau }_{i-1}\right)\right]{h}_{j}\left(x\right)\right){\stackrel{\sim }{u}}_{i-1}\left({\tau }_{i-1},x\right).\end{array}$