MC 6: Study of densities part 2

Part 6 of the series on Malliavin calculus.

Author: Liam Llamazares–Elias

1 Three line summary

  • Solutions to SDEs of the form \(dX=b(X) d t +\sigma (X)dW\) are Malliavin differentiable if \(b,\sigma \in C^1(\R)\).

  • Their Malliavin differential \(DX\) can be written as a stochastic integral.

  • This yields an SDE that is linear in \(DX\) and can be solved exactly, giving an explicit expression for \(DX\).

2 Notation

The same as in the other posts of this series. In particular, we recall the notation \(\mathbb{L}^2(I\times\Omega)\) for the set of progressively measurable square integrable stochastic processes. Furthermore, given a stochastic process \(X\) such that \(X(t)\in \mathbb{D}^{1,2}\) for each \(t\in I\) we write \(D_rX(t)\) for the Malliavin differential at time \(r\in I\) of \(X(t)\). That is, if

\[ X(t)=\sum_{n=0}^{\infty} I_n(f_n(\cdot ,t)),\quad f_n(\cdot ,t)\in L^2(S_n). \tag{1}\]

is the chaos expansion of \(X(t)\) for each \(t\) then we have that

\[ D_rX(t)=\sum_{n=0}^{\infty} nI_{n-1}(f_n(\cdot ,r,t)) , \quad\forall r,t\in I. \tag{2}\]
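For example, for \(X(t)=W(t)\) the chaos expansion has the single term \(f_1(t_1,t)=1_{[0,t]}(t_1)\), and (2) recovers the familiar derivative of Brownian motion:

\[ D_rW(t)=I_0(f_1(r,t))=1_{[0,t]}(r), \quad\forall r,t\in I. \]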

3 Introduction

As anticipated in the summary, we will be working with an SDE of the form

\[ dX=b(X) d t +\sigma (X)dW . \tag{3}\]

It is a classical result of the theory of SDEs that, if \(b\) and \(\sigma\) are Lipschitz continuous, then the above equation has a unique solution for each initial data \(X_0\in L^2(\Omega)\). That is, there exists a unique continuous adapted process \(X\in \mathbb{L}^2(I\times\Omega)\) such that

\[ X(t)=X(0)+\int_{0}^t b(X(s)) ds+\int_{0}^t \sigma(X(s)) dW(s). \tag{4}\]
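Although this series is purely analytical, equation (4) is easy to approximate numerically. The following is a minimal sketch of the Euler–Maruyama scheme (our own illustration, not part of the original exposition), with the coefficients \(b,\sigma\) supplied by the user:

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T=1.0, n_steps=1000, rng=None):
    """Approximate X(t) = X(0) + int_0^t b(X) ds + int_0^t sigma(X) dW.

    Returns the time grid and one simulated path on [0, T].
    """
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # increment W(t_{k+1}) - W(t_k)
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dW
    return t, x
```

Taking \(\sigma\equiv 0\) reduces the scheme to the explicit Euler method for the ODE \(\dot{X}=b(X)\), which provides a quick sanity check of the implementation.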

Our goal will be to obtain an explicit expression for the derivative of \(X\). We will do so by directly differentiating the expression above. To this end, we need two lemmas that tell us how to differentiate each of the integrals above. The first is as follows.

Lemma 1 Let \(X\in \mathbb{L}^2(I\times\Omega)\) be such that \(X(s)\) is Malliavin differentiable for almost all \(s\in I\). Then \(\int_{0}^t X(s) dW(s)\) is Malliavin differentiable and we have that

\[ D_r \int_{0}^t X(s) dW(s)=X(r)+\int_{r}^t D_r X(s) dW(s), \quad\forall r\leq t. \]

Proof. Suppose first that \(D_rX\) is progressively measurable. Then, using the previously studied divergence property and the fact that the Skorokhod integral is an extension of the Itô integral gives

\[ \begin{aligned} D_r \int_{0}^t X(s) dW(s) & =D_r(\delta X1_{[0,t]}) =X(r)1_{[0,t]}(r)+\delta (D_rX1_{[0,t]}) \\&=X(r)+\int_{0}^t D_r(X(s)) dW(s). \end{aligned} \]

Now consider the chaos expansion of \(X\). Since \(X\) is adapted, we saw previously that

\[ f_n(t_1,\ldots,t_n,t)=0,\quad\forall t<\max_{i=1,\ldots,n} t_i . \]

So, writing the chaos expansion for \(D_rX(s)\) gives

\[ D_rX(s)=\sum_{n=0}^{\infty} nI_{n-1}(f_n(\cdot ,r,s))=0, \quad\forall r>s. \]

Substituting in the first equation we derived shows that

\[ D_r \int_{0}^t X(s) dW(s)=X(r)+\int_{r}^t D_rX(s) dW(s). \]

As a result, we only need to show that \(D_r X\) is progressively measurable for all \(r<t\). This follows by some knowledge of how the Malliavin differential works with conditional expectations. We haven’t covered this so we refer the reader to (Nunno, Øksendal, and Proske 2008) page 34.
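As a sanity check of Lemma 1, take \(X=W\), so that \(D_rX(s)=1_{[0,s]}(r)\). The lemma gives, for \(r\leq t\),

\[ D_r\int_0^t W(s)\, dW(s)=W(r)+\int_r^t 1\, dW(s)=W(r)+W(t)-W(r)=W(t), \]

which agrees with applying the chain rule to the identity \(\int_0^t W(s)\, dW(s)=\frac{1}{2}(W(t)^2-t)\), as \(D_r\left[\frac{1}{2}(W(t)^2-t)\right]=W(t)D_rW(t)=W(t)\) for \(r\leq t\).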

Our second lemma shows how to differentiate deterministic integrals. In this case, we need a stronger condition than \(D_rX(t)\) existing for each fixed \(t\).

Lemma 2 Let \(X(s)\in \mathbb{D}^{1,2}\) be Malliavin differentiable for each \(s\in I\) with

\[ \int_{I} \norm{D_rX}_{L^2(I\times\Omega)}^2dr<\infty. \]

Then, given \(h\in L^2(I)\) it holds that

\[ D_r\br{X,h}_{L^2(I)}=\br{D_rX,h}_{L^2(I)}. \]

Proof. We will apply Fubini's theorem. We have that

\[ \br{D_rX,h}_{L^2(I)}=\int_I\sum_{n=0}^{\infty}nI_{n-1}(f_n(\cdot ,r,s))h(s)ds=\sum_{n=0}^{\infty}nI_{n-1}\left(\int_If_n(\cdot ,r,s)h(s)ds\right), \]

where both Fubini and the interchange of the sum and the integral are justified by the condition of the lemma, which guarantees that the last sum has finite \(L^2(I\times\Omega)\) norm, as

\[ \begin{aligned} &\int_{I} \norm{\sum_{n=0}^{\infty}nI_{n-1}\left(\int_If_n(\cdot ,r,s)h(s)ds\right)}_{L^2(\Omega)}^2d r \\ &\leq \norm{h}^2_{L^2(I)}\int_{I}\sum_{n=0}^{\infty}n^2 \int_{I}\norm{I_{n-1}f_n(\cdot ,r,s)}^2_{L^2(\Omega)}ds\, d r\\ &=\norm{h}^2_{L^2(I)}\int_{I}\norm{D_rX}^2_{L^2(I\times\Omega)}d r<\infty . \end{aligned} \]

Here, in the first inequality we applied Fubini, Cauchy–Schwarz, and the triangle inequality, and in the equality we used our earlier calculation of the norm of the Malliavin derivative

\[ \norm{D_rX}_{L^2(I\times\Omega)}^2=\sum_{n=0}^{\infty} n\cdot n!\|f_n(\cdot ,r )\|^2_{L^2(I^{n})}<\infty. \]

The result now follows by noting that, by the linearity of the iterated integrals, the functions

\[ \int_If_n(\cdot ,s)h(s)ds \]

form the chaos expansion of \(\br{X,h}_{L^2(I)}\).

In particular, by setting \(h=1_{[0,t]}\), this shows that

\[ D_r \int_{0}^t X(s)ds=\int_{0}^t D_rX(s) ds. \]

That is, we can commute the derivative with deterministic integrals. The previous two lemmas together with the chain rule show that, if we take \(X_0\in \R\), and the solution to our SDE verifies all necessary conditions, then

\[ D_rX(t)=\sigma(X_r)+\int_{r}^tb'(X(s))D_rX(s) ds+\int_{r}^t \sigma'(X(s))D_rX(s) dW(s) . \]

Proposition 1 Consider our SDE (3) with \(\sigma ,b\in C^1_b(\R)\). Then there exists a unique solution \(X\), it is Malliavin differentiable, and for all \(r\leq t\) we have

\[ \begin{aligned} D_rX(t)=\sigma(X_r)+\int_{r}^tb'(X(s))D_rX(s) ds+\int_{r}^t \sigma'(X(s))D_rX(s) dW(s) . \end{aligned} \]

Proof. The proof is quite technical and we merely sketch it; the full details can be found in (Nunno, Øksendal, and Proske 2008), page 120. By the previous discussion, it only remains to show that \(X\) verifies the conditions of the lemmas, i.e. it is Malliavin differentiable and its derivative verifies

\[ \int_{I}\norm{D_rX}^2_{L^2(I\times\Omega)} d r<\infty. \]

This is proved via the Picard iteration

\[ X_{n+1}(t)=x_0+\int_{0}^t b(X_n(s)) ds+\int_{0}^t\sigma (X_n(s)) dW(s). \]

The aim is to prove that \(X_n\) are differentiable with

\[ \norm{D_rX_n}_{L^2(I\times\Omega)}^2<\infty , \quad\forall r\in I, \quad\forall n\in \N. \]

For the case \(n=0\) this is clear, as we have that

\[ D_rX_1(t)=D_r[x_0+b(x_0)t+\sigma (x_0) W(t)]=\sigma(x_0)1_{[0,t]}(r). \]

For the general case, the condition of Lemma 1 is a consequence of the induction hypothesis on \(X_n\) and the chain rule. Verifying the conditions of Lemma 2 (and in fact stronger bounds on the supremum of \(X\)) can be done using the Burkholder–Davis–Gundy inequality. Once that is done, one can prove through a discrete version of Gronwall’s inequality that the \(D_rX_n\) are bounded uniformly in \(n\). Since we know by the classical theory of SDEs that \(X_n\to X\) in \(L^2(I\times\Omega)\), this is sufficient to show that
\[ \lim_{n \to \infty}D_rX_n=D_rX. \]

This completes the proof.
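The Picard scheme above can also be watched converge numerically. The sketch below is our own illustration (the left-point discretization and the example coefficients are assumptions, not taken from the references): it iterates the map \(X_{n+1}=x_0+\int_0^t b(X_n)ds+\int_0^t \sigma(X_n)dW\) against a fixed Brownian path and records the successive uniform distances between iterates:

```python
import numpy as np

def picard_iterates(b, sigma, x0, dW, dt, n_iter=8):
    """Discretized Picard iteration for dX = b(X) dt + sigma(X) dW.

    `dW` is a fixed array of Brownian increments on a uniform grid.
    Starts from the constant path X_0 = x0 and returns the final
    iterate together with the sup-distances between successive iterates.
    """
    n = len(dW)
    x = np.full(n + 1, float(x0))  # X_0: the constant path
    gaps = []
    for _ in range(n_iter):
        # Left-point (Ito) Riemann sums of b(X_n) dt and sigma(X_n) dW
        drift = np.concatenate(([0.0], np.cumsum(b(x[:-1]) * dt)))
        noise = np.concatenate(([0.0], np.cumsum(sigma(x[:-1]) * dW)))
        x_next = x0 + drift + noise
        gaps.append(float(np.max(np.abs(x_next - x))))
        x = x_next
    return x, gaps
```

For Lipschitz coefficients the recorded gaps decay like \(Ct^n/n!\), which mirrors the contraction estimate behind the classical existence proof.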

We now show how to obtain an explicit expression for \(D_rX\) by using that the equation verified by \(D_rX\) is linear (in \(D_rX\) as opposed to \(X\)). Doing so uses a generalized version of Itô’s formula for stochastic coefficients.

Theorem 1 Let \(b,\sigma \in C^1_b(\R)\) and let \(X\) verify the SDE

\[ dX(t)=b(X(t))d t +\sigma(X(t)) dW(t). \]

Then \(X(t)\) is Malliavin differentiable on \([0,t]\) with

\[ D_r X_t=\sigma\left(X_r\right) \exp \left(\int_r^t\left(b'\left(X_s\right)-\frac{1}{2}\left(\sigma^{\prime}\right)^2\left(X_s\right)\right) d s+\int_r^t \sigma^{\prime}\left(X_s\right) d W(s)\right). \]

Proof. Let us fix any \(r\leq t\) and set

\[ Y_r(s):=D_rX(s);\quad u(s):=b'(X(s));\quad v(s):=\sigma'(X(s)). \]

Then, since \(b',\sigma '\) are bounded, we have \(u\in \mathbb{L}^1([0,t]\times\Omega)\), \(v\in \mathbb{L}^2([0,t]\times\Omega)\), and for each fixed \(r\leq t\) it holds that

\[ Y_r(s)=Y_r(r)+\int_{r}^s u(\tau)Y_r(\tau) d\tau+\int_{r}^s v(\tau)Y_r(\tau) dW(\tau), \]

where we define \(Y_r(r):=\sigma (X_r)\). Symbolically, we have the family of linear SDEs starting at time \(r\)

\[ dY_r(s)=u(s)Y_r(s) ds+v(s)Y_r(s) dW(s);\quad Y_r(r)=\sigma(X(r)). \]

Consider

\[ Z(s) := \int_r^s\left(u(\tau)-\frac{1}{2}v^2(\tau)\right) d \tau + \int_r^s v(\tau) d W(\tau), \]

which solves the differential equation

\[ dZ=\left(u-\frac{1}{2}v^2\right)d s+v\, dW(s) . \]

Applying Itô’s formula to \(g(z):=e^z\) gives

\[ dg(Z) = g'(Z)dZ + \frac{1}{2}g''(Z)v^2 ds = e^Z\left[\left(u-\frac{1}{2}v^2+\frac{1}{2}v^2\right)ds + v\, dW(s)\right] = g(Z)(u\, ds + v\, dW(s)). \]

Setting \(Y_r(s)=Y_r(r)g(Z(s))\) proves the result by the uniqueness of solutions, as both sides verify the same SDE (note that \(Y_r(r)g(Z)\) has the same stochastic differential as \(g(Z)\) but takes the initial data \(Y_r(r)\)).
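As a concrete consistency check of Theorem 1 (our own, not from the references): for geometric Brownian motion, \(b(x)=\mu x\) and \(\sigma(x)=\lambda x\), so \(b'\equiv\mu\), \(\sigma'\equiv\lambda\), and the explicit solution is \(X_t=x_0\exp((\mu-\lambda^2/2)t+\lambda W_t)\). Substituting into the formula of Theorem 1 collapses it to \(D_rX_t=\lambda X_t\) for all \(r\leq t\). (Strictly speaking these coefficients are unbounded, so GBM sits just outside the stated hypotheses, but the identity is classical.) The snippet below verifies the algebra on a given path:

```python
import numpy as np

def gbm_malliavin_check(mu, lam, x0, r, t, w_r, w_t):
    """Compare Theorem 1's formula for D_r X_t with lam * X_t for GBM.

    X_s = x0 * exp((mu - lam^2/2) s + lam W_s) solves
    dX = mu X dt + lam X dW, so b'(x) = mu and sigma'(x) = lam.
    """
    X = lambda s, w: x0 * np.exp((mu - 0.5 * lam**2) * s + lam * w)
    # The explicit formula of Theorem 1 with b' = mu, sigma' = lam
    d_formula = lam * X(r, w_r) * np.exp(
        (mu - 0.5 * lam**2) * (t - r) + lam * (w_t - w_r)
    )
    return d_formula, lam * X(t, w_t)
```

The two returned values agree exactly, reflecting the cancellation \(X_r\,e^{(\mu-\lambda^2/2)(t-r)+\lambda(W_t-W_r)}=X_t\).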

We end this post by noting that Proposition 1 has a multidimensional generalization which can also be found in Nualart’s book (Nualart and Nualart 2018), on page 119.

4 References

Nualart, David, and Eulalia Nualart. 2018. Introduction to Malliavin Calculus. Vol. 9. Cambridge University Press. https://books.google.co.uk/books?id=l_1uDwAAQBAJ.
Nunno, Giulia Di, Bernt Øksendal, and Frank Proske. 2008. Malliavin Calculus for Lévy Processes with Applications to Finance. Springer. https://link.springer.com/book/10.1007/978-3-540-78572-9.
