$$ \newcommand{\fk}[1]{\mathfrak{#1}}\newcommand{\wh}[1]{\widehat{#1}} \newcommand{\br}[1]{\left\langle#1\right\rangle} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\qp}[1]{\left(#1\right)}\newcommand{\qb}[1]{\left[#1\right]} \renewcommand{\Im}{\mathbf{Im}}\newcommand{\I}{\mathbf{Id}}\newcommand{\Id}{\mathbf{I}}\renewcommand{\ker}{\mathbf{ker}}\newcommand{\supp}[1]{\mathbf{supp}(#1)}\renewcommand{\tr}[1]{\mathrm{tr}\left(#1\right)} \renewcommand{\norm}[1]{\left\lVert #1 \right\rVert}\renewcommand{\abs}[1]{\left| #1 \right|} \newcommand{\lb}{\left\{} \newcommand{\rb}{\right\}}\newcommand{\zl}{\left[}\newcommand{\zr}{\right]}\newcommand{\U}{_}\renewcommand{\star}{*} \newcommand{\A}{\mathbb{A}}\newcommand{\C}{\mathbb{C}}\newcommand{\E}{\mathbb{E}}\newcommand{\F}{\mathbb{F}}\newcommand{\II}{\mathbb{I}}\newcommand{\K}{\mathbb{K}}\newcommand{\LL}{\mathbb{L}}\newcommand{\M}{\mathbb{M}}\newcommand{\N}{\mathbb{N}}\newcommand{\PP}{\mathbb{P}}\newcommand{\Q}{\mathbb{Q}}\newcommand{\R}{\mathbb{R}}\newcommand{\T}{\mathbb{T}}\newcommand{\W}{\mathbb{W}}\newcommand{\Z}{\mathbb{Z}} \newcommand{\Aa}{\mathcal{A}}\newcommand{\Bb}{\mathcal{B}}\newcommand{\Cc}{\mathcal{C}}\newcommand{\Dd}{\mathcal{D}}\newcommand{\Ee}{\mathcal{E}}\newcommand{\Ff}{\mathcal{F}}\newcommand{\Gg}{\mathcal{G}}\newcommand{\Hh}{\mathcal{H}}\newcommand{\Kk}{\mathcal{K}}\newcommand{\Ll}{\mathcal{L}}\newcommand{\Mm}{\mathcal{M}}\newcommand{\Nn}{\mathcal{N}}\newcommand{\Pp}{\mathcal{P}}\newcommand{\Qq}{\mathcal{Q}}\newcommand{\Rr}{\mathcal{R}}\newcommand{\Ss}{\mathcal{S}}\newcommand{\Tt}{\mathcal{T}}\newcommand{\Uu}{\mathcal{U}}\newcommand{\Ww}{\mathcal{W}}\newcommand{\XX}{\mathcal {X}}\newcommand{\Zz}{\mathcal{Z}} \renewcommand{\d}{\,\mathrm{d}} \newcommand\restr[2]{\left.#1\right|_{#2}} $$

Study of densities part 2

Part 6 of the series on Malliavin calculus

By L.Llamazares-Elias

Three line summary

  • Solutions to SDEs of the form $dX=b(X) d t +\sigma (X)dW$ are Malliavin differentiable if $b,\sigma \in C^1\U b({\mathbb R})$.

  • Their Malliavin differential $DX$ can be written as a stochastic integral.

  • This yields an SDE that is linear in $DX$ and can be solved exactly to obtain an explicit expression for $DX$.

Notation

The same as in the other posts of this series. In particular, we recall the notation $\mathbb{L}^2(I\times\Omega)$ for the set of progressively measurable square integrable stochastic processes. Furthermore, given a stochastic process $X$ such that $X(t)\in \mathbb{D}^{1,2}$ for each $t\in I$ we write $D\U rX(t)$ for the Malliavin differential at time $r\in I$ of $X(t)$. That is, if

$$\label{ce} X(t)=\sum\U {n=0}^{\infty} I\U n(f\U n(\cdot ,t)),\quad f\U n(\cdot ,t)\in L^2(S\U n).$$

is the chaos expansion of $X(t)$ for each $t$, then we have that

$$\label{ced} D\U rX(t)=\sum\U {n=0}^{\infty} nI\U {n-1}(f\U n(\cdot ,r,t)) , \quad\forall r,t\in I.$$
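For example, Brownian motion itself satisfies $W(t)=I\U 1(1\U {[0,t]})$, so that $f\U 1(t\U 1,t)=1\U {[0,t]}(t\U 1)$ and the formula above gives $D\U rW(t)=1\U {[0,t]}(r)$, a fact we will use repeatedly below.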

Introduction

As anticipated in the summary, we will be working with an SDE of the form

$$\label{SDE} dX=b(X) d t +\sigma (X)dW .$$

It is a classical result of the theory of SDEs that, if $b$ and $\sigma$ are Lipschitz continuous, then the above equation has a unique solution for each initial data $X\U 0\in L^2(\Omega)$. That is, there exists a unique continuous adapted process $X\in \mathbb{L}^2(I\times\Omega)$ such that

$$\label{X expression} X(t)=X(0)+\int\U {0}^t b(X(s)) ds+\int\U {0}^t \sigma(X(s)) dW(s).$$

Our goal will be to obtain an explicit expression for the derivative of $X$. We will do so by directly differentiating the expression above. To this end, we will need two lemmas that tell us how to differentiate each of the above integrals. The first of these is as follows.

Lemma 1. If $X\in \mathbb{L}^2(I\times\Omega)$ is such that $X(t)$ is Malliavin differentiable for almost all $t$, then $\int\U {0}^t X(s) dW(s)$ is Malliavin differentiable and we have that

$$D\U r \int\U {0}^t X(s) dW(s)=X(r)+\int\U {r}^t D\U rX(s) dW(s), \quad\forall r\leq t.$$

Proof. Suppose that $D\U rX$ is progressively measurable. Then, for $r\leq t$, using the previously studied divergence property and the fact that the Skorohod integral is an extension of the Itô integral gives

$$\begin{aligned} D\U r \int\U {0}^t X(s) dW(s) & =D\U r(\delta (X1\U {[0,t]})) =X(r)1\U {[0,t]}(r)+\delta (D\U rX1\U {[0,t]}) \\&=X(r)+\int\U {0}^t D\U rX(s) dW(s). \end{aligned}$$

We consider the chaos expansion of $X$. Then, as was seen previously, we have that

$$f\U n(t\U 1,\ldots,t\U n,t)=0,\quad\forall t\leq\max\U {i=1,\ldots,n} t\U i .$$

So, writing the chaos expansion for $D\U rX(s)$ gives

$$D\U rX(s)=\sum\U {n=0}^{\infty} nI\U {n-1}(f\U n(\cdot ,r,s))=0, \quad\forall r>s.$$

Substituting in the first equation we derived shows that

$$D\U r \int\U {0}^t X(s) dW(s)=X(r)+\int\U {r}^t D\U rX(s) dW(s).$$

As a result, it only remains to show that $D\U r X$ is progressively measurable. This follows from how the Malliavin derivative interacts with conditional expectations, which we haven’t covered, so we refer the reader to [1], page 34. ◻
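As a quick check of Lemma 1, take $X(s)=W(s)$. Since $\int\U {0}^t W(s) dW(s)=\frac{1}{2}(W(t)^2-t)$, the chain rule gives directly that its Malliavin derivative at $r\leq t$ is $W(t)$. The formula of the lemma agrees, as

$$D\U r \int\U {0}^t W(s) dW(s)=W(r)+\int\U {r}^t D\U rW(s) dW(s)=W(r)+\int\U {r}^t dW(s)=W(t).$$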

Our second lemma shows how to differentiate deterministic integrals. In this case, we need a stronger condition than $D\U rX(t)$ existing for each fixed $t$.

Lemma 2. Let $X$ be a process such that $X(s)\in \mathbb{D}^{1,2}$ for each $s\in I$, with

$$\int\U {I} \norm{D\U rX}\U {L^2(I\times\Omega)}^2dr<\infty .$$

Then, given $h\in L^2(I)$ it holds that

$$D\U t\left\langle X,h\right\rangle\U {L^2(I)}=\left\langle D\U tX,h\right\rangle\U {L^2(I)}.$$

Proof. We will apply Fubini’s theorem. We have that

$$\left\langle D\U rX,h\right\rangle\U {L^2(I)}=\int\U I\sum\U {n=0}^{\infty}nI\U {n-1}(f\U n(\cdot ,r,s))h(s)ds=\sum\U {n=0}^{\infty}nI\U {n-1}\left(\int\U If\U n(\cdot ,r,s)h(s)ds\right).$$

Where both Fubini and the commutation of the sum and the integrals are justified by the condition of the lemma, which guarantees that the last sum has finite $L^2(I\times\Omega)$ norm as

$$\begin{gathered} \int\U {I} \norm{\sum\U {n=0}^{\infty}nI\U {n-1}\left(\int\U If\U n(\cdot ,r,s)h(s)ds\right)}\U {L^2(\Omega)}^2d r \\ \leq \norm{h}^2\U {L^2(I)}\int\U {I}\sum\U {n=0}^{\infty}n^2 \left(\int\U I\norm{I\U {n-1}(f\U n(\cdot ,r,s))}^2\U {L^2(\Omega)}d s\right)d r \\ =\norm{h}^2\U {L^2(I)}\int\U {I}\norm{D\U rX}^2\U {L^2(I\times\Omega)}d r<\infty . \end{gathered}$$

Where in the first inequality we applied Fubini, Cauchy–Schwarz, and the triangle inequality, and in the equality we used our earlier calculation of the norm of the Malliavin derivative, which here reads

$$\int\U I\norm{D\U rX}\U {L^2(I\times\Omega)}^2d r=\sum\U {n=0}^{\infty} n\,n!\norm{f\U n}\U {L^2(I^{n+1})}^2<\infty.$$

The result now follows by noting that, by the linearity of the iterated integrals, the functions

$$\int\U If\U n(\cdot ,s)h(s)ds$$

are the kernels of the chaos expansion of $\left\langle X,h\right\rangle\U {L^2(I)}$. ◻

In particular, by setting $h=1\U {[0,t]}$, this shows that

$$D\U r \int\U {0}^t X(s)ds=\int\U {0}^t D\U rX(s) ds.$$
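For example, taking $X(s)=W(s)$, so that $D\U rW(s)=1\U {[0,s]}(r)$, the right-hand side gives, for $r\leq t$,

$$D\U r \int\U {0}^t W(s)ds=\int\U {0}^t 1\U {[0,s]}(r) ds=t-r,$$

which matches what we obtain by writing $\int\U {0}^t W(s)ds=\int\U {0}^t (t-s)dW(s)$ and differentiating this first chaos term directly.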

That is, we can commute the derivative with deterministic integrals. The previous two lemmas together with the chain rule show that, if we take $X\U 0\in {\mathbb R}$ and the solution to our SDE verifies all necessary conditions, then

$$D\U rX(t)=\sigma(X\U r)+\int\U {r}^tb'(X(s))D\U rX(s) ds+\int\U {r}^t \sigma'(X(s))D\U rX(s) dW(s) .$$

Proposition 1. Given our SDE with $\sigma ,b\in C^1\U b({\mathbb R})$, there exists a unique solution $X$, which is Malliavin differentiable, and for all $r\leq t$ we have

$$\begin{gathered} D\U rX(t)=\sigma(X\U r)+\int\U {r}^tb'(X(s))D\U rX(s) ds+\int\U {r}^t \sigma'(X(s))D\U rX(s) dW(s) . \end{gathered}$$

Proof. The proof is quite technical and we merely sketch it; the full details can be found in [2], page 120. By the previous discussion, it is only necessary to show that $X$ verifies the conditions of the previous lemmas, i.e. is Malliavin differentiable and its derivative verifies that

$$\int\U I\norm{D\U rX}\U {L^2(I\times\Omega)}^2 d r<\infty.$$

This is proved by a Picard iteration

$$X\U {n+1}(t)=x\U 0+\int\U {0}^t b(X\U n(s)) ds+\int\U {0}^t\sigma (X\U n(s)) dW(s).$$

The aim is to prove that the $X\U n$ are Malliavin differentiable with

$$\norm{D\U rX\U n}\U {L^2(I\times\Omega)}^2<\infty , \quad\forall r\in I, \quad\forall n\in \mathbb{N}.$$

For the case $n=1$ this is clear, as we have that

$$D\U rX\U 1(t)=D\U r[x\U 0+b(x\U 0)t+\sigma (x\U 0) W(t)]=\sigma(x\U 0)1\U {[0,t]}(r).$$

For the general case, the condition of Lemma 1 is a consequence of the induction hypothesis on $X\U n$ and the chain rule. Verifying the conditions of Lemma 2 (and in fact stronger bounds on the supremum of $X$) can be done using the Burkholder–Davis–Gundy inequality. Once that is done, one can prove through a discrete version of Gronwall’s inequality that the $D\U rX\U n$ are bounded uniformly in $n$. Since we know from the classical theory of SDEs that $X\U n\to X$ in $L^2(I\times\Omega)$, this is sufficient to show that

$$\lim\U {n \to \infty}D\U rX\U n=D\U rX.$$

This completes the proof. ◻
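To make Proposition 1 concrete, here is a minimal numerical sketch (not taken from the post or its references): it runs an Euler–Maruyama discretization of the SDE for $X$ along a single Brownian path, together with the linear equation for $Y(s)=D\U rX(s)$ given by the proposition, started at $Y(r)=\sigma(X(r))$. The coefficients b and sigma below are illustrative choices with bounded derivatives.

```python
# Minimal sketch (illustrative, not from the post): Euler-Maruyama for
#   dX = b(X) dt + sigma(X) dW
# together with the linear SDE of Proposition 1 for Y(s) = D_r X(s),
#   dY = b'(X) Y ds + sigma'(X) Y dW,   Y(r) = sigma(X(r)).
import numpy as np

rng = np.random.default_rng(0)

# Illustrative coefficients with bounded derivatives.
b = np.sin
b_prime = np.cos
sigma = lambda x: 2.0 + np.cos(x)
sigma_prime = lambda x: -np.sin(x)

T, N = 1.0, 10_000       # time horizon and number of steps
dt = T / N
x0, r = 1.0, 0.3         # deterministic initial datum and the Malliavin time r

X, Y, t = x0, None, 0.0  # Y stays None (i.e. D_r X(s) = 0) while s < r
for _ in range(N):
    dW = rng.normal(0.0, np.sqrt(dt))
    if Y is not None:
        # Euler step for the linear equation satisfied by D_r X
        Y += b_prime(X) * Y * dt + sigma_prime(X) * Y * dW
    # Euler step for X itself, driven by the same Brownian increment
    X += b(X) * dt + sigma(X) * dW
    t += dt
    if Y is None and t >= r:
        Y = sigma(X)     # initial condition D_r X(r) = sigma(X(r))

print(f"X(T) ~ {X:.4f},  D_r X(T) ~ {Y:.4f}")
```

Since the equation for $Y$ is linear and driven by the same Brownian increments as $X$, the two can be advanced jointly in a single pass.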

We now show how to obtain an explicit expression for $D\U rX$ by using that the equation verified by $D\U rX$ is linear (in $D\U rX$ as opposed to $X$). Doing so uses a generalized version of Itô’s formula for stochastic coefficients.

Theorem 1. Let $b,\sigma \in C^1\U b({\mathbb R})$ and let $X$ verify the SDE

$$dX(t)=b(X(t))d t +\sigma(X(t)) dW(t).$$

Then $X(t)$ is Malliavin differentiable and, for all $r\leq t$, we have

$$D\U r X\U t=\sigma\left(X\U r\right) \exp \left(\int\U r^t\left(b'\left(X\U s\right)-\frac{1}{2}\left(\sigma^{\prime}\right)^2\left(X\U s\right)\right) d s+\int\U r^t \sigma^{\prime}\left(X\U s\right) d W(s)\right).$$

Proof. Let us fix any $r\leq t$ and set

$$Y\U r(s):=D\U rX(s);\quad u(s):=b'(X(s));\quad v(s):=\sigma'(X(s)).$$

Then, since $b',\sigma'$ are bounded, we have that $u\in \mathbb{L}^1([0,t]\times\Omega)$, $v\in \mathbb{L}^2([0,t]\times\Omega)$, and for each fixed $r\in I$ it holds that

$$Y\U r(s)=Y\U r(r)+\int\U {r}^s u(\tau)Y\U r(\tau) d\tau+\int\U {r}^s v(\tau)Y\U r(\tau) dW(\tau) , \quad r\leq s\leq t.$$

Where, by Proposition 1, $Y\U r(r)=\sigma (X\U r)$. Symbolically, we have the family of linear SDEs starting at time $r$

$$dY\U r(s)=u(s)Y\U r(s) ds+v(s)Y\U r(s) dW(s);\quad Y\U r(r)=\sigma(X(r)).$$

Consider

$$Z(t):=\int\U r^t\left(u(s)-\frac{1}{2}v^2(s)\right) d s+\int\U r^t v(s) d W(s)$$

Which solves the differential equation

$$dZ(s)=\left(u(s)-\frac{1}{2}v^2(s)\right)d s+v(s)dW(s) .$$

Applying Itô to $g(z):=e^z$ gives

$$d g(Z)=g'(Z)dZ+\frac{1}{2}g''(Z)v^2d s= e^Z\left[\left(u-\frac{1}{2}v^2+\frac{1}{2}v^2\right)d s+v dW\right]= g(Z)(u d s +v dW).$$

By the uniqueness of solutions it follows that $Y\U r=Y\U r(r)g(Z)$, as both sides verify the same SDE with the same initial data (note that, since $Z(r)=0$, the process $Y\U r(r)g(Z)$ has the same stochastic differential as $g(Z)$ but takes the initial data $Y\U r(r)$). This proves the result. ◻
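As an illustration, take geometric Brownian motion, $b(x)=\mu x$ and $\sigma(x)=\alpha x$ with $\mu,\alpha\in {\mathbb R}$ (linear coefficients are not bounded, but they are Lipschitz with constant derivatives, and the formula below is easily checked directly). The solution is $X\U t=X\U 0\exp((\mu-\frac{\alpha^2}{2})t+\alpha W(t))$, and Theorem 1 gives, for $r\leq t$,

$$D\U r X\U t=\alpha X\U r \exp \left(\int\U r^t\left(\mu-\frac{\alpha^2}{2}\right) d s+\int\U r^t \alpha\, d W(s)\right)=\alpha X\U r\frac{X\U t}{X\U r}=\alpha X\U t,$$

which is exactly what the chain rule gives when applied to the explicit expression for $X\U t$.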

We end this post by noting that Proposition 1 has a multidimensional generalization, which can also be found in Nualart’s book [2], page 119.
