Stochastic Calculus Notes 3: Brownian Motion

This note on stochastic calculus covers Brownian motion and its basic properties.

1  Random Walks

1.1  Symmetric random walk

1.1.1  Definition

Let \omega=\omega_1\omega_2\omega_3\cdots be the successive outcomes of a fair coin-toss experiment (\mathbb{P}(H)=\mathbb{P}(T)=\frac{1}{2}). Define the random variable

    \[X_j=\begin{cases}1 & \text{if }\omega_j=H\\ -1 & \text{if }\omega_j=T\end{cases}\]

and define M_0=0. Then we can construct the following process M_k,\,k=0,1,2,\cdots, which is called a symmetric random walk:

    \[M_k=\sum^k_{j=1}X_j,\,k=1,2,\cdots\]

With each toss, this process either steps up one unit or down one unit, and each of the two possibilities is equally likely.
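
As a quick illustration (not from the original note), here is a minimal R sketch that simulates one path of a symmetric random walk; the sample size and seed are arbitrary choices.

    # Simulate one path of a symmetric random walk M_k = X_1 + ... + X_k,
    # where each increment X_j is +1 or -1 with probability 1/2.
    set.seed(1)                               # arbitrary seed, for reproducibility
    n <- 1000                                 # number of coin tosses (arbitrary)
    x <- sample(c(-1, 1), n, replace = TRUE)  # the increments X_j
    m <- c(0, cumsum(x))                      # M_0 = 0, then the partial sums M_k
    plot(0:n, m, type = "l", xlab = "k", ylab = "M_k")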


Stochastic Calculus Notes 2: Information and Conditioning

This note on stochastic calculus covers the mathematical description of information and conditioning.

1  Information

1.1  Background

In dynamic hedging, the position in the underlying security at each future time is contingent on how the uncertainty between the present time and that future time is resolved. In order to make contingency plans, we need a way to mathematically model the information on which future decisions can be based.

A discrete example:

  • We imagine some random experiment is performed. We denote \Omega as the set of all possible outcomes, and denote \omega as one particular outcome.
  • Suppose we are given some information. Such information is not enough to tell the exact value of \omega, but is enough for us to narrow down the possibilities. Specifically, based on such given information, we can construct a list of sets of outcomes, called the sets resolved by the information, within which we know what sets are sure to contain \omega and what other sets are sure not to contain it.
  • At each time, the list of sets resolved by the given information forms a σ-algebra, so we obtain a sequence of σ-algebras, \mathcal{F}_0,\mathcal{F}_1,\mathcal{F}_2,\cdots, indexed by time. As time moves forward, we obtain finer resolution, meaning that \mathcal{F}_{t_2} contains more information than \mathcal{F}_{t_1} for any t_1<t_2.
  • The above collection of σ-algebras, \mathcal{F}_0,\mathcal{F}_1,\mathcal{F}_2,\cdots, is an example of a filtration; a standard two-toss illustration is sketched below.
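
For concreteness, a standard two-toss illustration (not from the original note): with \Omega=\{HH,HT,TH,TT\},

    \[\mathcal{F}_0=\{\emptyset,\Omega\},\quad \mathcal{F}_1=\{\emptyset,\Omega,\{HH,HT\},\{TH,TT\}\},\quad \mathcal{F}_2=\text{the collection of all subsets of }\Omega.\]

Before any toss nothing is resolved; after the first toss we know whether \omega begins with H or T; after the second toss every outcome is resolved. Each σ-algebra contains the previous one, which is exactly the filtration property.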

Below is the definition of a filtration in the continuous-time sense.

Stochastic Calculus Notes 1: General Probability Theory

This note on stochastic calculus covers general probability theory.

1  Infinite probability spaces

1.1  Definition: σ-algebra

Let \Omega be a non-empty set, and let \mathcal{F} be a collection of subsets of \Omega. We say \mathcal{F} is a \sigma-algebra (or \sigma-field) if:

  1. The empty set \emptyset\in\mathcal{F}.
  2. If A\in\mathcal{F}, then A^c\in\mathcal{F}.
  3. If A_1,A_2,\cdots\in\mathcal{F}, then \bigcup_{n=1}^\infty A_n\in\mathcal{F}.

Note:

  • If we have a \sigma-algebra of sets, then all countable set operations (complements, countable unions, and, by De Morgan's laws, countable intersections) give sets that are still in the \sigma-algebra.
  • The whole space \Omega\in\mathcal{F}\, since \Omega=\emptyset^c.
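
A minimal worked example (standard, not from the note): for any subset A\subseteq\Omega, the collection

    \[\mathcal{F}=\{\emptyset,\Omega,A,A^c\}\]

is a \sigma-algebra: it contains \emptyset, it is closed under complements by construction, and any countable union of its members is again one of \emptyset, A, A^c, or \Omega (for instance A\cup A^c=\Omega). It is the smallest \sigma-algebra containing A.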


R in Time Series: Linear Regression With Harmonic Seasonality

This tutorial talks about linear regression with harmonic seasonality.

1  Underlying mathematics

In regression modeling with seasonality, we can use one parameter for each season: for instance, 12 parameters for the 12 months of the year. However, seasonal effects often vary smoothly over the seasons, so it may be more parameter-efficient to use a smooth function instead of separate indices. Sine and cosine functions can be used to build smooth variation into a seasonal model.
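
A minimal R sketch of the idea (the simulated series and variable names are illustrative assumptions, not code from the tutorial): for monthly data, harmonic terms enter an lm fit as sine and cosine regressors.

    # Harmonic seasonal regression: replace 12 separate monthly parameters
    # with a few smooth sine/cosine terms at the seasonal frequency.
    set.seed(42)
    t <- 1:120                                             # 10 years of monthly data
    y <- 0.05 * t + 2 * sin(2 * pi * t / 12) + rnorm(120)  # simulated series
    SIN <- sin(2 * pi * t / 12)                            # first harmonic, sine part
    COS <- cos(2 * pi * t / 12)                            # first harmonic, cosine part
    fit <- lm(y ~ t + SIN + COS)                           # trend plus one harmonic pair
    summary(fit)

Higher harmonics, sin(2\pi it/12) and cos(2\pi it/12) for i=2,\cdots,6, can be added and tested in the same way.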

R in Time Series: Linear Regression with Seasonal Variables

This tutorial gives a short introduction about linear regression with seasonal variables.

A time series is a sequence of observations measured sequentially in time. Seasonal effects are often present in such data, especially annual cycles caused directly or indirectly by the Earth’s movement around the sun. Here we present a linear regression model with additive seasonal indicator variables included; a short R sketch follows the list below.

Suppose a time series contains s seasons. For example:

  • For time series measured over each calendar month, s = 12.
  • For time series measured in six-month intervals, corresponding to summer and winter, s = 2.
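
A minimal R sketch (the simulated data and names are illustrative assumptions, not code from the tutorial): coding the season as a factor makes lm estimate one parameter per season.

    # Additive seasonal indicator variables: one parameter per season.
    set.seed(7)
    t <- 1:120                               # 10 years of monthly data (s = 12)
    season <- factor(rep(1:12, times = 10))  # the seasonal indicator variable
    y <- 0.02 * t + 3 * cos(2 * pi * t / 12) + rnorm(120)  # simulated series
    fit <- lm(y ~ 0 + t + season)            # "0 +" drops the intercept so every
                                             # season gets its own parameter
    summary(fit)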


Autocorrelation Affects Regression on Time Series

This post talks about how autocorrelation affects regressions on time series.

Time series regression usually differs from a standard regression analysis because the residuals form a time series and therefore tend to be serially correlated.

  • When the residual correlation is positive, the estimated standard deviations of the parameter estimates, read from the computer output of a standard regression analysis, will tend to be less than their true values. \(\,\Longrightarrow\,\) This leads to erroneously high statistical significance being attributed to statistical tests in standard computer output. In other words, the obtained p-values will be smaller than they should be.
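
A small simulation sketch of this effect (the AR(1) coefficient and sample size are arbitrary assumptions): fit a trend by lm when the errors are positively autocorrelated, and compare the reported standard error with the true sampling spread of the slope estimate.

    # Positively autocorrelated residuals make lm's standard errors too small.
    set.seed(123)
    n <- 100
    slope_hat <- replicate(1000, {
      e <- as.numeric(arima.sim(list(ar = 0.8), n = n))  # AR(1) errors
      y <- 0.1 * (1:n) + e
      coef(lm(y ~ I(1:n)))[2]                            # fitted slope
    })
    sd(slope_hat)        # true sampling spread of the slope estimate
    y <- 0.1 * (1:n) + as.numeric(arima.sim(list(ar = 0.8), n = n))
    summary(lm(y ~ I(1:n)))$coefficients[2, 2]  # lm's reported SE, typically smaller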


R in Time Series: White Noise and Random Walk

This tutorial introduces white noise and random walk.

1  White Noise

1.1  Motivation

When we fit mathematical models to time series data, if the model has captured most of the deterministic features of the series, the residual error series should appear to be a realization of independent random variables from some probability distribution. Because of this criterion for judging how well a model fits the given data, it seems natural to build models up from a model of independent random variation, known as discrete white noise.
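
A minimal R sketch (standard usage, not code from the tutorial): Gaussian white noise can be simulated with rnorm, and its correlogram checked with acf.

    # Discrete (Gaussian) white noise: independent N(0, 1) draws.
    set.seed(2)
    w <- rnorm(200)   # one realization of white noise
    acf(w)            # sample autocorrelations should be near zero at all
                      # nonzero lags, up to sampling error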


Mathematics Underlying Principal Component Analysis

This tutorial covers the mathematics underlying principal component analysis, including the definition of PCs, how to find them, and their derivation.

1  Introduction to Principal Component Analysis

1.1  Main idea of Principal Component Analysis

The central idea of principal component analysis (PCA) is to reduce the dimensionality of a data set consisting of a large number of interrelated variables, while retaining as much as possible of the variation present in the data set.

This is achieved by transforming the original set of variables into a new set of variables, the principal components (PCs), which are uncorrelated, and which are ordered so that the first few retain most of the variation present in all the original variables.
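
As a minimal R sketch of this transformation (the built-in USArrests data set is an illustrative choice, not from the tutorial):

    # PCA: transform correlated variables into uncorrelated principal
    # components, ordered by the variance each one explains.
    pc <- prcomp(USArrests, scale. = TRUE)  # standardize, then rotate
    summary(pc)            # proportion of variance retained by each PC
    round(cor(pc$x), 3)    # the PC scores are (numerically) uncorrelated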