Description
I came across this repo while trying to understand in detail how the generation of samples from a Brownian motion works in Python.
A question arose when I took a look at the Brownian-motion-with-Python.ipynb notebook. The gen_normal() method of the Brownian class is defined as follows:
```python
def gen_normal(self, n_step=100):
    """
    Generate motion by drawing from the Normal distribution

    Arguments:
        n_step: Number of steps

    Returns:
        A NumPy array with `n_step` points
    """
    if n_step < 30:
        print("WARNING! The number of steps is small. It may not generate a good stochastic process sequence!")

    w = np.ones(n_step) * self.x0

    for i in range(1, n_step):
        # Sampling from the Normal distribution
        yi = np.random.normal()
        # Wiener process
        w[i] = w[i - 1] + (yi / np.sqrt(n_step))

    return w
```
Why are the samples `yi` from the normal distribution scaled by the factor `1/np.sqrt(n_step)`? I am wondering because other examples that sample from a geometric Brownian motion define the increments of the Wiener process as `np.random.normal(0, np.sqrt(dt), size=(len(sigma), n)).T`, so the scaling refers to the size of the time step `dt` (see the example given on Wikipedia). In this code, the scaling instead appears to be defined by the total number of time steps.
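To make the comparison concrete, here is a small self-contained sketch (my own, not from the notebook) of the relationship I am asking about. It assumes a unit time horizon `T = 1`, which the notebook does not state explicitly; under that assumption `dt = T / n_step = 1 / n_step`, and the two scalings coincide:

```python
import numpy as np

# Assumption (not stated in the notebook): the process runs on [0, T] with T = 1,
# so the step size is dt = T / n_step = 1 / n_step.
T = 1.0
n_step = 1000
dt = T / n_step

rng = np.random.default_rng(seed=42)

# Convention from the notebook: standard normal draws scaled by 1/sqrt(n_step).
z = rng.standard_normal(n_step)
w_notebook = np.cumsum(z / np.sqrt(n_step))

# Convention from the GBM examples: increments drawn with standard deviation sqrt(dt).
# With T = 1 these give the same numbers, since sqrt(dt) == 1/sqrt(n_step).
w_gbm_style = np.cumsum(z * np.sqrt(dt))

print(np.allclose(w_notebook, w_gbm_style))  # True when T == 1
```

If `T` differed from 1, the two scalings would no longer agree, so my question is essentially whether the notebook implicitly assumes a unit time horizon.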