Easy Way to Calculate Inverse Gaussian in C

Family of continuous probability distributions

Inverse Gaussian

[Figure: probability density function]

[Figure: cumulative distribution function]

Notation: $\operatorname{IG}(\mu,\lambda)$
Parameters: $\mu>0$ (mean), $\lambda>0$ (shape)
Support: $x\in(0,\infty)$
PDF: $\sqrt{\frac{\lambda}{2\pi x^{3}}}\exp\left[-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right]$
CDF: $\Phi\left(\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}-1\right)\right)+\exp\left(\frac{2\lambda}{\mu}\right)\Phi\left(-\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}+1\right)\right)$, where $\Phi$ is the standard normal (standard Gaussian) c.d.f.
Mean: $\operatorname{E}[X]=\mu$, $\operatorname{E}[1/X]=\frac{1}{\mu}+\frac{1}{\lambda}$
Mode: $\mu\left[\left(1+\frac{9\mu^{2}}{4\lambda^{2}}\right)^{1/2}-\frac{3\mu}{2\lambda}\right]$
Variance: $\operatorname{Var}[X]=\frac{\mu^{3}}{\lambda}$, $\operatorname{Var}[1/X]=\frac{1}{\mu\lambda}+\frac{2}{\lambda^{2}}$
Skewness: $3\left(\frac{\mu}{\lambda}\right)^{1/2}$
Excess kurtosis: $\frac{15\mu}{\lambda}$
MGF: $\exp\left[\frac{\lambda}{\mu}\left(1-\sqrt{1-\frac{2\mu^{2}t}{\lambda}}\right)\right]$
CF: $\exp\left[\frac{\lambda}{\mu}\left(1-\sqrt{1-\frac{2\mu^{2}\mathrm{i}t}{\lambda}}\right)\right]$

In probability theory, the inverse Gaussian distribution (also known as the Wald distribution) is a two-parameter family of continuous probability distributions with support on (0,∞).

Its probability density function is given by

$$f(x;\mu,\lambda)=\sqrt{\frac{\lambda}{2\pi x^{3}}}\exp\left(-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right)$$

for $x>0$, where $\mu>0$ is the mean and $\lambda>0$ is the shape parameter.[1]
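Because the density is elementary, it is easy to evaluate in C, in keeping with this article's title. Below is a minimal sketch (the function name is an illustrative choice; working on the log scale guards against overflow in $x^{3}$ but is not required by the formula):

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* pdf of IG(mu, lambda) at x, evaluated via its logarithm so that
       sqrt(lambda / (2*pi*x^3)) cannot overflow or underflow prematurely. */
    double inverse_gaussian_pdf(double x, double mu, double lambda)
    {
        if (x <= 0.0)
            return 0.0; /* the support is (0, infinity) */
        double log_pdf = 0.5 * (log(lambda) - log(2.0 * M_PI) - 3.0 * log(x))
                       - lambda * (x - mu) * (x - mu) / (2.0 * mu * mu * x);
        return exp(log_pdf);
    }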

The inverse Gaussian distribution has several properties analogous to a Gaussian distribution. The name can be misleading: it is an "inverse" only in that, while the Gaussian describes a Brownian motion's level at a fixed time, the inverse Gaussian describes the distribution of the time a Brownian motion with positive drift takes to reach a fixed positive level.

Its cumulant generating function (logarithm of the characteristic function) is the inverse of the cumulant generating function of a Gaussian random variable.

To indicate that a random variable $X$ is inverse Gaussian-distributed with mean $\mu$ and shape parameter $\lambda$, we write $X\sim\operatorname{IG}(\mu,\lambda)$.

Properties [edit]

Single parameter form [edit]

The probability density function (pdf) of the inverse Gaussian distribution has a single parameter form given by

$$f(x;\mu,\mu^{2})=\frac{\mu}{\sqrt{2\pi x^{3}}}\exp\left(-\frac{(x-\mu)^{2}}{2x}\right).$$

In this form, the mean and variance of the distribution are equal, $\operatorname{E}[X]=\operatorname{Var}(X)$.

Also, the cumulative distribution function (cdf) of the single parameter inverse Gaussian distribution is related to the standard normal distribution by

$$\begin{aligned}\Pr(X<x)&=\Phi(-z_{1})+e^{2\mu}\Phi(-z_{2}),&&\text{for}\quad 0<x\leq\mu,\\\Pr(X>x)&=\Phi(z_{1})-e^{2\mu}\Phi(-z_{2}),&&\text{for}\quad x\geq\mu,\end{aligned}$$

where $z_{1}=\frac{\mu}{x^{1/2}}-x^{1/2}$, $z_{2}=\frac{\mu}{x^{1/2}}+x^{1/2}$, and $\Phi$ is the cdf of the standard normal distribution. The variables $z_{1}$ and $z_{2}$ are related to each other by the identity $z_{2}^{2}=z_{1}^{2}+4\mu$.
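For numerical work, the two-parameter cdf quoted in the summary box above can be transcribed directly into C, with $\Phi$ computed from C99's erfc. A minimal sketch (function names are illustrative; as noted under Numeric computation below, this direct form can lose accuracy for extreme parameter values, since exp(2*lambda/mu) may overflow while the Φ factor underflows):

    #include <math.h>

    /* Standard normal cdf Phi(z), via the C99 complementary error function. */
    static double std_normal_cdf(double z)
    {
        return 0.5 * erfc(-z / sqrt(2.0));
    }

    /* cdf of IG(mu, lambda):
       Phi(sqrt(lambda/x)(x/mu - 1)) + exp(2*lambda/mu) * Phi(-sqrt(lambda/x)(x/mu + 1)). */
    double inverse_gaussian_cdf(double x, double mu, double lambda)
    {
        if (x <= 0.0)
            return 0.0;
        double s = sqrt(lambda / x);
        return std_normal_cdf(s * (x / mu - 1.0))
             + exp(2.0 * lambda / mu) * std_normal_cdf(-s * (x / mu + 1.0));
    }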

In the single parameter form, the MGF simplifies to

$$M(t)=\exp\left[\mu\left(1-\sqrt{1-2t}\right)\right].$$

An inverse Gaussian distribution in double parameter form $f(x;\mu,\lambda)$ can be transformed into a single parameter form $f(y;\mu_{0},\mu_{0}^{2})$ by the scaling $y=\frac{\lambda x}{\mu^{2}}$, where $\mu_{0}=\lambda/\mu$.
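This is a one-line consequence of the scaling property stated below: multiplying $X\sim\operatorname{IG}(\mu,\lambda)$ by $t=\lambda/\mu^{2}$ gives

$$\frac{\lambda}{\mu^{2}}X\sim\operatorname{IG}\left(\frac{\lambda}{\mu},\frac{\lambda^{2}}{\mu^{2}}\right)=\operatorname{IG}(\mu_{0},\mu_{0}^{2}),\qquad\mu_{0}=\frac{\lambda}{\mu}.$$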

The standard form of inverse Gaussian distribution is

$$f(x;1,1)=\frac{1}{\sqrt{2\pi x^{3}}}\exp\left(-\frac{(x-1)^{2}}{2x}\right).$$

Summation [edit]

If $X_{i}$ has an $\operatorname{IG}(\mu_{0}w_{i},\lambda_{0}w_{i}^{2})$ distribution for $i=1,2,\ldots,n$ and all $X_{i}$ are independent, then

$$S=\sum_{i=1}^{n}X_{i}\sim\operatorname{IG}\left(\mu_{0}\sum w_{i},\lambda_{0}\left(\sum w_{i}\right)^{2}\right).$$

Note that

$$\frac{\operatorname{Var}(X_{i})}{\operatorname{E}(X_{i})}=\frac{\mu_{0}^{2}w_{i}^{2}}{\lambda_{0}w_{i}^{2}}=\frac{\mu_{0}^{2}}{\lambda_{0}}$$

is constant for all $i$. This is a necessary condition for the summation; otherwise $S$ would not be inverse Gaussian distributed.

Scaling [edit]

For any t > 0 it holds that

$$X\sim\operatorname{IG}(\mu,\lambda)\quad\Rightarrow\quad tX\sim\operatorname{IG}(t\mu,t\lambda).$$

Exponential family [edit]

The inverse Gaussian distribution is a two-parameter exponential family with natural parameters $-\lambda/(2\mu^{2})$ and $-\lambda/2$, and natural statistics $X$ and $1/X$.
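Expanding the square in the exponent of the density makes the exponential family form explicit (the log-partition function $A$ below is worked out here, not quoted from a source):

$$f(x;\mu,\lambda)=\frac{1}{\sqrt{2\pi x^{3}}}\exp\left[\eta_{1}x+\eta_{2}\frac{1}{x}-A(\eta_{1},\eta_{2})\right],\qquad\eta_{1}=-\frac{\lambda}{2\mu^{2}},\quad\eta_{2}=-\frac{\lambda}{2},$$

where $A(\eta_{1},\eta_{2})=-2\sqrt{\eta_{1}\eta_{2}}-\tfrac{1}{2}\log(-2\eta_{2})$, using $-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}=-\frac{\lambda}{2\mu^{2}}x-\frac{\lambda}{2}\frac{1}{x}+\frac{\lambda}{\mu}$ and $\frac{\lambda}{\mu}=2\sqrt{\eta_{1}\eta_{2}}$.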

Relationship with Brownian motion [edit]

Let the stochastic process $X_{t}$ be given by

$$X_{0}=0,\qquad X_{t}=\nu t+\sigma W_{t}$$

where $W_{t}$ is a standard Brownian motion. That is, $X_{t}$ is a Brownian motion with drift $\nu>0$.

Then the first passage time for a fixed level $\alpha>0$ by $X_{t}$ is distributed according to an inverse Gaussian:

$$T_{\alpha}=\inf\{t>0\mid X_{t}=\alpha\}\sim\operatorname{IG}\left(\frac{\alpha}{\nu},\left(\frac{\alpha}{\sigma}\right)^{2}\right),\qquad\text{with density}\quad\frac{\alpha}{\sigma\sqrt{2\pi x^{3}}}\exp\left(-\frac{(\alpha-\nu x)^{2}}{2\sigma^{2}x}\right),$$

i.e.

$$P(T_{\alpha}\in(T,T+dT))=\frac{\alpha}{\sigma\sqrt{2\pi T^{3}}}\exp\left(-\frac{(\alpha-\nu T)^{2}}{2\sigma^{2}T}\right)dT$$

(cf. Schrödinger[2] equation 19, Smoluchowski[3], equation 8, and Folks[4], equation 1).
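This connection is easy to check empirically in C. The sketch below simulates first passage times of a drifting Brownian motion on a fixed time grid and compares the sample mean with the theoretical IG mean $\alpha/\nu$ (all names, the Box–Muller helper, and the step size are illustrative assumptions; the Euler discretization slightly overestimates passage times):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Standard normal variate via the Box-Muller transform. */
    static double rand_normal(void)
    {
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
    }

    /* Simulate X_t = nu*t + sigma*W_t with step dt until it first reaches alpha. */
    static double first_passage_time(double nu, double sigma, double alpha, double dt)
    {
        double x = 0.0, t = 0.0;
        while (x < alpha) {
            x += nu * dt + sigma * sqrt(dt) * rand_normal();
            t += dt;
        }
        return t;
    }

    int main(void)
    {
        const double nu = 1.0, sigma = 0.5, alpha = 2.0, dt = 1e-3;
        const int n = 10000;
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += first_passage_time(nu, sigma, alpha, dt);
        /* Theory: T_alpha ~ IG(alpha/nu, (alpha/sigma)^2), so E[T_alpha] = alpha/nu = 2. */
        printf("sample mean = %.4f, IG mean = %.4f\n", sum / n, alpha / nu);
        return 0;
    }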

Derivation of the first passage time distribution

Suppose that we have a Brownian motion $X_{t}$ with drift $\nu$ defined by:

$$X_{t}=\nu t+\sigma W_{t},\quad X(0)=x_{0}$$

And suppose that we wish to find the probability density function for the time when the process first hits some barrier $\alpha>x_{0}$, known as the first passage time. The Fokker–Planck equation describing the evolution of the probability distribution $p(t,x)$ is:

$$\frac{\partial p}{\partial t}+\nu\frac{\partial p}{\partial x}=\frac{1}{2}\sigma^{2}\frac{\partial^{2}p}{\partial x^{2}},\qquad\begin{cases}p(0,x)=\delta(x-x_{0})\\p(t,\alpha)=0\end{cases}$$

where $\delta(\cdot)$ is the Dirac delta function. This is a boundary value problem (BVP) with a single absorbing boundary condition $p(t,\alpha)=0$, which may be solved using the method of images. Based on the initial condition, the fundamental solution to the Fokker–Planck equation, denoted by $\varphi(t,x)$, is:

$$\varphi(t,x)=\frac{1}{\sqrt{2\pi\sigma^{2}t}}\exp\left[-\frac{(x-x_{0}-\nu t)^{2}}{2\sigma^{2}t}\right]$$

Define a point $m$ such that $m>\alpha$. This will allow the original and mirror solutions to cancel out exactly at the barrier at each instant in time. This implies that the initial condition should be augmented to become:

$$p(0,x)=\delta(x-x_{0})-A\,\delta(x-m)$$

where $A$ is a constant. Due to the linearity of the BVP, the solution to the Fokker–Planck equation with this initial condition is:

$$p(t,x)=\frac{1}{\sqrt{2\pi\sigma^{2}t}}\left\{\exp\left[-\frac{(x-x_{0}-\nu t)^{2}}{2\sigma^{2}t}\right]-A\exp\left[-\frac{(x-m-\nu t)^{2}}{2\sigma^{2}t}\right]\right\}$$

Now we must determine the value of $A$. The fully absorbing boundary condition implies that:

$$(\alpha-x_{0}-\nu t)^{2}=-2\sigma^{2}t\log A+(\alpha-m-\nu t)^{2}$$

Evaluating at $t=0$, this gives $(\alpha-x_{0})^{2}=(\alpha-m)^{2}\implies m=2\alpha-x_{0}$. Substituting this back into the above equation, we find that:

$$A=e^{2\nu(\alpha-x_{0})/\sigma^{2}}$$

Therefore, the full solution to the BVP is:

$$p(t,x)=\frac{1}{\sqrt{2\pi\sigma^{2}t}}\left\{\exp\left[-\frac{(x-x_{0}-\nu t)^{2}}{2\sigma^{2}t}\right]-e^{2\nu(\alpha-x_{0})/\sigma^{2}}\exp\left[-\frac{(x+x_{0}-2\alpha-\nu t)^{2}}{2\sigma^{2}t}\right]\right\}$$

Now that we have the full probability density function, we are ready to find the first passage time distribution $f(t)$. The simplest route is to first compute the survival function $S(t)$, which is defined as:

$$\begin{aligned}S(t)&=\int_{-\infty}^{\alpha}p(t,x)\,dx\\&=\Phi\left(\frac{\alpha-x_{0}-\nu t}{\sigma\sqrt{t}}\right)-e^{2\nu(\alpha-x_{0})/\sigma^{2}}\Phi\left(\frac{-\alpha+x_{0}-\nu t}{\sigma\sqrt{t}}\right)\end{aligned}$$

where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. The survival function gives the probability that the Brownian motion process has not crossed the barrier $\alpha$ by time $t$. Finally, the first passage time distribution $f(t)$ is obtained from the identity:

$$\begin{aligned}f(t)&=-\frac{dS}{dt}\\&=\frac{\alpha-x_{0}}{\sqrt{2\pi\sigma^{2}t^{3}}}\,e^{-(\alpha-x_{0}-\nu t)^{2}/2\sigma^{2}t}\end{aligned}$$
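The cancellation in this differentiation rests on one identity (an intermediate step filled in here, with $\varphi$ denoting the standard normal pdf): the squared arguments of the two $\Phi$ terms differ by exactly $4\nu(\alpha-x_{0})/\sigma^{2}$, so

$$e^{2\nu(\alpha-x_{0})/\sigma^{2}}\,\varphi\left(\frac{-\alpha+x_{0}-\nu t}{\sigma\sqrt{t}}\right)=\varphi\left(\frac{\alpha-x_{0}-\nu t}{\sigma\sqrt{t}}\right),$$

and the two derivative terms combine into the single Gaussian factor above times $(\alpha-x_{0})/(\sigma t^{3/2})$.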

Assuming that $x_{0}=0$, the first passage time follows an inverse Gaussian distribution:

$$f(t)=\frac{\alpha}{\sqrt{2\pi\sigma^{2}t^{3}}}\,e^{-(\alpha-\nu t)^{2}/2\sigma^{2}t}\sim\operatorname{IG}\left[\frac{\alpha}{\nu},\left(\frac{\alpha}{\sigma}\right)^{2}\right]$$

When drift is zero [edit]

A common special case of the above arises when the Brownian motion has no drift. In that case, parameter μ tends to infinity, and the first passage time for fixed level α has probability density function

$$f\left(x;0,\left(\frac{\alpha}{\sigma}\right)^{2}\right)=\frac{\alpha}{\sigma\sqrt{2\pi x^{3}}}\exp\left(-\frac{\alpha^{2}}{2\sigma^{2}x}\right)$$

(see also Bachelier[5]: 74 [6]: 39). This is a Lévy distribution with parameters $c=\left(\frac{\alpha}{\sigma}\right)^{2}$ and $\mu=0$.

Maximum likelihood [edit]

The model where

$$X_{i}\sim\operatorname{IG}(\mu,\lambda w_{i}),\qquad i=1,2,\ldots,n$$

with all $w_{i}$ known, $(\mu,\lambda)$ unknown and all $X_{i}$ independent has the following likelihood function:

$$L(\mu,\lambda)=\left(\frac{\lambda}{2\pi}\right)^{\frac{n}{2}}\left(\prod_{i=1}^{n}\frac{w_{i}}{X_{i}^{3}}\right)^{\frac{1}{2}}\exp\left(\frac{\lambda}{\mu}\sum_{i=1}^{n}w_{i}-\frac{\lambda}{2\mu^{2}}\sum_{i=1}^{n}w_{i}X_{i}-\frac{\lambda}{2}\sum_{i=1}^{n}\frac{w_{i}}{X_{i}}\right).$$

Solving the likelihood equation yields the following maximum likelihood estimates

$$\widehat{\mu}=\frac{\sum_{i=1}^{n}w_{i}X_{i}}{\sum_{i=1}^{n}w_{i}},\qquad\frac{1}{\widehat{\lambda}}=\frac{1}{n}\sum_{i=1}^{n}w_{i}\left(\frac{1}{X_{i}}-\frac{1}{\widehat{\mu}}\right).$$

$\widehat{\mu}$ and $\widehat{\lambda}$ are independent and

$$\widehat{\mu}\sim\operatorname{IG}\left(\mu,\lambda\sum_{i=1}^{n}w_{i}\right),\qquad\frac{n}{\widehat{\lambda}}\sim\frac{1}{\lambda}\chi_{n-1}^{2}.$$
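These closed-form estimates translate directly into C. A minimal sketch for the unweighted case $w_{i}=1$ (the struct and function names are illustrative):

    #include <stddef.h>

    typedef struct {
        double mu;     /* estimated mean parameter */
        double lambda; /* estimated shape parameter */
    } ig_fit;

    /* Maximum likelihood estimates for IG(mu, lambda) from n samples,
       specializing the weighted formulas above to w_i = 1. */
    ig_fit inverse_gaussian_mle(const double *x, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += x[i];
        double mu_hat = sum / n;

        double inv_lambda = 0.0;
        for (size_t i = 0; i < n; i++)
            inv_lambda += 1.0 / x[i] - 1.0 / mu_hat;
        inv_lambda /= n;

        ig_fit fit = { mu_hat, 1.0 / inv_lambda };
        return fit;
    }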

Sampling from an inverse-Gaussian distribution [edit]

The following algorithm may be used.[7]

Generate a random variate from a normal distribution with mean 0 and standard deviation equal to 1:

$$\nu\sim N(0,1).$$

Square the value

$$y=\nu^{2}$$

and use the relation

$$x=\mu+\frac{\mu^{2}y}{2\lambda}-\frac{\mu}{2\lambda}\sqrt{4\mu\lambda y+\mu^{2}y^{2}}.$$

Generate another random variate, this time sampled from a uniform distribution between 0 and 1

$$z\sim U(0,1).$$

If $z\leq\frac{\mu}{\mu+x}$, then return $x$; else return $\frac{\mu^{2}}{x}$.

Sample code in Java:

    public double inverseGaussian(double mu, double lambda) {
        Random rand = new Random();
        double v = rand.nextGaussian(); // Sample from a normal distribution with a mean of 0 and 1 standard deviation
        double y = v * v;
        double x = mu + (mu * mu * y) / (2 * lambda)
                 - (mu / (2 * lambda)) * Math.sqrt(4 * mu * lambda * y + mu * mu * y * y);
        double test = rand.nextDouble(); // Sample from a uniform distribution between 0 and 1
        if (test <= mu / (mu + x)) {
            return x;
        } else {
            return (mu * mu) / x;
        }
    }
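Given this article's focus on C, the same algorithm transcribes almost line for line. A minimal C sketch (the Box–Muller helper and the use of rand() are illustrative assumptions; production code would use a better uniform generator):

    #include <math.h>
    #include <stdlib.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Standard normal variate via the Box-Muller transform (helper, not part
       of the Michael-Schucany-Haas algorithm itself). */
    static double rand_normal(void)
    {
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
    }

    /* Draw one variate from IG(mu, lambda) using the algorithm above. */
    double inverse_gaussian(double mu, double lambda)
    {
        double v = rand_normal();                 /* nu ~ N(0, 1) */
        double y = v * v;                         /* y = nu^2 */
        double x = mu + (mu * mu * y) / (2.0 * lambda)
                 - (mu / (2.0 * lambda)) * sqrt(4.0 * mu * lambda * y + mu * mu * y * y);
        double z = rand() / (double)RAND_MAX;     /* z ~ U(0, 1) */
        return (z <= mu / (mu + x)) ? x : (mu * mu) / x;
    }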

To plot the Wald distribution in Python using matplotlib and NumPy:

    import matplotlib.pyplot as plt
    import numpy as np

    h = plt.hist(np.random.wald(3, 2, 100000), bins=200, density=True)
    plt.show()

Related distributions [edit]

  • If $X\sim\operatorname{IG}(\mu,\lambda)$, then $kX\sim\operatorname{IG}(k\mu,k\lambda)$ for any number $k>0$.[1]
  • If $X_{i}\sim\operatorname{IG}(\mu,\lambda)$, then $\sum_{i=1}^{n}X_{i}\sim\operatorname{IG}(n\mu,n^{2}\lambda)$.
  • If $X_{i}\sim\operatorname{IG}(\mu,\lambda)$ for $i=1,\ldots,n$, then $\bar{X}\sim\operatorname{IG}(\mu,n\lambda)$.
  • If $X_{i}\sim\operatorname{IG}(\mu_{i},2\mu_{i}^{2})$, then $\sum_{i=1}^{n}X_{i}\sim\operatorname{IG}\left(\sum_{i=1}^{n}\mu_{i},2\left(\sum_{i=1}^{n}\mu_{i}\right)^{2}\right)$.
  • If $X\sim\operatorname{IG}(\mu,\lambda)$, then $\lambda(X-\mu)^{2}/(\mu^{2}X)\sim\chi^{2}(1)$.[8]

The convolution of an inverse Gaussian distribution (a Wald distribution) and an exponential (an ex-Wald distribution) is used as a model for response times in psychology,[9] with visual search as one example.[10]

History [edit]

This distribution appears to have been first derived in 1900 by Louis Bachelier[5] [6] as the time a stock reaches a certain price for the first time. In 1915 it was used independently by Erwin Schrödinger[2] and Marian v. Smoluchowski[3] as the time to first passage of a Brownian motion. In the field of reproduction modeling it is known as the Hadwiger function, after Hugo Hadwiger who described it in 1940.[11] Abraham Wald re-derived this distribution in 1944[12] as the limiting form of a sample in a sequential probability ratio test. The name inverse Gaussian was proposed by Maurice Tweedie in 1945.[13] Tweedie investigated this distribution in 1956[14] and 1957[15] [16] and established some of its statistical properties. The distribution was extensively reviewed by Folks and Chhikara in 1978.[4]

Numeric computation and software [edit]

Despite the simple formula for the probability density function, numerical probability calculations for the inverse Gaussian distribution nevertheless require special care to achieve full machine accuracy in floating point arithmetic for all parameter values.[17] Functions for the inverse Gaussian distribution are provided for the R programming language by several packages including rmutil,[18] [19] SuppDists,[20] STAR,[21] invGauss,[22] LaplacesDemon,[23] and statmod.[24]

See also [edit]

  • Generalized inverse Gaussian distribution
  • Tweedie distributions—The inverse Gaussian distribution is a member of the family of Tweedie exponential dispersion models
  • Stopping time

References [edit]

  1. ^ a b Chhikara, Raj S.; Folks, J. Leroy (1989), The Inverse Gaussian Distribution: Theory, Methodology and Applications, New York, NY, USA: Marcel Dekker, Inc, ISBN 0-8247-7997-5
  2. ^ a b Schrödinger, Erwin (1915), "Zur Theorie der Fall- und Steigversuche an Teilchen mit Brownscher Bewegung" [On the Theory of Fall- and Rise Experiments on Particles with Brownian Motion], Physikalische Zeitschrift (in German), 16 (16): 289–295
  3. ^ a b Smoluchowski, Marian (1915), "Notiz über die Berechnung der Brownschen Molekularbewegung bei der Ehrenhaft-Millikanschen Versuchsanordnung" [Note on the Calculation of Brownian Molecular Motion in the Ehrenhaft-Millikan Experimental Set-up], Physikalische Zeitschrift (in German), 16 (17/18): 318–321
  4. ^ a b Folks, J. Leroy; Chhikara, Raj S. (1978), "The Inverse Gaussian Distribution and Its Statistical Application—A Review", Journal of the Royal Statistical Society, Series B (Methodological), 40 (3): 263–275, doi:10.1111/j.2517-6161.1978.tb01039.x, JSTOR 2984691
  5. ^ a b Bachelier, Louis (1900), "Théorie de la spéculation" [The Theory of Speculation] (PDF), Ann. Sci. Éc. Norm. Supér. (in French), Serie 3, 17: 21–89, doi:10.24033/asens.476
  6. ^ a b Bachelier, Louis (1900), "The Theory of Speculation", Ann. Sci. Éc. Norm. Supér., Serie 3, 17: 21–89 (Engl. translation by David R. May, 2011), doi:10.24033/asens.476
  7. ^ Michael, John R.; Schucany, William R.; Haas, Roy W. (1976), "Generating Random Variates Using Transformations with Multiple Roots", The American Statistician, 30 (2): 88–90, doi:10.1080/00031305.1976.10479147, JSTOR 2683801
  8. ^ Shuster, J. (1968). "On the inverse Gaussian distribution function". Journal of the American Statistical Association. 63 (4): 1514–1516. doi:10.1080/01621459.1968.10480942.
  9. ^ Schwarz, Wolfgang (2001), "The ex-Wald distribution as a descriptive model of response times", Behavior Research Methods, Instruments, and Computers, 33 (4): 457–469, doi:10.3758/bf03195403, PMID 11816448
  10. ^ Palmer, E. M.; Horowitz, T. S.; Torralba, A.; Wolfe, J. M. (2011). "What are the shapes of response time distributions in visual search?". Journal of Experimental Psychology: Human Perception and Performance. 37 (1): 58–71. doi:10.1037/a0020747. PMC 3062635. PMID 21090905.
  11. ^ Hadwiger, H. (1940). "Eine analytische Reproduktionsfunktion für biologische Gesamtheiten". Skandinavisk Aktuarietidskrift. 7 (3–4): 101–113. doi:10.1080/03461238.1940.10404802.
  12. ^ Wald, Abraham (1944), "On Cumulative Sums of Random Variables", Annals of Mathematical Statistics, 15 (3): 283–296, doi:10.1214/aoms/1177731235, JSTOR 2236250
  13. ^ Tweedie, M. C. K. (1945). "Inverse Statistical Variates". Nature. 155 (3937): 453. Bibcode:1945Natur.155..453T. doi:10.1038/155453a0. S2CID 4113244.
  14. ^ Tweedie, M. C. K. (1956). "Some Statistical Properties of Inverse Gaussian Distributions". Virginia Journal of Science. New Series. 7 (3): 160–165.
  15. ^ Tweedie, M. C. K. (1957). "Statistical Properties of Inverse Gaussian Distributions I". Annals of Mathematical Statistics. 28 (2): 362–377. doi:10.1214/aoms/1177706964. JSTOR 2237158.
  16. ^ Tweedie, M. C. K. (1957). "Statistical Properties of Inverse Gaussian Distributions II". Annals of Mathematical Statistics. 28 (3): 696–705. doi:10.1214/aoms/1177706881. JSTOR 2237229.
  17. ^ Giner, Göknur; Smyth, Gordon (August 2016). "statmod: Probability Calculations for the Inverse Gaussian Distribution". The R Journal. 8 (1): 339–351. arXiv:1603.06687. doi:10.32614/RJ-2016-024.
  18. ^ Lindsey, James (2013-09-09). "rmutil: Utilities for Nonlinear Regression and Repeated Measurements Models".
  19. ^ Swihart, Bruce; Lindsey, James (2019-03-04). "rmutil: Utilities for Nonlinear Regression and Repeated Measurements Models".
  20. ^ Wheeler, Robert (2016-09-23). "SuppDists: Supplementary Distributions".
  21. ^ Pouzat, Christophe (2015-02-19). "STAR: Spike Train Analysis with R".
  22. ^ Gjessing, Hakon K. (2014-03-29). "Threshold regression that fits the (randomized drift) inverse Gaussian distribution to survival data".
  23. ^ Hall, Byron; Hall, Martina; Statisticat, LLC; Brown, Eric; Hermanson, Richard; Charpentier, Emmanuel; Heck, Daniel; Laurent, Stephane; Gronau, Quentin F.; Singmann, Henrik (2014-03-29). "LaplacesDemon: Complete Environment for Bayesian Inference".
  24. ^ Giner, Göknur; Smyth, Gordon (2017-06-18). "statmod: Statistical Modeling".

Further reading [edit]

  • Høyland, Arnljot; Rausand, Marvin (1994). System Reliability Theory. New York: Wiley. ISBN 978-0-471-59397-3.
  • Seshadri, V. (1993). The Inverse Gaussian Distribution. Oxford University Press. ISBN 978-0-19-852243-0.

External links [edit]

  • Inverse Gaussian Distribution on the Wolfram website.

Source: https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution
