An algorithm for computing the inverse normal cumulative distribution function
The following describes an algorithm for computing the inverse normal cumulative distribution function where the relative error has an absolute value less than 1.15 × 10⁻⁹ in the entire region. References to other algorithms are included.
- Overview
- The inverse normal cumulative distribution function
- The algorithm
- Refining the result to full machine precision
- Computer implementations
- Other algorithms
- F.A.Q.
- Recent changes to this page
- References
Overview
Some years ago, I was searching for a free, fast, and accurate way of computing the inverse normal cumulative distribution function. The search gave no satisfactory results, so I decided to write an algorithm of my own. This is the result – an algorithm with a relative error less than 1.15 × 10⁻⁹ in the entire region.
Later I found some other algorithms for computing the inverse of the normal cumulative distribution function. One of these is more accurate than this algorithm. All algorithms that I know of are listed in the section Other algorithms below.
Feel free to use my algorithm any way you want. The algorithm has been implemented in several computer languages, so before you write your own implementation, have a look at the section Computer implementations. The implementations there are free for everyone to use.
If you have any questions or comments about anything on this web page, take a look at the section Frequently asked questions or let me know. My email address is at the bottom of this page.
The inverse normal cumulative distribution function
The inverse normal cumulative distribution function is non-linear, and no closed-form expression for it exists. The function is continuous, monotonically increasing, infinitely differentiable, and maps the open interval (0,1) to the whole real line.

Figure 1: The inverse normal cumulative distribution function.
The algorithm
An overview of the algorithm
The algorithm uses the fact that if F⁻¹(p) is the inverse normal cumulative distribution function, then G⁻¹(p) = F⁻¹(p + 1/2) is an odd function. The algorithm uses two separate rational minimax approximations: one for the central region and another for the tails.
The central region is where 0.02425 <= p <= 1 - 0.02425 = 0.97575. In this region a rational minimax approximation is used directly.
In the tails, where p < 0.02425 or p > 0.97575, the argument is first passed through the non-linear transformation q = sqrt(-2*log(p)) (with p replaced by 1 - p in the upper tail), which makes the function nearly linear, before a rational minimax approximation is applied.
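The symmetry is easy to check numerically. Here is a small sketch using scipy.special.ndtri as a stand-in for F⁻¹ (an assumption for illustration; it is not part of the algorithm):

    from scipy.special import ndtri  # reference inverse normal CDF

    # G(q) = F_inv(q + 1/2) is odd: G(-q) = -G(q) for q in (-1/2, 1/2).
    for q in (0.1, 0.25, 0.4):
        assert abs(ndtri(0.5 + q) + ndtri(0.5 - q)) < 1e-12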
Quality of the algorithm
To put it in a few simple words: The absolute value of the relative error is less than 1.15 × 10⁻⁹ in the entire region. For more details, read on.
In the following, let p be the probability (the input to the algorithm), let x be the corresponding quantile (the desired output), and let x_approx be the approximate value of x returned by the algorithm.
The output of the algorithm, x_approx, has a relative error compared to the true quantile x given by (x_approx - x) / x. This relative error has an absolute value which is less than 1.15 × 10⁻⁹ whenever x >= -38.
What happens when x < -38 is not really a practical problem. First of all, the probability of a random normal variable being less than -38 is approximately 2.885428351 × 10⁻³¹⁶. For all practical purposes, this probability is identical to zero. Events with such a small probability of occurring simply don’t happen at all. Secondly, assuming that we are using IEEE double precision arithmetic, which is a valid assumption on most computers these days, numbers less than 2⁻¹⁰²² (approximately 2.225073859 × 10⁻³⁰⁸) are so-called subnormal or denormal numbers. Numbers in this range can be represented, but not to full precision. So one cannot expect a relative error as small as 1.15 × 10⁻⁹ for quantiles below -38, simply because the input cannot even be represented with such precision.
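These representation limits are easy to see directly. A minimal Python illustration (any IEEE double precision environment behaves the same):

    import sys

    print(sys.float_info.min)  # 2.2250738585072014e-308 = 2**-1022, smallest normalized double
    print(2.0 ** -1074)        # 5e-324, the smallest denormal (subnormal) double
    print(2.0 ** -1075)        # 0.0: anything smaller underflows to zero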
Error plots
The plots below show the relative error (x_approx - x) / x as a function of the desired value x.
The values in the plots were obtained as follows.
- A large number of p-values were selected in each of the three regions: the lower region [0, 0.02425], the central region [0.02425, 0.97575], and the upper region [0.97575, 1].
- For each p-value, p(i), the algorithm was used to produce an approximation x_approx(i) of the desired value x(i).
- Each approximation x_approx(i) was then refined to full double precision, giving x(i). The refinement was done with an iterative procedure based on Halley’s method. The procedure requires an algorithm for computing the normal cumulative distribution function to full double precision. This was obtained by expressing the normal cumulative distribution function in terms of the complementary error function, 1/2 erfc(-x/sqrt(2)), and using W. J. Cody’s FORTRAN implementations of the error function and the scaled and unscaled complementary error functions, found at http://www.netlib.org/specfun/erf.
- Each pair of values, consisting of the approximation x_approx(i) and the corresponding refined value x(i), was used to compute the relative error (x_approx(i) - x(i)) / x(i). This relative error was then plotted against x(i), for all i in each range. (A sketch of such a check appears below.)
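The sketch below shows one way such an error check might be reproduced today. It uses scipy.special.ndtri as the high-accuracy reference instead of the Cody-based Halley refinement described above, and norminv stands for an implementation of the present algorithm (see the Python transcription further down); both choices are mine, for convenience.

    import numpy as np
    from scipy.special import ndtri  # high-accuracy inverse normal CDF, used as reference

    def relative_error(norminv, p_values):
        """Relative error (x_approx - x) / x of norminv against the reference."""
        x = ndtri(p_values)
        x_approx = np.array([norminv(p) for p in p_values])
        return (x_approx - x) / x

    # Example: 1 million points in the central region [0.02425, 0.97575];
    # plotting the result against ndtri(p) reproduces Figure 2.
    # (p = 0.5 itself must be avoided, since x = 0 there.)
    p = np.linspace(0.02425, 0.97575, 1_000_000)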
Error plot for the central region
Figure 2 shows the relative error in the central region, when p is in the interval [0.02425, 0.97575], which corresponds approximately to x in the interval [-1.97296, 1.97296]. The figure clearly shows that the algorithm is using a minimax approximation of the relative error. The red lines are drawn at ± 1.15 × 10⁻⁹.
The plot is based on evaluating the function at 1 million points.

Figure 2: Relative error in the central region.
Error plot for the lower region
Figure 3 shows the relative error in the lower region, where p < 0.02425 and x > -38, i.e., where p is in the interval [2.885428351 × 10⁻³¹⁶, 0.02425], corresponding to x in the interval [-38, -1.97296]. The figure clearly shows that the algorithm is using a minimax approximation of the relative error. The red lines are drawn at ± 1.15 × 10⁻⁹.
The left green line is drawn at approximately x = -38.44939, which is the quantile corresponding to the exact probability p = 2⁻¹⁰⁷³ (approximately 9.88131 × 10⁻³²⁴). This probability is of the same order as the smallest positive number that can be represented with IEEE double precision arithmetic, the smallest denormalized number 2⁻¹⁰⁷⁴. To the left of this point the probability cannot be distinguished from zero.
The right green line is drawn at approximately x = -37.51938, which is the quantile corresponding to the exact probability p = 2⁻¹⁰²² (approximately 2.22507 × 10⁻³⁰⁸). This probability is the smallest positive number that can be represented with full IEEE double precision arithmetic (i.e., it is the smallest normalized number); between the two green lines, probabilities fall in the denormalized range.
The plot is based on evaluating the function at 1 million points.

Figure 3: Relative error in the lower region. The red lines are drawn at ± 1.15 × 10⁻⁹.
Error plot for the upper region
Figure 4 shows the relative error in the upper region, where p is larger than 0.97575, which corresponds approximately to x being larger than 1.97296. The figure clearly shows that the algorithm is using a minimax approximation of the relative error. The red lines are drawn at ± 1.15 × 10⁻⁹.
The left green line is drawn at approximately x = 8.20954, which is the quantile corresponding to the exact probability p = 1 - 2⁻⁵³. This probability is the largest number smaller than one that can be represented with IEEE double precision arithmetic.
The right green line is drawn at approximately x = 8.29236, which is the quantile corresponding to the exact probability p = 1 - 2⁻⁵⁴, the midpoint between 1 - 2⁻⁵³ and 1, and hence the boundary at which probabilities round up to 1.
Computing the lower tail cumulative probability for any value between the two green lines (8.20954 < x < 8.29236) gives 1 - 2⁻⁵³ when using IEEE double precision arithmetic. Computing the lower tail cumulative probability for any value to the right of the right green line (x > 8.29236) returns 1.
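This rounding behaviour is easy to verify directly (Python, IEEE doubles):

    print(1 - 2 ** -53)          # 0.9999999999999999, the largest double below one
    print(1 - 2 ** -54 == 1.0)   # True: 1 - 2**-54 is exactly halfway and rounds to 1.0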
The plot is based on evaluating the function at 1 million points.

Figure 4: Relative error in the upper region. The red lines are drawn at ± 1.15 × 10⁻⁹.
Pseudo-code algorithm for rational approximation
The algorithm below assumes p is the input and x is the output.

    Coefficients in rational approximations.
    a(1) <- -3.969683028665376e+01
    a(2) <-  2.209460984245205e+02
    a(3) <- -2.759285104469687e+02
    a(4) <-  1.383577518672690e+02
    a(5) <- -3.066479806614716e+01
    a(6) <-  2.506628277459239e+00

    b(1) <- -5.447609879822406e+01
    b(2) <-  1.615858368580409e+02
    b(3) <- -1.556989798598866e+02
    b(4) <-  6.680131188771972e+01
    b(5) <- -1.328068155288572e+01

    c(1) <- -7.784894002430293e-03
    c(2) <- -3.223964580411365e-01
    c(3) <- -2.400758277161838e+00
    c(4) <- -2.549732539343734e+00
    c(5) <-  4.374664141464968e+00
    c(6) <-  2.938163982698783e+00

    d(1) <-  7.784695709041462e-03
    d(2) <-  3.224671290700398e-01
    d(3) <-  2.445134137142996e+00
    d(4) <-  3.754408661907416e+00

    Define break-points.
    p_low  <- 0.02425
    p_high <- 1 - p_low

    Rational approximation for lower region.
    if 0 < p < p_low
       q <- sqrt(-2*log(p))
       x <- (((((c(1)*q+c(2))*q+c(3))*q+c(4))*q+c(5))*q+c(6)) /
            ((((d(1)*q+d(2))*q+d(3))*q+d(4))*q+1)
    endif

    Rational approximation for central region.
    if p_low <= p <= p_high
       q <- p - 0.5
       r <- q*q
       x <- (((((a(1)*r+a(2))*r+a(3))*r+a(4))*r+a(5))*r+a(6))*q /
            (((((b(1)*r+b(2))*r+b(3))*r+b(4))*r+b(5))*r+1)
    endif

    Rational approximation for upper region.
    if p_high < p < 1
       q <- sqrt(-2*log(1-p))
       x <- -(((((c(1)*q+c(2))*q+c(3))*q+c(4))*q+c(5))*q+c(6)) /
             ((((d(1)*q+d(2))*q+d(3))*q+d(4))*q+1)
    endif
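For convenience, here is a direct Python transcription of the pseudo-code above. It is a minimal sketch: the function name norminv, the coefficient tuples, and the ValueError check are my own choices, not part of the original pseudo-code.

    from math import log, sqrt

    # Coefficients in the rational approximations (from the pseudo-code above).
    A = (-3.969683028665376e+01,  2.209460984245205e+02, -2.759285104469687e+02,
          1.383577518672690e+02, -3.066479806614716e+01,  2.506628277459239e+00)
    B = (-5.447609879822406e+01,  1.615858368580409e+02, -1.556989798598866e+02,
          6.680131188771972e+01, -1.328068155288572e+01)
    C = (-7.784894002430293e-03, -3.223964580411365e-01, -2.400758277161838e+00,
         -2.549732539343734e+00,  4.374664141464968e+00,  2.938163982698783e+00)
    D = ( 7.784695709041462e-03,  3.224671290700398e-01,  2.445134137142996e+00,
          3.754408661907416e+00)

    P_LOW  = 0.02425
    P_HIGH = 1 - P_LOW

    def norminv(p):
        """Inverse normal CDF; |relative error| < 1.15e-9 for all p in (0, 1)."""
        if not 0.0 < p < 1.0:
            raise ValueError("p must be in the open interval (0, 1)")
        if p < P_LOW:    # lower region
            q = sqrt(-2.0 * log(p))
            num = (((((C[0]*q + C[1])*q + C[2])*q + C[3])*q + C[4])*q + C[5])
            den = ((((D[0]*q + D[1])*q + D[2])*q + D[3])*q + 1.0)
            return num / den
        if p > P_HIGH:   # upper region, by symmetry
            q = sqrt(-2.0 * log(1.0 - p))
            num = (((((C[0]*q + C[1])*q + C[2])*q + C[3])*q + C[4])*q + C[5])
            den = ((((D[0]*q + D[1])*q + D[2])*q + D[3])*q + 1.0)
            return -num / den
        q = p - 0.5      # central region
        r = q * q
        num = (((((A[0]*r + A[1])*r + A[2])*r + A[3])*r + A[4])*r + A[5]) * q
        den = (((((B[0]*r + B[1])*r + B[2])*r + B[3])*r + B[4])*r + 1.0)
        return num / den

As a quick sanity check, norminv(0.5) returns 0, and norminv(0.975) is within the stated relative error of 1.9599639845.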
Refining the result to full machine precision
If a function is available that can compute the normal cumulative distribution function to full machine precision, the approximation of the inverse normal cumulative distribution function can easily be refined to full machine precision. Here, "full machine precision" means full double precision using IEEE arithmetic. The complementary error function, commonly named erfc, where erfc(x) = 1 - erf(x), is excellent for refining the approximation, since there is a simple relationship between the normal cumulative distribution function, Phi(x), and the complementary error function:

    Phi(x) = 0.5 * erfc(-x/sqrt(2))
W. J. Cody’s FORTRAN implementations of the error function and scaled and unscaled complementary error function may be found at http://www.netlib.org/specfun/erf.
Pseudo-code algorithm for refinement
The relative error of the approximation has absolute value less than 1.15 × 10⁻⁹. One iteration of Halley’s rational method (third order) gives full machine precision.

    if 0 < p < 1
       e <- 0.5 * erfc(-x/sqrt(2)) - p       (that is, e <- Phi(x) - p)
       u <- e * sqrt(2*pi) * exp(x^2/2)
       x <- x - u/(1 + x*u/2)
    endif
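In Python, the refinement step might look as follows. This is a sketch: it uses math.erfc for Phi, and the intermediate exp(x*x/2) overflows once |x| exceeds roughly 38, so the most extreme tail would need the scaled complementary error function (as in Cody’s routines) instead.

    from math import erfc, exp, pi, sqrt

    def refine(x, p):
        """One Halley step pushing an approximate quantile to full double precision."""
        e = 0.5 * erfc(-x / sqrt(2)) - p   # f(x) = Phi(x) - p
        u = e * sqrt(2 * pi) * exp(x * x / 2)
        return x - u / (1 + x * u / 2)

For example, refine(norminv(p), p) with the norminv sketch above.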
Outline of the refinement algorithm
Halley’s rational method is a third order method for finding roots of equations. Assume f(x) is the function whose root is to be found, and let f'(x) and f''(x) be its first and second derivatives, respectively. Then Halley’s rational method is

    x <- x - f(x)/f'(x) * (1 - (f(x)*f''(x))/(2*f'(x)^2))^(-1)
If x is close to a root of f(x), then (f(x)*f''(x))/(2*f'(x)^2) will be close to zero. If this expression is replaced by zero, the above reduces to the famous Newton’s method

    x <- x - f(x)/f'(x)
In this case, the function to solve is

    f(x) = Phi(x) - p

The first and second derivatives become

    f'(x)  = 1/sqrt(2*pi) * exp(-x^2/2)
    f''(x) = -x * 1/sqrt(2*pi) * exp(-x^2/2) = -x * f'(x)

The second derivative is thus a simple function of the first derivative, so Halley’s third order method can be used rather than Newton’s second order method at virtually no extra cost. In this case, the formula for Halley’s method reduces to

    x <- x - f(x)/f'(x) * (1 + x/2*f(x)/f'(x))^(-1)

which is better written as follows, to avoid computing f(x)/f'(x) twice:

    u <- f(x)/f'(x)
    x <- x - u * (1 + x/2*u)^(-1)
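Putting the pieces together, again as a sketch built on the hypothetical norminv and refine functions defined earlier:

    from math import erfc, sqrt

    p = 1e-5
    x = refine(norminv(p), p)          # rational approximation plus one Halley step
    phi = 0.5 * erfc(-x / sqrt(2))     # round trip through the normal CDF
    print(x, abs(phi - p) / p)         # relative round-trip error close to machine precision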
Computer implementations
Here are some implementations of the algorithm above. I have written a few of them, but most of them are written by kind people around the world. Use them as you wish, for whatever purpose.
If you write an implementation that you want me to put on this page, then let me know.
bc
- Peter John Acklam (yours faithfully) wrote an implementation for POSIX bc (the arbitrary precision calculator language) and a POSIX sh (shell) wrapper.
C/C++
- Jeremy Lea (Research Engineer, Infrastructure Engineering, Transportek, CSIR, South Africa) wrote a C/C++ implementation. It is tailored for Visual C++, but should compile under GCC with only a few minor changes.
- V. Natarajan (Kanchipuram (near Madras), India) wrote a C implementation.
- Chad Sprouse (The Johns Hopkins University, Applied Physics Laboratory, Information Operations/Intelligence Systems Group, Laurel, MD, USA) wrote a C implementation.
- Dr. Thomas Ziegler (Senior Researcher, Project Manager, ftw. Telecommunications Research Center Vienna, Vienna, Austria) wrote a C implementation. (Note: this file contained a typo in one of the coefficients. This has been fixed.)
Delphi
- Greg McCormick (Water Operational Research Centre, Department of Systems Engineering, Brunel University, Uxbridge, UK) wrote a Delphi implementation.
Fortran
- Ren-Raw Chen (associate professor at Rutgers University in New Brunswick, New Jersey, USA) wrote a Fortran implementation based on John Herrero’s VB implementation.
Java
- Sherali Karimov wrote a Java class.
- Alankar Misra (software engineer at Digital Sutras, Mumbai, India) wrote a Java function, and provided an HTML page with a demo so that the algorithm may be run interactively.
Mathematica
- William Shaw (Professor of Financial Mathematics, King’s College, The Strand, London, UK) made a Mathematica notebook and a PDF image of it.
MATLAB
- Peter John Acklam (yours faithfully) wrote a MATLAB implementation and some related routines.
Oracle PL/SQL
- Martin Smit (Analyst, Enerweb, South Africa) wrote an Oracle PL/SQL implementation.
Perl
- Peter John Acklam (yours faithfully) wrote a Perl implementation.
PHP
- Michael Nickerson wrote a PHP implementation based on Dr. Thomas Ziegler’s C implementation.
Python
- Dan Field wrote a Python implementation.
Visual Basic
- Geoffrey C. Barnes, Ph.D. (Fels Center of Government and Jerry Lee Center of Criminology, University of Pennsylvania), wrote a VB.NET implementation.
- Richard W. Kulp, Ph.D., CQE, Consultant in Applied Statistics, wrote a VB6 implementation.
- John Herrero wrote a Microsoft Visual Basic implementation. (Note that the coefficients have been rounded to 15 digits, but I don’t think it affects the algorithm.)
Other algorithms
There exist other algorithms for computing the inverse normal cumulative distribution function. Here are the ones I know about.
-
The URL http://lib.stat.cmu.edu/apstat/111 contains FORTRAN code published in: J. D. Beasley and S. G. Springer, Algorithm AS 111: The Percentage Points of the Normal Distribution, Applied Statistics, vol. 26, 1977, pp. 118-121.
I don’t know anything about the accuracy of this algorithm, but a comment in the code states that Applied Statistics algorithm 241 is more accurate.
-
The URL http://lib.stat.cmu.edu/apstat/241 contains FORTRAN code published in: Michael J. Wichura, Algorithm AS 241: The Percentage Points of the Normal Distribution, Applied Statistics, vol. 37, 1988, pp. 477-484.
The FORTRAN code contains two algorithms, one accurate to 1 part in 10⁷ and one accurate to 1 part in 10¹⁶.
-
The Cephes library at the URL http://www.netlib.org/cephes/index.html contains the C program ndtri.c. I don’t know anything about the accuracy of this algorithm.
-
Boris Moro, The Full Monte, Risk Magazine, vol. 8, no. 2 (February 1995), pp. 57-58.
I don’t know anything about the accuracy of this algorithm.
- Bruce W. Schmeiser, Approximations to the Inverse Cumulative Normal Function for Use on Hand Calculators, Applied Statistics, vol. 28, no. 2, 1979, pp. 175-176.
-
Microsoft Excel’s NORMSINV.
Microsoft has used various implementations in NORMSINV, but it has been documented, and admitted by Microsoft, that at least up until 2002 they were poor. Article KB826772 has some documentation about NORMSINV in Excel 2003.
Frequently asked questions (F.A.Q.)
Here are some of the most frequently asked questions I receive about this algorithm.
-
Q: Do you have proof of correctness?
A: (I know that this answer is not good enough. I am working on it.) I have shown in the plots above that the error of the algorithm is inside the specified limits. This has been confirmed by several others who have implemented the algorithm. If this is not satisfactory, please let me know what you are missing.
A few people have ended up with values with a larger relative error than 1.15 × 10⁻⁹, but in every case this has been due to errors in the computer implementations, not the algorithm as such. Once fixed, the errors are within the specified limits.
-
Q: Has this algorithm been published anywhere?
A: This algorithm has not been published in any journal. This web page is the most "official" source there is. If you want an algorithm that has been published in a journal, look in the section Other algorithms.
-
Q: What about copyright?
A: You can use the algorithm, including any of the computer implementations listed in the section Computer implementations, for whatever purpose you want, but please show common courtesy and give credit where credit is due.
-
Q: I think I have found an error in one of the implementations. What do I do?
A: Please let me know! Better yet, fix it and post a corrected version. I don’t master all the languages in which this algorithm has been implemented, so it might be that all I can do is to add a warning on this web page.
-
Q: How did you find the coefficients?
A: The coefficients were computed by another algorithm I wrote, an iterative algorithm that moves the nodes back and forth to minimize the absolute value of the relative error in the rational approximation. (The nodes are the values of x for which the algorithm is exact, i.e., the values of x for which the error in the algorithm is zero.)
Recent changes to this page
- 2009-02-27
- Fixed typographic errors.
- 2007-02-21
- Added the Mathematica implementation by Dr William Shaw. Also added more information about the algorithm.
- 2005-03-11
- Fixed an error in the bc implementation. The sign in the output was incorrect in the lower region.
- 2005-02-25
- Fixed a typo. (Ren-Raw Chen wrote a Fortran version, not Delphi.)
- 2004-05-04
- Added the PHP implementation by Michael Nickerson and the Python implementation by Dan Field.
- 2004-03-06
- Fixed a typo regarding the range where the algorithm is valid. I had written x >= 38, but should have written x >= -38. Hopefully this was obvious and didn’t cause much confusion. Also added the VB.NET implementation by Geoffrey C. Barnes.
References
- W. J. Cody, Rational Chebyshev approximations for the error function, Math. Comp., pp. 631-638, 1969.
- Halley’s Method, at Wolfram MathWorld, URL: http://mathworld.wolfram.com/HalleysMethod.html
- The complementary error function, at Wolfram MathWorld, URL: http://mathworld.wolfram.com/Erfc.html
- The normal distribution function, at Wolfram MathWorld, URL: http://mathworld.wolfram.com/NormalDistributionFunction.html