Why do I get "Error in InitRenderer(). Shutting down." (cf nsis error) when launching CF? — Baidu Zhidao
My computer also showed "Error in InitRenderer(). Shutting down."
I tried the fixes posted online and none of them worked, but I did find one that does:
My Computer → Properties → Hardware → Device Manager → Display adapters
Expand the display adapter list (click the plus sign next to Display adapters).
If the icon next to your graphics card is marked with a red X, right-click it and enable the device.
If you hit the error and the game has to close, just reinstall it.
From Wikipedia, the free encyclopedia
Plot of the error function
In mathematics, the error function (also called the Gauss error function) is a special function of sigmoid shape that occurs in probability, statistics, and partial differential equations describing diffusion. It is defined as:
{\displaystyle {\begin{aligned}\operatorname {erf} (x)&={\frac {1}{\sqrt {\pi }}}\int _{-x}^{x}e^{-t^{2}}\,dt\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt.\end{aligned}}}
In statistics, for nonnegative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and variance 1/2, erf(x) is the probability that Y falls in the range [−x, x].
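As a quick numerical illustration of this definition and interpretation (not part of the original article), here is a minimal Python sketch using only the standard library; the helper names and the quadrature step count are our own choices:

```python
# Sketch: check the integral definition of erf and its probabilistic
# interpretation against Python's built-in math.erf.
import math

def erf_by_quadrature(x, n=100_000):
    """Approximate erf(x) = (2/sqrt(pi)) * integral_0^x exp(-t^2) dt (midpoint rule)."""
    if x < 0:
        return -erf_by_quadrature(-x, n)   # erf is odd
    h = x / n
    return 2.0 / math.sqrt(math.pi) * h * sum(
        math.exp(-((k + 0.5) * h) ** 2) for k in range(n))

def normal_cdf(t, sigma):
    """CDF of a zero-mean normal distribution with standard deviation sigma."""
    return 0.5 * (1.0 + math.erf(t / (sigma * math.sqrt(2.0))))

x = 1.0
print(erf_by_quadrature(x), math.erf(x))          # both ~0.842700793

# Probability that Y ~ Normal(0, variance 1/2) lies in [-x, x] equals erf(x):
sigma = math.sqrt(0.5)
print(normal_cdf(x, sigma) - normal_cdf(-x, sigma), math.erf(x))
```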
The name and abbreviation for the error function (and the error function complement) were introduced by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors." Glaisher shows that, for the "law of facility" of errors (the normal distribution), whose density is given by
{\displaystyle f(x)=\left({\frac {c}{\pi }}\right)^{\tfrac {1}{2}}e^{-cx^{2}}}
, the chance of an error lying between {\displaystyle p} and {\displaystyle q} is
{\displaystyle \left({\frac {c}{\pi }}\right)^{\tfrac {1}{2}}\int _{p}^{q}e^{-cx^{2}}dx={\tfrac {1}{2}}\left(\operatorname {erf} (q{\sqrt {c}})-\operatorname {erf} (p{\sqrt {c}})\right){\text{.}}}
The complementary error function, denoted
{\displaystyle \mathrm {erfc} }
, is defined as
{\displaystyle {\begin{aligned}\operatorname {erfc} (x)&=1-\operatorname {erf} (x)\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{x}^{\infty }e^{-t^{2}}\,dt\\[5pt]&=e^{-x^{2}}\operatorname {erfcx} (x),\end{aligned}}}
which also defines
{\displaystyle \mathrm {erfcx} }
, the scaled complementary error function (which can be used instead of erfc to avoid arithmetic underflow). Another form of
{\displaystyle \operatorname {erfc} (x)}
for non-negative
{\displaystyle x}
is known as Craig’s formula, after its discoverer:
{\displaystyle \operatorname {erfc} (x\mid x\geq 0)={\frac {2}{\pi }}\int _{0}^{\pi /2}\exp \left(-{\frac {x^{2}}{\sin ^{2}\theta }}\right)\,d\theta .}
This expression is valid only for positive values of x, but it can be used in conjunction with erfc(x) = 2 - erfc(-x) to obtain erfc(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
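A minimal sketch (ours, not from the article) that checks Craig's formula numerically against Python's math.erfc; the midpoint rule and step count are arbitrary choices:

```python
import math

def erfc_craig(x, n=10_000):
    """erfc(x) = (2/pi) * integral_0^{pi/2} exp(-x^2 / sin^2(theta)) dtheta, x >= 0."""
    h = (math.pi / 2.0) / n
    return 2.0 / math.pi * h * sum(
        math.exp(-x * x / math.sin((k + 0.5) * h) ** 2) for k in range(n))

for x in (0.0, 0.5, 1.0, 2.0):
    print(x, erfc_craig(x), math.erfc(x))   # the two columns agree closely
```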
The imaginary error function, denoted erfi, is defined as
{\displaystyle {\begin{aligned}\operatorname {erfi} (x)&=-i\operatorname {erf} (ix)\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{t^{2}}\,dt\\[5pt]&={\frac {2}{\sqrt {\pi }}}e^{x^{2}}D(x),\end{aligned}}}
where D(x) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow).
Despite the name "imaginary error function",
{\displaystyle \operatorname {erfi} (x)}
is real when x is real.
When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function:
{\displaystyle w(z)=e^{-z^{2}}\operatorname {erfc} (-iz)=\operatorname {erfcx} (-iz).}
The error function is related to the cumulative distribution function {\displaystyle \Phi }, the integral of the standard normal distribution, by
{\displaystyle \Phi (x)={\frac {1}{2}}+{\frac {1}{2}}\operatorname {erf} \left(x/{\sqrt {2}}\right)={\frac {1}{2}}\operatorname {erfc} \left(-x/{\sqrt {2}}\right).}
Plots in the complex plane
Integrand exp(−z²)
The property
{\displaystyle \operatorname {erf} (-z)=-\operatorname {erf} (z)}
means that the error function is an odd function. This directly results from the fact that the integrand {\displaystyle e^{-t^{2}}} is an even function. For any complex number z, {\displaystyle \operatorname {erf} ({\overline {z}})={\overline {\operatorname {erf} (z)}}}, where {\displaystyle {\overline {z}}} denotes the complex conjugate of z.
The integrand f = exp(−z²) and f = erf(z) are shown in the complex z-plane in figures 2 and 3. The level Im(f) = 0 is shown with a thick green line. Negative integer values of Im(f) are shown with thick red lines. Positive integer values of Im(f) are shown with thick blue lines. Intermediate levels of Im(f) = constant are shown with thin green lines. Intermediate levels of Re(f) = constant are shown with thin red lines for negative values and with thin blue lines for positive values.
The error function at +∞ is exactly 1 (see Gaussian integral). On the real axis, erf(z) approaches unity as z → +∞ and −1 as z → −∞. On the imaginary axis, it tends to ±i∞.
The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges. The defining integral cannot be evaluated in closed form in terms of elementary functions, but by expanding the integrand exp(−z²) into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as:
{\displaystyle \operatorname {erf} (z)={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{n!(2n+1)}}={\frac {2}{\sqrt {\pi }}}\left(z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}-{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}-\cdots \right)}
which holds for every complex number z. The denominators are n!(2n + 1).
For iterative calculation of the above series, the following alternative formulation may be useful:
{\displaystyle \operatorname {erf} (z)={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }\left(z\prod _{k=1}^{n}{\frac {-(2k-1)z^{2}}{k(2k+1)}}\right)={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z}{2n+1}}\prod _{k=1}^{n}{\frac {-z^{2}}{k}}}
because {\displaystyle {\frac {-(2k-1)z^{2}}{k(2k+1)}}}
expresses the multiplier to turn the kth term into the (k + 1)th term (considering z as the first term).
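For example, the iterative formulation can be coded directly; the following Python sketch (the names and tolerance are ours) builds each term from the previous one with the multiplier above and compares against math.erf:

```python
import math

def erf_series(z, tol=1e-16, max_terms=200):
    """Maclaurin series for erf; term k+1 = term k * (-(2k-1) z^2 / (k (2k+1)))."""
    term = z          # the k = 0 term is just z
    total = term
    for k in range(1, max_terms):
        term *= -(2 * k - 1) * z * z / (k * (2 * k + 1))
        total += term
        if abs(term) < tol * abs(total):
            break
    return 2.0 / math.sqrt(math.pi) * total

for z in (0.5, 1.0, 2.0):
    print(z, erf_series(z), math.erf(z))
```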
The imaginary error function has a very similar Maclaurin series, which is:
{\displaystyle \operatorname {erfi} (z)={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z^{2n+1}}{n!(2n+1)}}={\frac {2}{\sqrt {\pi }}}\left(z+{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}+{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}+\cdots \right)}
which holds for every  z.
The derivative of the error function follows immediately from its definition:
{\displaystyle {\frac {d}{dz}}\operatorname {erf} (z)={\frac {2}{\sqrt {\pi }}}e^{-z^{2}}.}
From this, the derivative of the imaginary error function is also immediate:
{\displaystyle {\frac {d}{dz}}\operatorname {erfi} (z)={\frac {2}{\sqrt {\pi }}}e^{z^{2}}.}
An antiderivative of the error function, obtainable by integration by parts, is
{\displaystyle z\operatorname {erf} (z)+{\frac {e^{-z^{2}}}{\sqrt {\pi }}}.}
An antiderivative of the imaginary error function, also obtainable by integration by parts, is
{\displaystyle z\operatorname {erfi} (z)-{\frac {e^{z^{2}}}{\sqrt {\pi }}}.}
Higher order derivatives are given by
{\displaystyle {\operatorname {erf} }^{(k)}(z)={2(-1)^{k-1} \over {\sqrt {\pi }}}{\mathit {H}}_{k-1}(z)e^{-z^{2}}={\frac {2}{\sqrt {\pi }}}{\frac {d^{k-1}}{dz^{k-1}}}\left(e^{-z^{2}}\right),\qquad k=1,2,\dots }
where {\displaystyle {\mathit {H}}} denotes the physicists' Hermite polynomials.
An expansion, which converges more rapidly for all real values of
{\displaystyle x}
than a Taylor expansion, is obtained by using Bürmann's theorem:
{\displaystyle {\begin{aligned}&\operatorname {erf} (x)\\[8pt]={}&{\frac {2}{\sqrt {\pi }}}\operatorname {sgn} (x){\sqrt {1-e^{-x^{2}}}}\left(1-{\frac {1}{12}}(1-e^{-x^{2}})-{\frac {7}{480}}(1-e^{-x^{2}})^{2}\right.\\[6pt]&\qquad \qquad \qquad \qquad \qquad \qquad \left.{}-{\frac {5}{896}}(1-e^{-x^{2}})^{3}-{\frac {787}{276480}}(1-e^{-x^{2}})^{4}-\cdots \right)\\[10pt]={}&{\frac {2}{\sqrt {\pi }}}\operatorname {sgn} (x){\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+\sum _{k=1}^{\infty }c_{k}e^{-kx^{2}}\right).\end{aligned}}}
By keeping only the first two coefficients and choosing
{\displaystyle c_{1}={\frac {31}{200}}} and {\displaystyle c_{2}=-{\frac {341}{8000}}}
, the resulting approximation shows its largest relative error at
{\displaystyle \textstyle x=\pm 1.3796}
, where it is less than
{\displaystyle \textstyle 3.6127\cdot 10^{-3}}
{\displaystyle \operatorname {erf} (x)\approx {\frac {2}{\sqrt {\pi }}}\operatorname {sgn} (x){\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+{\frac {31}{200}}e^{-x^{2}}-{\frac {341}{8000}}e^{-2x^{2}}\right).}
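A small sketch (ours, not from the article) that evaluates this two-coefficient approximation and scans a grid for its worst relative error, which should come out near the quoted 3.6·10⁻³ around x ≈ 1.38:

```python
import math

def erf_burmann2(x):
    """Two-coefficient Buermann-type approximation with c1 = 31/200, c2 = -341/8000."""
    s = math.copysign(1.0, x)
    return (2.0 / math.sqrt(math.pi)) * s * math.sqrt(1.0 - math.exp(-x * x)) * (
        math.sqrt(math.pi) / 2.0
        + (31.0 / 200.0) * math.exp(-x * x)
        - (341.0 / 8000.0) * math.exp(-2.0 * x * x))

worst = max((abs(erf_burmann2(x) - math.erf(x)) / math.erf(x), x)
            for x in (i / 1000.0 for i in range(1, 5001)))
print(worst)   # roughly (3.6e-3, 1.38)
```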
Inverse error function
Given a complex number z, there is not a unique complex number w satisfying
{\displaystyle \operatorname {erf} (w)=z}
, so a true inverse function would be multivalued. However, for −1 < x < 1, there is a unique real number denoted
{\displaystyle \operatorname {erf} ^{-1}(x)}
satisfying
{\displaystyle \operatorname {erf} \left(\operatorname {erf} ^{-1}(x)\right)=x}
The inverse error function is usually defined with domain (−1, 1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series
{\displaystyle \operatorname {erf} ^{-1}(z)=\sum _{k=0}^{\infty }{\frac {c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},}
where c0 = 1 and
{\displaystyle c_{k}=\sum _{m=0}^{k-1}{\frac {c_{m}c_{k-1-m}}{(m+1)(2m+1)}}=\left\{1,1,{\frac {7}{6}},{\frac {127}{90}},{\frac {4369}{2520}},{\frac {34807}{16200}},\ldots \right\}.}
So we have the series expansion (note that common factors have been canceled from numerators and denominators):
{\displaystyle \operatorname {erf} ^{-1}(z)={\tfrac {1}{2}}{\sqrt {\pi }}\left(z+{\frac {\pi }{12}}z^{3}+{\frac {7\pi ^{2}}{480}}z^{5}+{\frac {127\pi ^{3}}{40320}}z^{7}+{\frac {4369\pi ^{4}}{5806080}}z^{9}+{\frac {34807\pi ^{5}}{182476800}}z^{11}+\cdots \right).}
(After cancellation, the numerator and denominator fractions, as well as the uncancelled numerator terms, are catalogued as integer sequences in the OEIS.) Note that the error function's value at ±∞ is equal to ±1.
For |z| < 1, we have
{\displaystyle \operatorname {erf} \left(\operatorname {erf} ^{-1}(z)\right)=z}
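The coefficient recurrence and the series are easy to reproduce; this Python sketch (helper names ours) computes the c_k exactly with fractions and checks that erf(erf⁻¹(z)) ≈ z, noting that convergence slows as |z| approaches 1:

```python
import math
from fractions import Fraction

def inverse_erf_coeffs(n):
    """c_0 = 1, c_k = sum_{m=0}^{k-1} c_m c_{k-1-m} / ((m+1)(2m+1)), as exact fractions."""
    c = [Fraction(1)]
    for k in range(1, n):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1)) for m in range(k)))
    return c

def erfinv_series(z, n_terms=40):
    c = inverse_erf_coeffs(n_terms)
    w = math.sqrt(math.pi) / 2.0 * z
    return sum(float(c[k]) / (2 * k + 1) * w ** (2 * k + 1) for k in range(n_terms))

print(inverse_erf_coeffs(6))             # 1, 1, 7/6, 127/90, 4369/2520, 34807/16200
for z in (0.1, 0.5, 0.9):
    print(z, math.erf(erfinv_series(z)))  # close to z; slowest near |z| = 1
```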
The inverse complementary error function is defined as
{\displaystyle \operatorname {erfc} ^{-1}(1-z)=\operatorname {erf} ^{-1}(z).}
For real x, there is a unique real number
{\displaystyle \operatorname {erfi} ^{-1}(x)}
satisfying
{\displaystyle \operatorname {erfi} \left(\operatorname {erfi} ^{-1}(x)\right)=x}
. The inverse imaginary error function is defined as
{\displaystyle \operatorname {erfi} ^{-1}(x)}
For any real x, Newton's method can be used to compute {\displaystyle \operatorname {erfi} ^{-1}(x)}, and for {\displaystyle -1\leq x\leq 1}
, the following Maclaurin series converges:
{\displaystyle \operatorname {erfi} ^{-1}(z)=\sum _{k=0}^{\infty }{\frac {(-1)^{k}c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},}
where ck is defined as above.
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is
{\displaystyle \operatorname {erfc} (x)={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\left[1+\sum _{n=1}^{\infty }(-1)^{n}{\frac {1\cdot 3\cdot 5\cdots (2n-1)}{(2x^{2})^{n}}}\right]={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{\infty }(-1)^{n}{\frac {(2n-1)!!}{(2x^{2})^{n}}},}
where (2n − 1)!! is the double factorial: the product of all odd numbers up to (2n − 1). This series diverges for every finite x, and its meaning as asymptotic expansion is that, for any
{\displaystyle N\in \mathbb {N} }
{\displaystyle \operatorname {erfc} (x)={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{N-1}(-1)^{n}{\frac {(2n-1)!!}{(2x^{2})^{n}}}+R_{N}(x)}
where the remainder, in Landau notation, is
{\displaystyle R_{N}(x)=O\left(x^{1-2N}e^{-x^{2}}\right)}
as {\displaystyle x\to \infty .}
Indeed, the exact value of the remainder is
{\displaystyle R_{N}(x):={\frac {(-1)^{N}}{\sqrt {\pi }}}2^{1-2N}{\frac {(2N)!}{N!}}\int _{x}^{\infty }t^{-2N}e^{-t^{2}}\,dt,}
which follows easily by induction, writing
{\displaystyle e^{-t^{2}}=-(2t)^{-1}\left(e^{-t^{2}}\right)'}
and integrating by parts.
For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of erfc(x) (while for not too large values of x note that the above Taylor expansion at 0 provides a very fast convergence).
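The truncation behaviour is easy to see numerically; in this sketch (ours), each successive term is obtained from the previous one via the ratio −(2n−1)/(2x²), and the partial sums are compared with math.erfc:

```python
import math

def erfc_asymptotic(x, n_terms):
    """Partial sum of the divergent asymptotic series for erfc at large x."""
    s = 1.0
    term = 1.0
    for n in range(1, n_terms):
        term *= -(2 * n - 1) / (2.0 * x * x)   # builds (-1)^n (2n-1)!! / (2x^2)^n
        s += term
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

x = 3.0
for n_terms in (1, 2, 3, 5, 8):
    print(n_terms, erfc_asymptotic(x, n_terms), math.erfc(x))
```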
A continued fraction expansion of the complementary error function is:
{\displaystyle \operatorname {erfc} (z)={\frac {z}{\sqrt {\pi }}}e^{-z^{2}}{\cfrac {1}{z^{2}+{\cfrac {a_{1}}{1+{\cfrac {a_{2}}{z^{2}+{\cfrac {a_{3}}{1+\dotsb }}}}}}}}\qquad a_{m}={\frac {m}{2}}.}
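This continued fraction can be evaluated bottom-up at a fixed depth; the sketch below (ours; the depth is an arbitrary choice, and accuracy improves for larger positive z) compares it with math.erfc:

```python
import math

def erfc_cf(z, depth=80):
    """Evaluate the continued fraction with a_m = m/2 from the bottom up."""
    d = z * z if depth % 2 == 0 else 1.0      # innermost partial denominator
    for k in range(depth - 1, -1, -1):
        base = z * z if k % 2 == 0 else 1.0   # denominators alternate z^2, 1, z^2, 1, ...
        d = base + ((k + 1) / 2.0) / d        # a_{k+1} = (k + 1)/2
    return z / math.sqrt(math.pi) * math.exp(-z * z) / d

for z in (2.0, 3.0, 5.0):
    print(z, erfc_cf(z), math.erfc(z))
```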
The integral of the error function with a Gaussian density function is
{\displaystyle \operatorname {erf} \left[{\frac {b-ac}{\sqrt {1+2a^{2}d^{2}}}}\right]=\int _{-\infty }^{\infty }{\frac {\operatorname {erf} \left(ax+b\right)}{\sqrt {2\pi d^{2}}}}\exp {\left[-{\frac {(x+c)^{2}}{2d^{2}}}\right]}\,dx,\quad a,b,c,d\in \mathbb {R} }
The inverse factorial series is:
{\displaystyle {\begin{aligned}\operatorname {erfc} z&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}Q_{n}}{{(z^{2}+1)}^{\bar {n}}}}\\&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\left(1-{\frac {1}{2}}{\frac {1}{(z^{2}+1)}}+{\frac {1}{4}}{\frac {1}{(z^{2}+1)(z^{2}+2)}}-\cdots \right)\end{aligned}}}
converges for {\displaystyle \operatorname {Re} (z^{2})>0,} where
{\displaystyle Q_{n}{\stackrel {\text{def}}{=}}{\frac {1}{\Gamma (1/2)}}\int _{0}^{\infty }\tau (\tau -1)\cdots (\tau -n+1)\tau ^{-1/2}e^{-\tau }d\tau =\sum _{k=0}^{n}\left({\frac {1}{2}}\right)^{\bar {k}}s(n,k),}
Here {\displaystyle z^{\bar {n}}} denotes the rising factorial, and {\displaystyle s(n,k)} denotes a signed Stirling number of the first kind.
Abramowitz and Stegun give several approximations of varying accuracy (their equations 7.1.25–28). This allows one to choose the fastest approximation suitable for a given application. In order of increasing accuracy, they are:
{\displaystyle \operatorname {erf} (x)\approx 1-{\frac {1}{(1+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+a_{4}x^{4})^{4}}},\qquad x\geq 0}
    (maximum error: 5×10⁻⁴)
where a1 = 0.278393, a2 = 0.230389, a3 = 0.000972, a4 = 0.078108
{\displaystyle \operatorname {erf} (x)\approx 1-(a_{1}t+a_{2}t^{2}+a_{3}t^{3})e^{-x^{2}},\quad t={\frac {1}{1+px}},\qquad x\geq 0}
    (maximum error: 2.5×10⁻⁵)
where p = 0.47047, a1 = 0.3480242, a2 = -0.0958798, a3 = 0.7478556
{\displaystyle \operatorname {erf} (x)\approx 1-{\frac {1}{(1+a_{1}x+a_{2}x^{2}+\cdots +a_{6}x^{6})^{16}}},\qquad x\geq 0}
    (maximum error: 3×10⁻⁷)
where a1 = 0.0705230784, a2 = 0.0422820123, a3 = 0.0092705272, a4 = 0.0001520143, a5 = 0.0002765672, a6 = 0.0000430638
{\displaystyle \operatorname {erf} (x)\approx 1-(a_{1}t+a_{2}t^{2}+\cdots +a_{5}t^{5})e^{-x^{2}},\quad t={\frac {1}{1+px}}}
    (maximum error: 1.5×10⁻⁷)
where p = 0.3275911, a1 = 0.254829592, a2 = -0.284496736, a3 = 1.421413741, a4 = -1.453152027, a5 = 1.061405429
All of these approximations are valid for x ≥ 0. To use these approximations for negative x, use the fact that erf(x) is an odd function, so erf(x) = -erf(-x).
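As an illustration, the second approximation above (p = 0.47047 with three coefficients) can be coded with a Horner evaluation and the odd extension; this sketch (ours) also scans a grid for the worst absolute error, which should be near the quoted 2.5×10⁻⁵:

```python
import math

def erf_as3(x):
    """Three-coefficient approximation: erf(x) ~ 1 - (a1 t + a2 t^2 + a3 t^3) e^(-x^2)."""
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.47047 * x)
    poly = t * (0.3480242 + t * (-0.0958798 + t * 0.7478556))
    return sign * (1.0 - poly * math.exp(-x * x))

worst = max(abs(erf_as3(i / 100.0) - math.erf(i / 100.0)) for i in range(-500, 501))
print(worst)   # around 2.5e-5
```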
Another approximation is given by
{\displaystyle \operatorname {erf} (x)\approx \operatorname {sgn} (x){\sqrt {1-\exp \left(-x^{2}{\frac {{\frac {4}{\pi }}+ax^{2}}{1+ax^{2}}}\right)}}}
{\displaystyle a={\frac {8(\pi -3)}{3\pi (4-\pi )}}\approx 0.140012.}
This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the error is less than 0.00035 for all x. Using the alternate value a ≈ 0.147 reduces the maximum error to about 0.00012.
This approximation can also be inverted to calculate the inverse error function:
{\displaystyle \operatorname {erf} ^{-1}(x)\approx \operatorname {sgn} (x){\sqrt {{\sqrt {\left({\frac {2}{\pi a}}+{\frac {\ln(1-x^{2})}{2}}\right)^{2}-{\frac {\ln(1-x^{2})}{a}}}}-\left({\frac {2}{\pi a}}+{\frac {\ln(1-x^{2})}{2}}\right)}}.}
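Both the approximation and its closed-form inverse are straightforward to code; the sketch below (ours) round-trips them against math.erf using a = 8(π−3)/(3π(4−π)):

```python
import math

A = 8.0 * (math.pi - 3.0) / (3.0 * math.pi * (4.0 - math.pi))   # ~0.140012

def erf_approx(x):
    s = math.copysign(1.0, x)
    x2 = x * x
    return s * math.sqrt(1.0 - math.exp(-x2 * (4.0 / math.pi + A * x2) / (1.0 + A * x2)))

def erfinv_approx(x):
    """Inverse of the approximation above; valid for -1 < x < 1."""
    s = math.copysign(1.0, x)
    l = math.log(1.0 - x * x)
    u = 2.0 / (math.pi * A) + l / 2.0
    return s * math.sqrt(math.sqrt(u * u - l / A) - u)

for x in (0.25, 1.0, 2.0):
    print(x, erf_approx(x), math.erf(x), erfinv_approx(erf_approx(x)))
```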
Exponential bounds and a pure exponential approximation for the complementary error function are given by
{\displaystyle {\begin{aligned}\operatorname {erfc} (x)&\leq {\frac {1}{2}}e^{-2x^{2}}+{\frac {1}{2}}e^{-x^{2}}\leq e^{-x^{2}},\qquad x>0\\\operatorname {erfc} (x)&\approx {\frac {1}{6}}e^{-x^{2}}+{\frac {1}{2}}e^{-{\frac {4}{3}}x^{2}},\qquad x>0.\end{aligned}}}
A single-term lower bound is
{\displaystyle \operatorname {erfc} (x)\geq {\sqrt {\frac {2e}{\pi }}}{\frac {\sqrt {\beta -1}}{\beta }}e^{-\beta x^{2}},\qquad x\geq 0,\beta >1,}
where the parameter β can be picked to minimize error on the desired interval of approximation.
Over the complete range of values, there is an approximation with a maximal error of
{\displaystyle 1.2\times 10^{-7}}
, as follows:
{\displaystyle \operatorname {erf} (x)={\begin{cases}1-\tau &{\text{for }}x\geq 0\\\tau -1&{\text{for }}x<0\end{cases}}}
{\displaystyle {\begin{aligned}\tau ={}&t\cdot \exp \left(-x^{2}-1.26551223+1.00002368t+0.37409196t^{2}+0.09678418t^{3}\right.\\&\left.{}-0.18628806t^{4}+0.27886807t^{5}-1.13520398t^{6}+1.48851587t^{7}\right.\\&\left.{}-0.82215223t^{8}+0.17087277t^{9}\right)\end{aligned}}}
{\displaystyle t={\frac {1}{1+0.5|x|}}.}
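A direct implementation of this piecewise form is short; in the sketch below (ours) the coefficient list reproduces the exponent polynomial above via Horner's rule (the numerical coefficients are the standard ones from the Numerical Recipes erfc routine cited in the references), and the worst error over a grid should be near the quoted 1.2×10⁻⁷:

```python
import math

_COEFFS = [-1.26551223, 1.00002368, 0.37409196, 0.09678418, -0.18628806,
           0.27886807, -1.13520398, 1.48851587, -0.82215223, 0.17087277]

def erf_piecewise(x):
    t = 1.0 / (1.0 + 0.5 * abs(x))
    poly = 0.0
    for c in reversed(_COEFFS):        # Horner: c0 + c1 t + ... + c9 t^9
        poly = poly * t + c
    tau = t * math.exp(-x * x + poly)
    return 1.0 - tau if x >= 0 else tau - 1.0

worst = max(abs(erf_piecewise(i / 100.0) - math.erf(i / 100.0)) for i in range(-1000, 1001))
print(worst)   # around 1.2e-7
```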
Also, over the complete range of values, the following simple approximation holds for
{\displaystyle \operatorname {erfc} (x)}
, with a maximal error of
{\displaystyle 6.3\times 10^{-4}}
{\displaystyle \operatorname {erfc} (x)\approx 1.3693\exp \left(-0.8072(x+0.6388)^{2}\right)}
When the results of a series of measurements are described by a normal distribution with standard deviation {\displaystyle \textstyle \sigma } and expected value 0, then {\displaystyle \textstyle \operatorname {erf} \left({\frac {a}{\sigma {\sqrt {2}}}}\right)} is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.
The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.
The error function and its approximations can be used to estimate results that hold with high or with low probability. Given a random variable {\displaystyle X\sim \operatorname {Norm} [\mu ,\sigma ]} and a constant {\displaystyle L<\mu }:
{\displaystyle \Pr[X\leq L]={\frac {1}{2}}+{\frac {1}{2}}\operatorname {erf} \left({\frac {L-\mu }{{\sqrt {2}}\sigma }}\right)\approx A\exp \left(-B\left({\frac {L-\mu }{\sigma }}\right)^{2}\right)}
where A and B are certain numeric constants. If L is sufficiently far from the mean, i.e.
{\displaystyle \mu -L\geq \sigma {\sqrt {\ln {k}}}}
then {\displaystyle \Pr[X\leq L]\leq A\exp(-B\ln {k})={\frac {A}{k^{B}}},}
so the probability goes to 0 as
{\displaystyle k\to \infty }
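To make the bound concrete, the sketch below (ours) computes the exact tail probability via erf and compares it with an exponential bound of the stated form; the particular constants A = B = 1/2 are an illustrative choice (the standard Chernoff-type Gaussian tail bound), not values given in the text:

```python
import math

def pr_x_le_l(mu, sigma, L):
    """Exact Pr[X <= L] for X ~ Normal(mu, sigma), via erf."""
    return 0.5 + 0.5 * math.erf((L - mu) / (math.sqrt(2.0) * sigma))

mu, sigma, L = 10.0, 2.0, 4.0
exact = pr_x_le_l(mu, sigma, L)
bound = 0.5 * math.exp(-0.5 * ((L - mu) / sigma) ** 2)   # A = B = 1/2 (illustrative)
print(exact, bound)   # the exponential bound dominates the exact tail probability
```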
The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ, also named norm(x) by some software languages, as they differ only by scaling and translation. Indeed,
{\displaystyle \Phi (x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{\tfrac {-t^{2}}{2}}\,dt={\frac {1}{2}}\left[1+\operatorname {erf} \left({\frac {x}{\sqrt {2}}}\right)\right]={\frac {1}{2}}\operatorname {erfc} \left(-{\frac {x}{\sqrt {2}}}\right)}
or rearranged for erf and erfc:
{\displaystyle {\begin{aligned}\operatorname {erf} (x)&=2\Phi \left(x{\sqrt {2}}\right)-1\\\operatorname {erfc} (x)&=2\Phi \left(-x{\sqrt {2}}\right)=2\left(1-\Phi \left(x{\sqrt {2}}\right)\right).\end{aligned}}}
Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as
{\displaystyle Q(x)={\frac {1}{2}}-{\frac {1}{2}}\operatorname {erf} \left({\frac {x}{\sqrt {2}}}\right)={\frac {1}{2}}\operatorname {erfc} \left({\frac {x}{\sqrt {2}}}\right).}
The inverse of {\displaystyle \textstyle \Phi } is known as the normal quantile function, or probit function, and may be expressed in terms of the inverse error function as
{\displaystyle \operatorname {probit} (p)=\Phi ^{-1}(p)={\sqrt {2}}\operatorname {erf} ^{-1}(2p-1)=-{\sqrt {2}}\operatorname {erfc} ^{-1}(2p).}
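These conversions are one-liners in code; the sketch below (ours) expresses Φ, Q and erf in terms of each other using math.erf/math.erfc, and uses statistics.NormalDist (Python 3.8+) for the probit, since the standard library has no erf⁻¹:

```python
import math
from statistics import NormalDist

def phi(x): return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))      # standard normal CDF
def q(x):   return 0.5 * math.erfc(x / math.sqrt(2.0))             # tail probability
def erf_from_phi(x): return 2.0 * phi(x * math.sqrt(2.0)) - 1.0

x = 1.5
print(phi(x) + q(x))                  # 1.0: Phi and Q are complementary
print(erf_from_phi(x), math.erf(x))   # the two routes agree

p = 0.975
print(NormalDist().inv_cdf(p))        # probit(p) = sqrt(2) * erfinv(2p - 1) ~ 1.95996
```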
The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.
The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function):
{\displaystyle \operatorname {erf} (x)={\frac {2x}{\sqrt {\pi }}}M\left({\frac {1}{2}},{\frac {3}{2}},-x^{2}\right).}
It has a simple expression in terms of the Fresnel integral.
In terms of the regularized gamma function P and the incomplete gamma function,
{\displaystyle \operatorname {erf} (x)=\operatorname {sgn} (x)P\left({\frac {1}{2}},x^{2}\right)={\operatorname {sgn} (x) \over {\sqrt {\pi }}}\gamma \left({\frac {1}{2}},x^{2}\right).}
where {\displaystyle \textstyle \operatorname {sgn} (x)} is the sign function.
Figure: graph of generalised error functions En(x). Grey curve: E1(x) = (1 − e^(−x))/√π; red curve: E2(x) = erf(x); green curve: E3(x); blue curve: E4(x); gold curve: E5(x).
Some authors discuss the more general functions:
{\displaystyle E_{n}(x)={\frac {n!}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{n}}\,dt={\frac {n!}{\sqrt {\pi }}}\sum _{p=0}^{\infty }(-1)^{p}{\frac {x^{np+1}}{(np+1)p!}}.}
Notable cases are:
E0(x) is a straight line through the origin:
{\displaystyle \textstyle E_{0}(x)={\frac {x}{e{\sqrt {\pi }}}}}
E2(x) is the error function, erf(x).
After division by n!, all the En for odd n look similar (but not identical) to each other. Similarly, the En for even n look similar (but not identical) to each other after a simple division by n!. All generalised error functions for n > 0 look similar on the positive x side of the graph.
These generalised functions can equivalently be expressed for x > 0 using the gamma function and incomplete gamma function:
{\displaystyle E_{n}(x)={\frac {1}{\sqrt {\pi }}}\Gamma (n)\left(\Gamma \left({\frac {1}{n}}\right)-\Gamma \left({\frac {1}{n}},x^{n}\right)\right),\quad \quad x>0.}
Therefore, we can define the error function in terms of the incomplete Gamma function:
{\displaystyle \operatorname {erf} (x)=1-{\frac {1}{\sqrt {\pi }}}\Gamma \left({\frac {1}{2}},x^{2}\right).}
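The generalised functions are easy to evaluate from either the series or the defining integral; the sketch below (ours; term counts and step sizes are arbitrary) confirms that E₂ recovers erf:

```python
import math

def E_series(n, x, terms=60):
    """E_n(x) via the power series n!/sqrt(pi) * sum (-1)^p x^(np+1) / ((np+1) p!)."""
    return math.factorial(n) / math.sqrt(math.pi) * sum(
        (-1) ** p * x ** (n * p + 1) / ((n * p + 1) * math.factorial(p)) for p in range(terms))

def E_quad(n, x, steps=100_000):
    """E_n(x) via midpoint-rule quadrature of n!/sqrt(pi) * integral_0^x exp(-t^n) dt."""
    h = x / steps
    return math.factorial(n) / math.sqrt(math.pi) * h * sum(
        math.exp(-((k + 0.5) * h) ** n) for k in range(steps))

print(E_series(2, 1.0), E_quad(2, 1.0), math.erf(1.0))   # E_2 is erf
print(E_series(3, 1.0), E_quad(3, 1.0))
```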
The iterated integrals of the complementary error function are defined by
{\displaystyle \operatorname {i^{n}erfc} (z)=\int _{z}^{\infty }\operatorname {i^{n-1}erfc} (\zeta )\,d\zeta .}
{\displaystyle \operatorname {i^{0}erfc} (z)=\operatorname {erfc} (z)}
{\displaystyle \operatorname {i^{1}erfc} (z)=\operatorname {ierfc} (z)={\frac {1}{\sqrt {\pi }}}e^{-z^{2}}-z\operatorname {erfc} (z)}
{\displaystyle \operatorname {i^{2}erfc} (z)={\frac {1}{4}}\left[\operatorname {erfc} (z)-2z\operatorname {ierfc} (z)\right]}
The general recurrence formula is
{\displaystyle 2n\operatorname {i^{n}erfc} (z)=\operatorname {i^{n-2}erfc} (z)-2z\operatorname {i^{n-1}erfc} (z)}
They have the power series
{\displaystyle i^{n}\operatorname {erfc} (z)=\sum _{j=0}^{\infty }{\frac {(-z)^{j}}{2^{n-j}j!\Gamma \left(1+{\frac {n-j}{2}}\right)}},}
from which follow the symmetry properties
{\displaystyle i^{2m}\operatorname {erfc} (-z)=-i^{2m}\operatorname {erfc} (z)+\sum _{q=0}^{m}{\frac {z^{2q}}{2^{2(m-q)-1}(2q)!(m-q)!}}}
{\displaystyle i^{2m+1}\operatorname {erfc} (-z)=i^{2m+1}\operatorname {erfc} (z)+\sum _{q=0}^{m}{\frac {z^{2q+1}}{2^{2(m-q)-1}(2q+1)!(m-q)!}}.}
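As a quick check of the recurrence and the closed forms above, the sketch below (ours) builds iⁿerfc(z) forward from the two base cases; the forward recurrence is fine for small n but gradually loses accuracy as n grows:

```python
import math

def inerfc(n, z):
    """i^n erfc(z) via 2n i^n erfc(z) = i^(n-2) erfc(z) - 2z i^(n-1) erfc(z)."""
    prev2 = math.erfc(z)                                              # i^0 erfc
    prev1 = math.exp(-z * z) / math.sqrt(math.pi) - z * math.erfc(z)  # i^1 erfc = ierfc
    if n == 0:
        return prev2
    if n == 1:
        return prev1
    for m in range(2, n + 1):
        prev2, prev1 = prev1, (prev2 - 2.0 * z * prev1) / (2.0 * m)
    return prev1

z = 0.5
print(inerfc(2, z), 0.25 * (math.erfc(z) - 2.0 * z * inerfc(1, z)))   # matches the i^2 erfc formula
```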
See also:
Gaussian integral, over the whole real line
Gaussian function, derivative
Dawson function, renormalized imaginary error function
Normal cumulative distribution function, a scaled and shifted form of error function
Probit, the inverse or quantile function of the normal CDF
Q-function, the tail probability of the normal distribution
Andrews, Larry C.;
Greene, William H.; Econometric Analysis (fifth edition), Prentice-Hall, 1993, p. 926, fn. 11
Glaisher, James Whitbread Lee (July 1871). London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 4th series. 42 (277): 294–302.
Glaisher, James Whitbread Lee (September 1871). London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 4th series. 42 (279): 421–436.
Cody, W. J. (March 1993). ACM Transactions on Mathematical Software. 19 (1): 22–32.
Zaghloul, M. R. (March 1, 2007). Monthly Notices of the Royal Astronomical Society. 375 (3).
John W. Craig, Proceedings of the 1991 IEEE Military Communication Conference, vol. 2, pp. 571–575.
H. M. Schöpf and P. H. Supancic, "On Bürmann's Theorem and Its Application to Problems of Linear and Nonlinear Heat Transfer and Diffusion," The Mathematica Journal, 2014. doi:10.3888/tmj.16-11.
. Wolfram MathWorld—A Wolfram Web Resource.
Bergsma, Wicher.
Cuyt, Annie A. M.; Petersen, Vigdis B.; Verdonk, B.; Waadeland, H.; Jones, William B. (2008). Handbook of Continued Fractions for Special Functions. Springer-Verlag.
(in German). 4: 390–415.
Eq. (3) on page 283 of Nielson, Niels (1906) (in German). Leipzig: B. G. Teubner.
Winitzki, Sergei (6 February 2008).
Chang, Seok-Ho; Cosman, Pamela C.; Milstein, Laurence B. (November 2011). IEEE Transactions on Communications. 59 (11).
Numerical Recipes in Fortran 77: The Art of Scientific Computing, 1992, page 214, Cambridge University Press.
Carslaw, H. S.; Jaeger, J. C. (1959), Conduction of Heat in Solids (2nd ed.), Oxford University Press, p. 484.
Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. 55 (ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 297.
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press.
Temme, Nico M. (2010), in Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press.