Newton–Raphson Method
In numerical analysis, Newton's method (also known as the Newton–Raphson method), named after Isaac Newton and Joseph Raphson, is perhaps the best-known method for finding successively better approximations to the zeroes (or roots) of a real-valued function.
Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root. Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be, depends on the problem. Unfortunately, when iteration begins far from the desired root, Newton's method can easily lead an unwary user astray with little warning. Thus, good implementations of the method embed it in a routine that also detects and perhaps overcomes possible convergence failures.

Given a function ƒ(x) and its derivative ƒ'(x), we begin with a first guess x0.
Provided the function is reasonably well-behaved, a better approximation x1 is

    x1 = x0 − ƒ(x0) / ƒ'(x0)
The process is repeated until a sufficiently accurate value is reached:

    xn+1 = xn − ƒ(xn) / ƒ'(xn)
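The iteration above can be sketched in a few lines of Python; the function and parameter names here are illustrative, not from any particular library, and the stopping tolerance is an assumed choice:

```python
def newton(f, f_prime, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / f_prime(x)
    # As noted above, the method can fail when started far from a root,
    # so a good implementation reports non-convergence rather than looping.
    raise RuntimeError("Newton's method did not converge")

# Example: a root of f(x) = x**2 - 2 (i.e. sqrt(2)), starting at x0 = 1.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

The `max_iter` cap plays the role of the convergence-failure detection mentioned above: when the guess is bad, the loop terminates with an error instead of wandering indefinitely.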
An important and somewhat surprising application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number using only multiplication and subtraction.
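To see why division needs only multiplication and subtraction, apply Newton's method to ƒ(x) = 1/x − a, whose root is 1/a. Since ƒ'(x) = −1/x², the update simplifies to xn+1 = xn(2 − a·xn), which is division-free. A minimal sketch (the starting guess here is an assumption; real implementations pick it from the bit pattern of a):

```python
def reciprocal(a, x0, iterations=6):
    """Approximate 1/a using only multiplication and subtraction.

    Newton's method on f(x) = 1/x - a gives the division-free update
    x_{n+1} = x_n * (2 - a * x_n).
    """
    x = x0
    for _ in range(iterations):
        x = x * (2 - a * x)
    return x

# Example: approximate 1/7 starting from a rough guess of 0.1.
approx = reciprocal(7, 0.1)
```

Because convergence is quadratic, the number of correct digits roughly doubles each iteration, so a handful of iterations suffices once the initial guess is in range (roughly 0 < x0 < 2/a).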
The algorithm is first in the class of Householder's methods, succeeded by Halley's method.
EXAMPLE
Square root of a number
Consider the problem of finding the square root of a number. There are many methods of computing square roots, and Newton's method is one.
For example, if one wishes to find the square root of 612, this is equivalent to finding the solution to

    x² = 612

that is, to finding the positive root of ƒ(x) = x² − 612.
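Substituting ƒ(x) = x² − a and ƒ'(x) = 2x into the Newton iteration gives xn+1 = xn − (xn² − a)/(2xn) = (xn + a/xn)/2, which is the classical Babylonian method. A short sketch applied to a = 612 (the starting guess of 10 is an arbitrary assumption):

```python
def sqrt_newton(a, x0, iterations=20):
    """Newton's method on f(x) = x**2 - a, which simplifies to
    the Babylonian update x_{n+1} = (x_n + a/x_n) / 2."""
    x = x0
    for _ in range(iterations):
        x = (x + a / x) / 2
    return x

# Example from the text: the square root of 612.
approx = sqrt_newton(612, 10.0)
```

Even from the poor initial guess of 10, the iteration homes in on sqrt(612) ≈ 24.7386 within a few steps, illustrating the rapid convergence described above.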