# Taylor Expansion
Taylor expansions are one of the most powerful tools math has to offer for approximating functions. The idea behind them is to take a non-polynomial function and find a polynomial that approximates it near some input. Polynomials are much easier to compute, differentiate, and integrate.
![[approximation of cosine using derivatives.jpg]]
In the above figure, coefficient $C_1$ was responsible for making sure the polynomial matched the value of cosine at $x=0$, $C_2$ for matching the slope (first derivative) of cosine, and $C_3$ for matching the rate at which the slope changes (second derivative) of cosine. Combining these together, we get a nice quadratic polynomial approximation of the cosine function.
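Concretely, if the quadratic in the figure is written as $P(x)=C_{1}+C_{2}x+C_{3}x^{2}$ (an assumption about the figure's labeling), then matching $\cos 0=1$, $-\sin 0=0$, and $-\cos 0=-1$ forces $C_{1}=1$, $C_{2}=0$, and $2C_{3}=-1$, giving
$
P(x)=1-\frac{x^{2}}{2}
$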
Notice a few regularities in this process:
1. Factorials come up very naturally when we take $n$ successive derivatives of $x^n$, due to the power rule (spelled out right after this list). So for each polynomial term $x^n$, we divide it by $n!$ to cancel this cascading effect produced by the power rule.
2. Adding on new terms doesn't mess up what the old terms should be: each derivative of the polynomial at $x=0$ is controlled by one and only one of the coefficients. (When approximating near a different input, writing the polynomial in plain powers of $x$ loses this property, which is why the general formula below uses powers of $(x-a)$ instead.)
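The power-rule cascade from point 1 looks like this: differentiating $x^n$ a total of $n$ times multiplies down through every exponent,
$
\frac{d^{n}}{d x^{n}} x^{n}=n \cdot(n-1) \cdots 2 \cdot 1=n !
$
so writing the term as $C_n \frac{x^{n}}{n!}$ leaves its $n$-th derivative at $0$ equal to exactly $C_n$.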
This can be summed up as taking information about the higher-order derivatives of a function at a single point and translating it into information about the values of the function near that point.
Thus we can generalize this and obtain the *Taylor polynomial*:
$
P(x)=f(0)+\frac{d f}{d x}(0) \frac{x^{1}}{1 !}+\frac{d^{2} f}{d x^{2}}(0) \frac{x^{2}}{2 !}+\frac{d^{3} f}{d x^{3}}(0) \frac{x^{3}}{3 !}+\cdots
$
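Applied to cosine, whose derivatives cycle through $\cos, -\sin, -\cos, \sin$ and evaluate at $0$ to $1, 0, -1, 0, \dots$, this formula gives the familiar series
$
P(x)=1-\frac{x^{2}}{2 !}+\frac{x^{4}}{4 !}-\frac{x^{6}}{6 !}+\cdots
$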
Now, if we want to approximate the function near a point other than $x=0$, say $x=a$, we can generalize this further into
$
P(x)=f(a)+\frac{d f}{d x}(a) \frac{(x-a)^{1}}{1 !}+\frac{d^{2} f}{d x^{2}}(a) \frac{(x-a)^{2}}{2 !}+\cdots
$
This infinite expansion of terms is known as the *Taylor series expansion of $f$*.
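As a quick way to experiment with this formula, here is a minimal sketch that builds the degree-$n$ Taylor polynomial symbolically. It assumes `sympy` is available, and the helper name `taylor_polynomial` is just made up for this note.
```python
import sympy as sp

x = sp.symbols("x")

def taylor_polynomial(f, a, n):
    """Degree-n Taylor polynomial of f around x = a:
    sum of f^(k)(a) * (x - a)**k / k! for k = 0..n."""
    return sum(sp.diff(f, x, k).subs(x, a) / sp.factorial(k) * (x - a) ** k
               for k in range(n + 1))

# Quadratic approximation of cosine around 0 -> 1 - x**2/2
print(sp.expand(taylor_polynomial(sp.cos(x), 0, 2)))

# Expanding around a different point, a = 1
print(sp.expand(taylor_polynomial(sp.log(x), 1, 3)))
```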
For example, let's take our function to be $e^x$. We know that every derivative of $e^x$ is still $e^x$, and $e^0=1$. So using the generalization above, we can approximate $e^x$ with a polynomial of the form
$
P(x)=1+\frac{x^{1}}{1 !}+ \frac{x^{2}}{2 !}+ \frac{x^{3}}{3 !}+ \frac{x^{4}}{4 !}+\cdots
$
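As a numeric sanity check (a small sketch, with the term count `n_terms` chosen arbitrarily), summing just the first handful of these terms already reproduces $e^x$ to many digits near $x=0$:
```python
import math

def exp_taylor(x, n_terms=15):
    """Approximate e**x by summing the first n_terms of its Taylor series."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

print(exp_taylor(1.0))  # ~2.718281828..., matches
print(math.exp(1.0))    # reference value from the math library
```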
We can also gain some insight into Taylor series by looking at a geometric interpretation of the second term of the polynomial, shown below.
![[geometric interpretation of taylor.jpg]]
## Convergence of Taylor series
Not every Taylor series converges to the true value of the function at every point.
The maximum distance from the expansion point within which the approximation does converge is called the *Radius of Convergence* of the Taylor series.
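For example (an illustration chosen here, not taken from the note above), the series for $\ln(1+x)$ around $0$ has a radius of convergence of $1$: inside that radius the partial sums settle toward the true value, while outside it they blow up no matter how many terms we add.
```python
import math

def ln1p_taylor(x, n_terms):
    """Partial sum of the Taylor series of ln(1 + x) around 0:
    x - x**2/2 + x**3/3 - ..."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n_terms + 1))

print(ln1p_taylor(0.5, 50), math.log(1.5))  # inside the radius: close agreement
print(ln1p_taylor(1.5, 50), math.log(2.5))  # outside the radius: wildly off
```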
---
## References
1. 3Blue1Brown video: https://www.youtube.com/watch?v=3d6DsjIBzJ4