In this worksheet we show how to use Chebyshev polynomials to economize power series.
0. Chebyshev Polynomials
| > | n := 10: |
| > | for i from 0 to n do |
| > | expand(ChebyshevT(i,x)); |
| > | end do; |
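For reference, the first few of these expanded polynomials are (writing T_k for ChebyshevT(k,x)):

$$T_0 = 1,\qquad T_1 = x,\qquad T_2 = 2x^2 - 1,\qquad T_3 = 4x^3 - 3x,\qquad T_4 = 8x^4 - 8x^2 + 1.$$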
The Chebyshev polynomials are written here in the standard basis, i.e. as linear combinations of the monomials x^i. We make this relation explicit with linear algebra, viewing the polynomials generated above as the result of a matrix-vector product.
| > | with(linalg): |
Warning, the protected names norm and trace have been redefined and unprotected
| > | cm := matrix(n+1,n+1): |
| > | for i from 0 to n do |
| > | for j from 0 to n do |
| > | cm[i+1,j+1] := coeff(expand(ChebyshevT(i,x)),x,j); |
| > | end do; |
| > | end do; |
| > | print(cm); |
| > | v := vector(n+1,[seq(x^i,i=0..n)]); |
| > | t := evalm(cm&*v); |
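As a small worked instance of this relation (restricted to degree 3 rather than the full n = 10 built above), the matrix-vector product reads

$$
\begin{pmatrix} T_0 \\ T_1 \\ T_2 \\ T_3 \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
-1 & 0 & 2 & 0 \\
0 & -3 & 0 & 4
\end{pmatrix}
\begin{pmatrix} 1 \\ x \\ x^2 \\ x^3 \end{pmatrix}.
$$

The coefficient matrix is lower triangular with nonzero diagonal (the leading coefficient of T_i is 2^(i-1) for i >= 1), so it is invertible.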
If we take the inverse of the coefficient matrix cm, then we can write the standard monomials as linear combinations of Chebyshev polynomials:
| > | icm := inverse(cm); |
| > | tv := vector(n+1,[seq(ChebyshevT(i,x),i=0..n)]); |
| > | xtv := evalm(icm&*tv); |
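The entries of xtv amount to the standard monomial-to-Chebyshev conversions; the first few are

$$x^2 = \tfrac{1}{2}(T_0 + T_2),\qquad x^3 = \tfrac{1}{4}(3T_1 + T_3),\qquad x^4 = \tfrac{1}{8}(3T_0 + 4T_2 + T_4),\qquad x^5 = \tfrac{1}{16}(10T_1 + 5T_3 + T_5).$$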
With the vector xtv, we can express any polynomial of degree n or less as a linear combination of Chebyshev polynomials.
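Before turning to the examples, here is a minimal sketch (not part of the original worksheet; the name cheb_form is ours) of how this conversion can be packaged as a small procedure. Like the examples below, it assumes the polynomial is written in a separate variable z, so that substituting powers of z does not disturb the x inside the Chebyshev polynomials.

| > | cheb_form := proc(p) |
| > |   # Rewrite a polynomial p in z (degree <= n) as a combination of Chebyshev |
| > |   # polynomials in x, using the vector xtv computed above. |
| > |   local k, q; |
| > |   q := p; |
| > |   for k from n by -1 to 1 do |
| > |     q := subs(z^k = xtv[k+1], q); |
| > |   end do; |
| > |   return q; |
| > | end proc: |
| > | cheb_form(1 + z + z^2);  # equals 3/2*T_0 + T_1 + 1/2*T_2 (recall T_0 = 1, T_1 = x) |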
We will use this to create "economized power series":
1. Economizing power series - first example
| > | ts_exp := taylor(exp(z),z=0); |
| > | ps_exp := convert(ts_exp,polynom); |
| > | cs_exp := ps_exp: |
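| > | # Replace each power z^k of the series variable by its Chebyshev form xtv[k+1]. |
| > | # Because the series is written in z rather than x, the substitution does not |
| > | # touch the x inside the ChebyshevT terms that have already been introduced. |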
| > | for i from 0 to 5 do |
| > | cs_exp := subs(z^(5-i) = xtv[5-i+1],cs_exp): |
| > | end do: |
| > | cs_exp; |
Of course, if we expand the Chebyshev polynomials again in terms of powers of x, we obtain the same polynomial back:
| > | map(t->expand(t),cs_exp); |
We chop off the last two terms of the Chebyshev series before expanding back to monomials in x. Since |ChebyshevT(k,x)| <= 1 on [-1,1], dropping a term changes the polynomial by at most the absolute value of its coefficient, uniformly over the interval:
| > | es_exp := cs_exp - op(5,cs_exp) - op(6,cs_exp); |
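For this example the chopped terms can be written down explicitly: only the z^4/24 and z^5/120 terms of the Maclaurin polynomial contribute to T_4 and T_5 (via x^4 = (3T_0 + 4T_2 + T_4)/8 and x^5 = (10T_1 + 5T_3 + T_5)/16), so the dropped part is T_4/192 + T_5/1920. Chopping it therefore changes the polynomial by at most 1/192 + 1/1920 = 11/1920, roughly 0.006, anywhere on [-1,1].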
This is the economized series expansion of the exponential function:
| > | es_exp := map(t->expand(t),es_exp); |
We compare it with a Maclaurin expansion up to the same degree.
| > | ml_exp := convert(taylor(exp(x),x=0,4),polynom); |
| > | es_err := exp(x)-es_exp: |
| > | ml_err := exp(x)-ml_exp: |
| > | plot([es_err,ml_err],x=-1..1); |
![[Plot]](images/cheb_41.gif)
We see that the error of the economized power series is much more uniform than that of the Maclaurin expansion, at the expense of being less accurate near the origin.
To measure the total error, we take the integral over [-1,+1] of the absolute value of the error functions:
| > | evalf(Int(abs(ml_err),x=-1..1)); # error of the Maclaurin expansion |
| > | evalf(Int(abs(es_err),x=-1..1)); # error of the economized Chebyshev series |
We see that the global error of the economized Chebyshev series is indeed larger, but still of the same order of magnitude as that of the truncated Maclaurin expansion.
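Since the point of economizing is to make the error uniform rather than small on average, it may also be instructive to compare the worst-case errors on [-1,1]. A minimal added sketch (not part of the original worksheet), sampling both error functions:

| > | # Sample both error functions on [-1,1] and take the largest absolute value. |
| > | max(seq(abs(evalf(eval(es_err, x = -1 + k/100))), k = 0 .. 200)); |
| > | max(seq(abs(evalf(eval(ml_err, x = -1 + k/100))), k = 0 .. 200)); |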
2. Economizing power series - second example
The effect of economizing should be more dramatic for slowly converging series, such as the geometric series for 1/(1+z), whose Maclaurin coefficients are all +1 or -1 and do not decay:
| > | m := 10: |
| > | ts_slow := taylor(1/(1+z),z=0,m); |
| > | ps_slow := convert(ts_slow,polynom); |
| > | cs_slow := ps_slow: |
| > | for i from 0 to m do |
| > | cs_slow := subs(z^(m-i) = xtv[m-i+1],cs_slow): |
| > | end do: |
| > | cs_slow; |
Of course, if we expand the Chebyshev polynomials again in terms of powers of x, we obtain the same polynomial back:
| > | map(t->expand(t),cs_slow); |
We chop off the last four terms of the Chebyshev series before expanding to monomials in x:
| > | es_slow := cs_slow - op(m-3,cs_slow) - op(m-2,cs_slow) - op(m-1,cs_slow) - op(m,cs_slow); |
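As a quick added check (not in the original worksheet; the helper name tail is ours), we can expand the chopped tail back to monomials and sample it on [-1,1] to see how small a perturbation the chopping introduces:

| > | tail := expand(cs_slow - es_slow):  # the four chopped Chebyshev terms, as a polynomial in x |
| > | max(seq(abs(evalf(eval(tail, x = -1 + k/100))), k = 0 .. 200)); |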
This is the economized series expansion of the function 1/(1+x):
| > | es_slow := map(t->expand(t),es_slow); |
We compare it with a Maclaurin expansion up to the same degree.
| > | ml_slow := convert(taylor(1/(1+x),x=0,m-4),polynom); |
| > | es_err := 1/(1+x)-es_slow: |
| > | ml_err := 1/(1+x)-ml_slow: |
| > | plot([es_err,ml_err],x=-0.7..0.9); |
![[Plot]](images/cheb_54.gif)
Here we see that the errors in both cases increase dramatically as we approach the endpoints of the interval [-1,1].
Let us again compare the global errors:
| > | evalf(Int(abs(ml_err),x=-0.7..0.9)); |
| > | evalf(Int(abs(es_err),x=-0.7..0.9)); |
On this interval we observe a large difference between the two global errors. If we enlarge the interval, the situation is reversed: the global error of the economized Chebyshev series is actually smaller than that of the Maclaurin expansion:
| > | evalf(Int(abs(ml_err),x=-0.9..1.0)); |
| > | evalf(Int(abs(es_err),x=-0.9..1.0)); |
The following plot visualizes this:
| > | plot([es_err,ml_err],x=-0.9..1.0); |
![[Plot]](images/cheb_59.gif)
| > |