# This worksheet contains the Maple commands from Chapter 9 of
# Introduction to Scientific Programming by Joseph L. Zachary.
--------------------------------------------------------------------------------
#
# (9.1) The area of the "slice" of circle from Figure 9.1.
#
> X := theta/(2*Pi) * Pi*R^2;
--------------------------------------------------------------------------------
#
# (9.2) The length of the side "R" from Figures 9.2 and 9.3.
#
> R := 2 * r * cos(theta/2);
--------------------------------------------------------------------------------
#
# (9.3) The area of each of the triangles from Figure 9.2.
#
> Y := 1/2 * R * r * sin(theta/2);
--------------------------------------------------------------------------------
#
# (9.4) The area of region "Z" from Figure 9.2.
#
> Z := (2*theta)/(2*Pi) * Pi*r^2;
--------------------------------------------------------------------------------
#
# (9.5) The area of the region where the cow will be unable to graze.
#
> A := Z - (X - 2*Y);
--------------------------------------------------------------------------------
#
# (9.6) We simplify the expression from (9.5) by using the "combine"
# function.
#
> combine(A, trig);
--------------------------------------------------------------------------------
#
# (9.9) A simple example of using the "solve" function.
#
> solve(2*x = 4, x);
--------------------------------------------------------------------------------
#
# (9.10) Using the "solve" function to solve two simultaneous linear
# equations.
#
> solve({3*x + 2*y = 7, x + y = 3}, {x,y});
--------------------------------------------------------------------------------
#
# (9.11) Using the "solve" function to solve a quadratic equation.
#
> solve(2*x^2 - 3*x - 9 = 0, x);
--------------------------------------------------------------------------------
#
# (9.12) Using the "solve" function to solve a trigonometric equation.
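#
# The equation in (9.12), cos(alpha) = sin(alpha), is equivalent to
# tan(alpha) = 1, so its solutions are alpha = Pi/4 + k*Pi for every
# integer k. A quick numerical cross-check, sketched here in Python
# rather than Maple:
#
```python
import math

# cos(alpha) = sin(alpha) holds exactly at alpha = Pi/4 + k*Pi;
# verify a few of those solutions numerically.
for k in range(-2, 3):
    alpha = math.pi / 4 + k * math.pi
    assert math.isclose(math.cos(alpha), math.sin(alpha), abs_tol=1e-12)

print(math.cos(math.pi / 4))  # ~0.70711, equal to sin(Pi/4)
```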
#
> solve(cos(alpha) = sin(alpha), alpha);
--------------------------------------------------------------------------------
#
# (9.13) Using the "solve" function to solve a symbolic quadratic equation.
# Notice that the result is the quadratic formula.
#
> solve(a*x^2 + b*x + c = 0, x);
--------------------------------------------------------------------------------
#
# (9.14) Unfortunately, "solve" can't solve just anything. Here we try
# (and fail) to solve the equation that is at the root of Old MacDonald's
# problem.
#
> solve(sin(theta) - theta*cos(theta) = Pi/2, theta);
--------------------------------------------------------------------------------
#
# (9.15) The "fsolve" function uses numerical techniques to solve
# equations. It often succeeds where "solve" fails.
#
> angle := fsolve(sin(theta) - theta*cos(theta) = Pi/2, theta);
--------------------------------------------------------------------------------
#
# (9.16) We compare the values of the two sides of the equation from
# (9.15) when the solution ("angle") is substituted for "theta".
#
> sin(angle) - angle*cos(angle);
> evalf(Pi/2);
--------------------------------------------------------------------------------
#
# (9.17) Because it operates numerically, "fsolve" cannot be used to find
# symbolic solutions.
#
> fsolve(a*x^2 + b*x + c = 0, x);
--------------------------------------------------------------------------------
#
# (9.18) We need to find the root of this function to solve Old
# MacDonald's problem.
#
> cow := (theta) -> sin(theta) - theta*cos(theta) - evalf(Pi/2);
--------------------------------------------------------------------------------
#
# (9.19) This demonstrates that the "cow" function is positive at 2.5 and
# is negative at 1.5. Thus, a root of "cow" must lie somewhere in between.
#
> cow(2.5);
> cow(1.5);
--------------------------------------------------------------------------------
#
# (9.20) This plot verifies that a root of "cow" lies between 1.5 and 2.5.
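#
# The sign change that the plot will show can also be confirmed
# numerically. Here is a sketch in Python rather than Maple; the
# function "cow" transcribes the Maple definition from (9.18):
#
```python
import math

def cow(theta):
    # Python transcription of the Maple function "cow" from (9.18).
    return math.sin(theta) - theta * math.cos(theta) - math.pi / 2

# "cow" is positive at 2.5 and negative at 1.5, so by the intermediate
# value theorem a root lies somewhere in between.
print(cow(2.5) > 0, cow(1.5) < 0)  # True True
```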
#
> plot(cow(theta), theta = 1.5 .. 2.5);
--------------------------------------------------------------------------------
#
# (9.21) Midway between 1.5 and 2.5, "cow" is positive. This means that
# a root lies somewhere between 1.5 (where "cow" is negative) and 2.0.
#
> cow(2.0);
--------------------------------------------------------------------------------
#
# (9.22) This plot verifies that a root of "cow" lies between 1.5 and 2.0.
#
> plot(cow(theta), theta = 1.5 .. 2.0);
--------------------------------------------------------------------------------
#
# (9.23) Midway between 1.5 and 2.0, "cow" is negative. This narrows
# the interval in which a root of "cow" must lie to 1.75--2.0.
#
> cow(1.75);
--------------------------------------------------------------------------------
#
# (9.24) It would be nice if we had a bisection procedure that worked as
# illustrated below. We would pass it the function whose root we wished
# to find, a value at which the function was positive, and a value at which
# the function was negative. For the remainder of this worksheet we will
# work towards developing such a procedure.
#
> bisection(cow, 2.5, 1.5);
--------------------------------------------------------------------------------
#
# (9.25) The order in which statements are executed can make a
# difference. Compare the final values of "x" and "y" in this sequence ...
#
> x := 1:
> x := 2;
> y := x^2;
--------------------------------------------------------------------------------
#
# (9.26) ... to the final values of "x" and "y" in this sequence. The only
# difference is that we have permuted the order of the last two assignment
# statements.
#
> x := 1:
> y := x^2;
> x := 2;
--------------------------------------------------------------------------------
#
# (Figure 9.5) This is the beginning of our attempt to automate the
# bisection process. We begin with a function "f" to be solved and values
# "pos" and "neg" such that f(pos)>0 and f(neg)<0.
# We repeatedly
# average "pos" and "neg", test the value of "f" at the average, and update
# either "pos" or "neg" appropriately. Notice that we (as the creators of
# the statement sequence) had to determine whether to update "pos" or
# "neg" at each step. If we were to repeat this process enough times, we
# would close in on a root of "f".
#
> f := cow;
> pos := 2.5;
> neg := 1.5;
>
> avg := (pos + neg) / 2.0;
> f(avg);
> pos := avg;
>
> avg := (pos + neg) / 2.0;
> f(avg);
> neg := avg;
--------------------------------------------------------------------------------
#
# (Figure 9.6) Here we have inserted conditional statements into our
# statement sequence. These conditionals decide whether to update "pos"
# or "neg" at each step.
#
> f := cow;
> pos := 2.5;
> neg := 1.5;
>
> avg := (pos + neg) / 2.0;
> if (f(avg) >= 0) then
>   pos := avg;
> else
>   neg := avg;
> fi;
>
> avg := (pos + neg) / 2.0;
> if (f(avg) >= 0) then
>   pos := avg;
> else
>   neg := avg;
> fi;
--------------------------------------------------------------------------------
#
# (Figure 9.7) The example above contained two consecutive identical
# statement blocks. Here, we make that statement block the body of a
# "while" statement, which repeatedly executes its body until the
# difference between "pos" and "neg" is small.
#
> f := cow;
> pos := 2.5;
> neg := 1.5;
>
> while (abs(pos-neg) >= 1e-6) do
>   avg := (pos + neg) / 2.0;
>   if (f(avg) >= 0) then
>     pos := avg;
>   else
>     neg := avg;
>   fi;
> od;
--------------------------------------------------------------------------------
#
# (9.27) After the while loop above has run, the value of "avg" is fairly
# close to being a root of "f".
#
> f(avg);
--------------------------------------------------------------------------------
#
# (Figure 9.8) This procedure encapsulates the bisection process. It takes
# three parameters: the "function" to be solved, a "positive" guess such
# that function(positive)>0, and a "negative" guess such that
# function(negative)<0.
# It returns an approximation to a root of
# "function".
#
> bisection := proc (function, positive, negative)
>
>   local f, avg, pos, neg;
>
>   f := function;
>   pos := positive;
>   neg := negative;
>
>   while (abs(pos-neg) >= 1e-6) do
>     avg := (pos + neg) / 2.0;
>     if (f(avg) >= 0) then
>       pos := avg;
>     else
>       neg := avg;
>     fi;
>   od;
>
>   RETURN(avg);
>
> end;
--------------------------------------------------------------------------------
#
# (9.28) We use the "bisection" procedure to approximate a root of "cow".
#
> bisection(cow, 2.5, 1.5);
--------------------------------------------------------------------------------
#
# (9.29) We compare the roots of "cow" found by "bisection" and "fsolve".
# They differ in the last three digits of their mantissas, which is evidence
# that the convergence criterion used by "bisection" could be improved.
#
> bisection(cow, 2.5, 1.5);
> fsolve(cow(theta), theta);
--------------------------------------------------------------------------------
#
# (9.30) If we try to use "bisection" to solve this simple linear equation,
# the while loop that it contains will never terminate because the positive
# and negative guesses will never get very close together. If you evaluate
# this expression, you will need to interrupt the execution.
#
> bisection((x) -> x - 20000., 30000., 10000.);
--------------------------------------------------------------------------------
#
# (9.31) If we try to use "bisection" to solve this linear equation, we won't
# get very close to a root (on a relative basis). The root is so small that the
# positive and negative guesses get within 10^(-6) of each other very
# quickly.
#
> bisection((x) -> x - 1e-12, 1.0, 0);
--------------------------------------------------------------------------------
#
# (9.32) In this equation, the two guesses are so close together to begin
# with that the while loop within "bisection" is never executed, which
# means that "avg" is never given a value, which means that this useless
# symbolic result is reported.
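#
# The failures in (9.30)--(9.32) all stem from the fixed absolute
# tolerance of 1e-6, and (9.31) and (9.32) are easy to reproduce in a
# Python sketch of the Figure 9.8 procedure. (Case (9.30) is not shown:
# its non-termination depends on Maple's 10-digit default precision,
# and in double precision the loop does terminate. Maple's unassigned
# "avg" is modeled here as a None return.)
#
```python
# Python sketch of the fixed-tolerance "bisection" from Figure 9.8.
def bisection(f, pos, neg):
    avg = None                      # stays None if the loop never runs
    while abs(pos - neg) >= 1e-6:
        avg = (pos + neg) / 2.0
        if f(avg) >= 0:
            pos = avg
        else:
            neg = avg
    return avg

# (9.31): the root is 1e-12, but the loop stops as soon as the guesses
# are within 1e-6 of each other, so the answer is wrong by a factor of
# about a million on a relative basis.
print(bisection(lambda x: x - 1e-12, 1.0, 0.0))   # ~9.5e-07

# (9.32): the guesses start closer together than 1e-6, so the loop
# body never executes and no meaningful value is produced.
print(bisection(lambda x: x - 1e-12, 2e-8, 0.0))  # None
```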
#
> bisection((x) -> x - 1e-12, 2e-8, 0);
--------------------------------------------------------------------------------
#
# (9.33) If we supply faulty positive and negative guesses, "bisection" will
# converge to one of the guesses, and not to a root.
#
> bisection((x) -> cos(x) - x, 1.0, 0.5);
--------------------------------------------------------------------------------
#
# (Figure 9.9) This is an improved version of "bisection". It makes sure
# that "positive" and "negative" have the correct properties, makes sure
# that a number is returned as the result under all circumstances, and
# uses an improved convergence criterion.
#
> bisection := proc (function, positive, negative)
>
>   local f, avg, pos, neg;
>
>   f := function;
>   pos := positive;
>   neg := negative;
>
>   if (f(pos) < 0) then
>     print(`Positive bound to bisection is bad`);
>     RETURN();
>   fi;
>
>   if (f(neg) > 0) then
>     print(`Negative bound to bisection is bad`);
>     RETURN();
>   fi;
>
>   avg := pos;
>
>   while (avg <> (pos+neg)/2.0) do
>     avg := (pos + neg) / 2.0;
>     if (f(avg) >= 0) then
>       pos := avg;
>     else
>       neg := avg;
>     fi;
>   od;
>
>   RETURN(avg);
>
> end;
--------------------------------------------------------------------------------
#
# (9.34) We compare the improved "bisection" to "fsolve". Now they
# differ only in the last digit of the mantissa.
#
> bisection(cow, 2.5, 1.5);
> fsolve(cow(theta), theta);
--------------------------------------------------------------------------------
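#
# The improved convergence criterion loops until the midpoint stops
# changing, i.e. until "pos" and "neg" are as close together as the
# floating-point format allows, so the accuracy no longer depends on
# the magnitude of the root. Here is a Python sketch of the improved
# procedure; Maple's printed message followed by an empty RETURN() is
# rendered as a raised exception, the idiomatic Python equivalent.
#
```python
import math

def cow(theta):
    # Python transcription of the Maple function "cow" from (9.18).
    return math.sin(theta) - theta * math.cos(theta) - math.pi / 2

def bisection(f, pos, neg):
    # Check that the guesses really do bracket a root, as the
    # improved Maple procedure does.
    if f(pos) < 0:
        raise ValueError("positive bound to bisection is bad")
    if f(neg) > 0:
        raise ValueError("negative bound to bisection is bad")

    # Iterate until the midpoint stops changing: at that point "pos"
    # and "neg" are adjacent (or equal) floating-point numbers.
    avg = pos
    while avg != (pos + neg) / 2.0:
        avg = (pos + neg) / 2.0
        if f(avg) >= 0:
            pos = avg
        else:
            neg = avg
    return avg

print(bisection(cow, 2.5, 1.5))  # ~1.9057
```
#
# With the guess validation in place, the faulty call from (9.33),
# bisection(lambda x: math.cos(x) - x, 1.0, 0.5), is rejected with an
# error instead of silently converging to one of the guesses.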