
This worksheet is designed to accompany Chapter 4 of Introduction to Scientific Programming: Computational Problem Solving Using Maple and C by Joseph L. Zachary. In it, we will use Maple to explore the block-stacking problem. (08Oct96)
To use this worksheet you will need to use some Maple extensions that we have created. Read in our blocks package by evaluating the Maple command below. (You will need to have first installed our custom Maple library and configured Maple to use it.)
> with(blocks);
The three functions defined in the library are described in Chapter 4 of the text. Each takes as its parameter the number of blocks in a stack arranged as described in the text and reports back the amount by which the top block extends beyond the table. For example, let's compute (in three different ways) the extension that can be achieved with 100 blocks.
The "blockFloat" function uses floating-point arithmetic to explicitly sum the series 1/2 + 1/4 + 1/6 + ... + 1/(2n). For n = 100, that sum is
> blockFloat(100);
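For readers following along outside Maple, the same explicit floating-point summation can be sketched in Python; the name "block_float" below is just an illustrative stand-in for the package's function, not part of the blocks library:

```python
def block_float(n):
    """Sum 1/2 + 1/4 + ... + 1/(2n) in ordinary floating point."""
    total = 0.0
    for i in range(1, n + 1):
        total += 1.0 / (2 * i)
    return total

print(block_float(100))  # roughly 2.5937
```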
The "blockRat" function uses rational arithmetic to sum the same series. After it has computed an exact sum, we use "evalf" to convert the result into a floating-point number. The difference between this sum and the one calculated with "blockFloat" can be attributed to roundoff error.
> evalf(blockRat(100));
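An exact rational version, analogous in spirit to blockRat (again using an illustrative Python name), can be written with Python's fractions module, which keeps every intermediate result as an exact ratio of integers:

```python
from fractions import Fraction

def block_rat(n):
    """Sum 1/2 + 1/4 + ... + 1/(2n) exactly as a rational number."""
    return sum(Fraction(1, 2 * i) for i in range(1, n + 1))

print(block_rat(4))          # 25/24, exactly
print(float(block_rat(100))) # converted to floating point only at the end
```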
The "blockFast" function computes the sum of the series, not by adding up each term, but by using the closed-form formula (Psi(n+1) + gamma)/2. Notice that we get the same answer as with "blockRat".
> blockFast(100);
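The same loop-free idea can be illustrated in Python via the standard asymptotic expansion of the harmonic numbers, H(n) = ln(n) + gamma + 1/(2n) - 1/(12n^2) + ..., halved. (This is a sketch of the technique, not necessarily the exact formula blockFast uses internally.)

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def block_fast(n):
    """Approximate 1/2 + 1/4 + ... + 1/(2n) = H(n)/2 without a loop."""
    h_n = math.log(n) + GAMMA + 1 / (2 * n) - 1 / (12 * n * n)
    return h_n / 2

print(block_fast(100))  # agrees with the term-by-term sum to many digits
```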
"blockFloat" can be used to study how roundoff error can accumulate in a series of computations. Let's begin by setting Digits to 4, which means that we will be computing with four-digit floating-point mantissas.
> Digits := 4;
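The effect of a short mantissa can be imitated in Python with the decimal module, whose context precision plays a role similar to Maple's Digits. (This is a sketch; Maple's rounding details may differ slightly.)

```python
from decimal import Decimal, getcontext

getcontext().prec = 4  # four significant digits, like Digits := 4

def block_float_short(n):
    """Sum the series with every operation rounded to four digits."""
    total = Decimal(0)
    for i in range(1, n + 1):
        total += Decimal(1) / (2 * i)  # each division and addition rounds
    return total

print(block_float_short(100))  # compare with the true value, about 2.5937
```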
We can now compare the results of calling the three functions for increasingly large numbers of blocks.
> blockFloat(100);
> evalf(blockRat(100));
> blockFast(100);
> blockFloat(200);
> evalf(blockRat(200));
> blockFast(200);
> blockFloat(400);
> evalf(blockRat(400));
> blockFast(400);
> blockFloat(800);
> evalf(blockRat(800));
> blockFast(800);
> blockFloat(1600);
> evalf(blockRat(1600));
> blockFast(1600);
Notice what happens:
Let's go back to 10-digit mantissas and explore how roundoff error accumulates when doing a long series of floating-point calculations.
> Digits := 10;
Using blockFast, we can find the extension possible with ten thousand blocks. (We save the result in the variable "truth".)
> truth := blockFast(10000);
Let's use blockFloat to do the same calculation using mantissa lengths ranging from 1 to 10. For example, with a five-digit mantissa,
> Digits := 5;
the result is as follows. (We save the result in the variable "result".)
> result := blockFloat(10000);
and after going back to ten digits
> Digits := 10;
we find that the relative error is approximately 2.8%.
> abs(truth - result) / truth;
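The relative-error computation itself is just |truth - result| / |truth|, which in Python (with illustrative values) looks like this:

```python
def relative_error(truth, result):
    """Fraction by which result deviates from the true value."""
    return abs(truth - result) / abs(truth)

print(relative_error(4.0, 3.888))  # about 0.028, i.e. a 2.8% relative error
```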
You should do similar calculations for each mantissa length from 1 to 10. When you are done, you will know the relative error exhibited by blockFloat, for each mantissa length, when calculating the extension for ten thousand blocks.
You can visualize the effect of mantissa length on relative error by plotting mantissa length on the x-axis against relative error on the y-axis. To do this, use the command below. It plots the numbers 1 through 10 against some arbitrarily chosen numbers. You will have to replace the arbitrarily chosen numbers with the relative errors that you have calculated. For example, [5, 30] would become [5, .0277]. (Rounding the relative errors to three decimal places is OK.)
> plot([[1, 88], [2, 22], [3, 14], [4, 13], [5, 30],
> [6, 0], [7, 2], [8, 99], [9, 17], [10, 88]],
> 0..10, style=POINT, symbol=BOX,
> title = `Relative error when computing blockFloat(10000)`,
> labels = [`Mantissa length`, `Error`]);
It is also interesting to compare the amount of time required by blockFloat and blockRat to do their calculations. Let's begin by going back to ten-digit mantissas.
> Digits := 10;
For block stack sizes of 100, 200, 400, 800, 1600, and 3200, let's calculate the time required to calculate the extension using both blockFloat and blockRat. For example, the time required to compute blockFloat(100) is the second number displayed by this sequence of commands. (The first is the result of the call to blockFloat.)
> start := time(): blockFloat(100); time() - start;
The timing function is so crude that for quick calculations, it is a good idea to do several calculations and calculate an average. To obtain a better timing result for blockFloat(100), for example, we can do eight identical calculations and then divide the overall result by 8.
> start := time(): blockFloat(100): blockFloat(100): blockFloat(100): blockFloat(100): blockFloat(100): blockFloat(100): blockFloat(100): blockFloat(100); (time() - start) / 8;
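The same average-of-repetitions idea can be sketched in Python with time.perf_counter; the function being timed here is the floating-point summation from earlier, and the helper name is just illustrative:

```python
import time

def block_float(n):
    """Sum 1/2 + 1/4 + ... + 1/(2n) in floating point."""
    total = 0.0
    for i in range(1, n + 1):
        total += 1.0 / (2 * i)
    return total

def average_seconds(fn, arg, reps=8):
    """Time reps identical calls and return the mean, to smooth out
    the granularity of the clock on quick calculations."""
    start = time.perf_counter()
    for _ in range(reps):
        fn(arg)
    return (time.perf_counter() - start) / reps

print(average_seconds(block_float, 100))
```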
For long calculations such as blockFloat(3200), of course, this would be unnecessary and would take an extremely long time.
You should time blockFloat for stack sizes of 100, 200, 400, 800, 1600, and 3200. You can plot your results by completing the following command. You will need to insert the times that you obtain in place of the zeroes.
> plot([[100, 0], [200, 0], [400, 0], [800, 0], [1600, 0], [3200, 0]],
> 0..3200, style=POINT, symbol=BOX,
> title = `Time in seconds required by blockFloat`,
> labels = [`Stack size`, `Time`]);
You should now repeat the timing experiments using blockRat instead of blockFloat. Plot your results by completing the command below.
> plot([[100, 0], [200, 0], [400, 0], [800, 0], [1600, 0], [3200, 0]],
> 0..3200, style=POINT, symbol=BOX,
> title = `Time in seconds required by blockRat`,
> labels = [`Stack size`, `Time`]);
Compare the shapes of the two curves that you create and try to explain the difference.