When Least Is Best: How Mathematicians Discovered Many Clever Ways to Make Things as Small (or as Large) as Possible by Paul J. Nahin (PDF)


Ebook Info

  • Published: 2007
  • Number of pages: 392 pages
  • Format: PDF
  • File Size: 22.04 MB
  • Authors: Paul J. Nahin

Description

What is the best way to photograph a speeding bullet? Why does light move through glass in the least amount of time possible? How can lost hikers find their way out of a forest? What will rainbows look like in the future? Why do soap bubbles have a shape that gives them the least area? By combining the mathematical history of extrema with contemporary examples, Paul J. Nahin answers these intriguing questions and more in this engaging and witty volume. He shows how life often works at the extremes–with values becoming as small (or as large) as possible–and how mathematicians over the centuries have struggled to calculate these problems of minima and maxima. From medieval writings to the development of modern calculus to the current field of optimization, Nahin tells the story of Dido’s problem, Fermat and Descartes, Torricelli, Bishop Berkeley, Goldschmidt, and more. Along the way, he explores how to build the shortest bridge possible between two towns, how to shop for garbage bags, how to vary speed during a race, and how to make the perfect basketball shot. Written in a conversational tone and requiring only an early undergraduate level of mathematical knowledge, When Least Is Best is full of fascinating examples and ready-to-try-at-home experiments. This is the first book on optimization written for a wide audience, and math enthusiasts of all backgrounds will delight in its lively topics.

User’s Reviews

Editorial Reviews

  • “This book was terrific fun to read! I thought I would skim the chapters to write my review, but I was hooked by the preface, and read through the first 100 pages in one sitting. . . . [Nahin shows] obvious delight and enjoyment–he is having fun and it is contagious.” —Bonnie Shulman, MAA Online
  • “When Least is Best is clearly the result of immense effort. . . . [Nahin] just seems to get better and better. . . . The book is really a popular book of mathematics that touches on a broad range of problems associated with optimization.” —Dennis S. Bernstein, IEEE Control Systems Magazine
  • “[When Least is Best is] a wonderful sourcebook for projects and is just plain fun to read.” —Choice
  • “This book is highly recommended.” —Clark Kimberling, Mathematical Intelligencer
  • “A valuable and stimulating introduction to problems that have fascinated mathematicians and physicists for millennia.” —D. R. Wilkins, Contemporary Physics
  • “Nahin delivers maximal mathematical enjoyment with minimal perplexity and boredom. . . . [He lets] general readers in on the thrill of riding high-school geometry and algebra to breakthrough insights. . . . A refreshingly lucid and humanizing approach to mathematics.” —Booklist
  • “Anyone with a modest command of calculus, a curiosity about how mathematics developed, and a pad of paper for calculations will enjoy Nahin’s lively book. His enthusiasm is infectious, his writing style is active and fluid, and his examples always have a point. . . . [H]e loves to tell stories, so even the familiar is enjoyably refreshed.” —Donald R. Sherbert, SIAM Review
  • “This is a delightful account of how the concepts of maxima, minima, and differentiation evolved with time. The level of mathematical sophistication is neither abstract nor superficial and it should appeal to a wide audience.” —Ali H. Sayed, University of California, Los Angeles
  • “When Least Is Best is an illustrative historical walk through optimization problems as solved by mathematicians and scientists. Although many of us associate solving optimization with calculus, Paul J. Nahin shows here that many key problems were posed and solved long before calculus was developed.” —Mary Ann B. Freeman, Math Team Development Manager, MathWorks

(The Sayed and Freeman quotes also appear on the back cover.)

About the Author

Paul J. Nahin is Professor Emeritus of Electrical Engineering at the University of New Hampshire. He is the author of many books, including the bestselling An Imaginary Tale: The Story of the Square Root of Minus One, Duelling Idiots and Other Probability Puzzlers, and Dr. Euler’s Fabulous Formula: Cures Many Mathematical Ills (all Princeton).

Excerpt

Reprinted by permission. All rights reserved. From When Least Is Best: How Mathematicians Discovered Many Clever Ways to Make Things as Small (or as Large) as Possible, by Paul J. Nahin. Princeton University Press. Copyright © 2003 Princeton University Press. ISBN: 978-0-691-13052-1.

Chapter One: Minimums, Maximums, Derivatives, and Computers

1.1 Introduction

This book has been written from the practical point of view of the engineer, and so you’ll see few rigorous proofs on any of the pages that follow.
As important as such proofs are in modern mathematics, I make no claims for rigor in this book (plausibility and/or direct computation are the themes here), and if absolute rigor is what you are after, well, you have the wrong book. Sorry! Why, you may ask, are engineers interested in minimums? That question could be given a very long answer, but instead I’ll limit myself to just two illustrations (one serious and one not, perhaps, quite as serious).

Consider first the problem of how to construct a gadget that has a fairly short operational lifetime and which, during that lifetime, must perform flawlessly. Short lifetime and low failure probability are, as is often the case in engineering problems, potentially conflicting specifications: the first suggests using low-cost material(s) since the gadget doesn’t last very long, but using cheap construction may result in an unacceptable failure rate. (An example from everyday life is the ordinary plastic trash bag: how thick should it be? The bag is soon thrown away, but we definitely will be unhappy if it fails too soon!) The trash bag engineer needs to calculate the minimum thickness that still gives acceptable performance.

For my second example, let me take you back to May 1961, to the morning the astronaut Alan Shepard lay on his back atop the rocket that would make him America’s first man in space. He was very brave to be there, as previous unmanned launches of the same type of rocket had shown a disturbing tendency to explode into stupendous fireballs. When asked what he had been thinking just before blastoff, he replied, “I was thinking that the whole damn thing had been built by the lowest bidder.”

This book is a math history book, and the history of minimums starts centuries before the time of Christ. So, soon, I will be starting at the beginning of our story, thousands of years in the past.
But before we climb into our time machine and travel back to those ancient days, there are a few modern technical issues I want to address first.

First, to write a book on minimums might seem to be a bit narrow; why not include maximums, too? Why not write a history of extrema, instead? Well, of course minimums and maximums are indeed intimately connected, since a maximum of y(x) is a minimum of −y(x). To be honest, the reason for the book’s title is simply that I couldn’t think of one I could use with extrema as catchy as is “When Least Is Best.” I did briefly toy with “When Extrema Are xxx” with the xxx replaced with exotic, exciting, and even (for a while, in a temporary fit of marketing madness that I hoped would attract Oprah’s attention), erotic. Or even “Minimums Are from Venus, Maximums Are from Mars.” But all of those (certainly the last one) are dumb, and so it stayed “When Least Is Best.” There will be times, however, when I will discuss maximums, too.

And now and then we’ll use a computer as well. For example, consider the problem of finding the maximum value of the rather benign-looking function y(x) = 3 cos(4πx − 1.3) + 5 cos(2πx + 0.5). Some students answer too quickly and declare the maximum value is 8, believing that for some value of x the individual maximums of the two cosine terms will add. That is not the case, however, since it is equivalent to saying that there is some x = x̄ such that 4πx̄ − 1.3 = 2πn and 2πx̄ + 0.5 = 2πk, where n and k are integers. That is, those students are assuming there is an x̄ such that x̄ = (2πn + 1.3)/(4π) = (2πk − 0.5)/(2π), n and k integers. Thus, 2πn + 1.3 = 4πk − 1, or 2.3 = 4πk − 2πn = 2π(2k − n), or π = 2.3/(2(2k − n)) = 1.15/(2k − n). But if this is actually so, then as n and k are integers we would have π as the ratio of integers, i.e., π would be a rational number.
Since 1761, however, π has been known to be irrational, and so there are no such integers n and k. And that means there is no x̄ such that y(x̄) = 8, and so y_max < 8. Well, then, what is y_max? Is it perhaps close to 8? You might try setting the derivative of y(x) to zero to find x̄, but that quickly leads to a mess. (Try it.) The best approach, I think, is to just numerically study y(x) and watch what it does. The result is that y_max = 5.7811, significantly less than 8. My point in showing you this is twofold. First, a computer is often quite useful in minimum studies (and we will use computers a lot in this book). Second, taking the derivative of something and setting it equal to zero is not always what you have to do when finding the extrema of a function.

An amusing (and perhaps, for people who like to camp, even useful) example of this is provided by the following little puzzle. Imagine that you have been driving for a long time along a straight road that borders an immense, densely wooded area. It looks enticing, and so you park your car on the side of the road and hike into the woods for a mile along a straight line perpendicular to the road. The woods are very dense (you instantly lose sight of the road when you are just one step into the woods), and after a mile you are exhausted. You call it a day and camp overnight. When you get up the next morning, however, you’ve completely lost your bearings and don’t know which direction to go to get back to your car. You could, if you panic, wander around in the woods indefinitely! But there is a way to travel that absolutely guarantees that you will arrive back at your car’s precise location after walking a certain maximum distance (it might take even less). How do you walk out of the woods, and what is the maximum distance you would have to walk? The answer requires only simple geometry; if you are stumped, the answer is at the end of this chapter.
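The numerical study of y(x) that the excerpt describes is easy to sketch (illustrative code, not Nahin's own; it simply scans one period, since y(x) = 3 cos(4πx − 1.3) + 5 cos(2πx + 0.5) repeats with period 1):

```python
import math

def y(x):
    # the function from the excerpt
    return 3 * math.cos(4 * math.pi * x - 1.3) + 5 * math.cos(2 * math.pi * x + 0.5)

# The two cosine terms have periods 1/2 and 1, so y has period 1;
# a dense scan of [0, 1) therefore locates the global maximum.
n = 1_000_000
y_max = max(y(i / n) for i in range(n))
print(round(y_max, 4))  # about 5.7811, well below the naive guess of 8
```

A grid this fine is more than enough here: the curvature of y is bounded, so the grid maximum agrees with the true maximum to far better than the four decimal places quoted.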
1.2 When Derivatives Don't Work

Here's another example of a minimization problem for which calculus is not only not required, but in fact seems not to be able to solve. Suppose we have the real line before us (labeled as the x-axis), stretching from −∞ to +∞. On this line there are marked n points, labeled in increasing value as x_1 < x_2 < … < x_n. Let's assume all the x_i are finite (in particular x_1 and x_n), and so the interval of the x-axis that contains all n points is finite in length. Now, somewhere (anywhere) on the finite x-axis we mark one more point (let's call it x). We wish to pick x so that the sum of the distances between x and all of the original points is minimized. That is, we wish to pick x so that

S = |x − x_1| + |x − x_2| + … + |x − x_n|

is minimized. I've used absolute-value signs on each term to insure each distance is non-negative, independent of where x is, either to the left or to the right of a given x_i. Those absolute-value signs may seem to badly complicate matters, but that's not so. Here's why.

First, focus your attention on the two points that mark the ends of the interval, x_1 and x_n. The sum of the distances between x and x_1, and between x and x_n, is |x − x_1| + |x − x_n|, and this is at least |x_1 − x_n|. If x > x_n, or if x < x_1 (i.e., if x is outside the interval), then strict inequality holds, but if x is anywhere inside the interval (i.e., x_1 ≤ x ≤ x_n) then equality holds. Thus, the minimum value of |x − x_1| + |x − x_n| is achieved by placing x anywhere between x_1 and x_n.

Next, shift your attention to the two points x_2 and x_{n−1}. We can repeat the above argument, without modification, to conclude that the minimum value of |x − x_2| + |x − x_{n−1}| is achieved when x is anywhere between x_2 and x_{n−1}. Note that this automatically satisfies the condition for minimizing the value of |x − x_1| + |x − x_n|, i.e., placing x anywhere between x_2 and x_{n−1} minimizes |x − x_1| + |x − x_2| + |x − x_{n−1}| + |x − x_n|. You can now see that we can repeat this line of reasoning, over and over, to conclude |x − x_3| + |x − x_{n−2}| is minimized by placing x anywhere between x_3 and x_{n−2}, |x − x_4| + |x − x_{n−3}| is minimized by placing x anywhere between x_4 and x_{n−3}, and finally, if we suppose that n is an even number of points, then |x − x_{n/2}| + |x − x_{(n/2)+1}| is minimized by placing x anywhere between x_{n/2} and x_{(n/2)+1}. So, we simultaneously satisfy all of these individual minimizations by placing x anywhere between x_{n/2} and x_{(n/2)+1} (if n is even), and this of course minimizes S.

But what if n is odd? Then the same reasoning as for even n still works, until the final step; then there is no second point to pair with x_{(n+1)/2}. Thus, simply let x = x_{(n+1)/2}, and so |x − x_{(n+1)/2}| = 0, which is certainly the minimum value for a distance. Thus, we have the somewhat unexpected, noncalculus solution that, for n even, S is minimized by placing x anywhere in an interval, but for n odd there is just one, unique value for x (the middle x_i) that minimizes S.

1.3 Using Algebra to Find Minimums

As another elementary but certainly not a trivial example of the claim that derivatives are not always what you want to calculate, consider the fact that ancient mathematicians knew that of all rectangles with a given perimeter it is the square that has the largest area.
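In modern language, the pairing argument above says that a median of the x_i minimizes S. Before moving on, the even/odd conclusion is easy to sanity-check numerically (an illustrative sketch, not from the book):

```python
import random

def S(x, pts):
    # sum of distances from x to all marked points
    return sum(abs(x - p) for p in pts)

random.seed(1)

pts = sorted(random.uniform(-10, 10) for _ in range(7))   # n odd
median = pts[len(pts) // 2]                               # the middle point x_{(n+1)/2}
trials = [random.uniform(-12, 12) for _ in range(10_000)]
# no trial point beats the middle marked point
assert all(S(median, pts) <= S(t, pts) + 1e-12 for t in trials)

pts = sorted(random.uniform(-10, 10) for _ in range(8))   # n even
lo, hi = pts[3], pts[4]                                   # x_{n/2} and x_{(n/2)+1}
# every point of the middle interval [lo, hi] gives the same minimal sum
assert abs(S(lo, pts) - S(hi, pts)) < 1e-12
assert abs(S((lo + hi) / 2, pts) - S(lo, pts)) < 1e-12
print("median minimizes S")
```

The check mirrors the argument exactly: for odd n a single point wins, while for even n the whole middle interval ties.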
(This is a special result from a general class of maximum/minimum questions of great historical interest and practical value called isoperimetric problems, and I'll have more to say about them in the next chapter.) Ask most modern students to show this and you will almost surely get back something like the following. Define P to be the given perimeter of a rectangle, with x denoting one of the two side lengths. The other side length is then (P − 2x)/2, and so the area of the rectangle is A(x) = x(P − 2x)/2 = (P/2)x − x². A(x) is maximized by setting dA/dx = P/2 − 2x equal to zero, and so x = P/4, which completes the proof.

Using only algebra, however, an ancient mathematician could have argued that A = (P/2)x − x² = P²/16 − (x − P/4)² ≤ P²/16, since (x − P/4)² ≥ 0 for all x. That is, A is never larger than the constant P²/16 and is equal to P²/16 if and only if (a useful phrase I will henceforth write as simply iff) x = P/4, which completes the ancient, noncalculus proof.

As a final comment on this result, which again illustrates the intimate connection between minimum and maximum problems, we can restate matters as follows: of all rectangles with a given area, the square has the smallest perimeter. This is the so-called dual of our original problem and, indeed, all isoperimetric problems come in such pairs. I'll prove this particular dual in section 1.5. Another useful isoperimetric result that seems much like the one just established, one also known to the precalculus, ancient mathematicians, is not so easy to prove: of all the triangles with the same area, the equilateral has the smallest perimeter. See if you can show this (or its dual) before I do it later in this chapter.

We can use the previous result (of all rectangles with a fixed perimeter, the square has the maximum area) to solve without calculus a somewhat more complicated appearing problem found in all calculus textbooks.
Suppose we wish to enclose a rectangular plot of land with a fixed length of fencing, with the side of a barn forming one side of the enclosure. How should the fencing now be used? We could, of course, use calculus as follows: let x be the length of each of the two sides perpendicular to the barn wall, and l − 2x be the length of the side parallel to the barn wall (l is the fixed, total length of the fencing). Then the enclosed area is A = x(l − 2x) = lx − 2x², and so dA/dx = l − 4x, which, when set equal to zero, gives x = l/4. Thus, l − 2x = l/2, which says the enclosed area is maximized when it is twice as long as it is wide.

But this solution is far more sophisticated than required. Simply imagine that we enclose another rectangular area on the other side of the barn wall. We already know that, together, the two rectangular plots should form a square, and so each of the two rectangular plots is half of the square, i.e., twice as long in one dimension as in the other.

Our ancient mathematician's trick of completing the square is a very old one, and some historians claim that it can be found implicit in Euclid's Elements (Book 6, Proposition 27), circa 300 B.C. There, the problem discussed is equivalent to that of dividing a constant into two parts so that their product is maximum. So, if the constant is C, then the two parts are x and C − x, with the product M = x(C − x) = C²/4 − (x − C/2)². Thus, as (x − C/2)² ≥ 0 for all x, then M is never larger than C²/4 and is equal to C²/4 iff x = C/2. Stated this way, Euclid's problem surely seems rather abstract, but in 1573 the Dutch mathematical physicist Christiaan Huygens gave a nice physical setting to the calculation. Suppose we have a line and two points (A and B) not on the line.
Where should the point C be located on the line so that the sum of the squares of the distances from C to A and from C to B, (AC)² + (BC)², is minimum? With no loss in generality we can draw the geometry of this problem as shown in figure 1.1, with A on the y-axis. The figure shows A and B on the same side of the line, and places C between A and B, but as the analysis continues you'll see that these assumptions in no way affect the result. In the notation of the figure (with A at (0, a), B at (b, c), and C at (x, 0)), we are to find the value of x that, with a, b, and c constants, minimizes (AC)² + (BC)² = x² + a² + (x − b)² + c². Now, x² + (x − b)² = 2x(x − b) + b², so (AC)² + (BC)² = 2x(x − b) + a² + b² + c². Thus, we need to minimize the product x(x − b); but we already know from Euclid how to do that: set x = b/2. That is, C is midway between A and B. If you redraw figure 1.1 so that either x > b or x < 0, and then write the expression for (AC)² + (BC)², you'll see that the result is unchanged. (Continues...)

Excerpted from When Least Is Best by Paul J. Nahin. Copyright © 2003 by Princeton University Press. Excerpted by permission. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher. Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Reviews from Amazon users, collected at the time this book was published on the website:

⭐ The content of the book is great, but the binding on the paperback was terrible. It literally fell apart while I was reading it. Gave up on finishing it after half the pages had fallen out.

⭐ Fast service. Product in mint condition.

⭐ “When Least is Best” is an interesting discussion of alternates to calculus for minimum/maximum problems. While the math is not formidable, I found it uninviting. Yet, the methods are valuable. I may keep this book.
⭐ The book is well written but you have to be a professional mathematician or a math major at the very least to understand it. By the third page, they are already doing complicated integrals and number theory that can leave you standing in the dust.

⭐ Well written, some interesting topics, some everyday topics.

⭐ Finally, a solid book that challenges the lay reader just like the best math teachers do: by showing the elegance and power of mathematical reasoning. This is top shelf material. Nahin is one heck of a writer and must be one hell of a teacher! Bravo! Already ordered his book on the history of imaginary numbers. 6 stars: ******

⭐ Excellent book. Paul Nahin’s style is very clear, making the concepts interesting and accessible. Highly recommended.

⭐ Great book... as are all of Paul Nahin’s books.

⭐ I’ve read a number of Nahin’s popular mathematics books; they are all truly impressive and never fail to be enlightening, often enough astounding. This book, “When Least is Best,” is no exception; in fact I think I’d go as far as to say it is my favourite so far. Like many of Nahin’s other such books, this book takes a mathematical concept and traces the concept’s history and applications. This particular book is about maximization and minimization: what shape should a length of ox-hide take to enclose the greatest area, for example (the famous Queen Dido puzzle); derivation of Snell’s law via consideration of the time-minimized path of the light ray (with special mention for the work of the always amazing Fermat), and on that subject, why are the primary and secondary rainbows seen just where they are in the sky and where (if they exist) do we see the tertiary and subsequent rainbows; what shape does a hanging chain / soap bubble suspended between two rings take; how fast should your coach driver go to minimize the chance of your being splattered by road mud; and so on, and so on.
Anyone who is familiar with Nahin’s books will know that he doesn’t shy away from the mathematics, and here you’ll find integrals galore including, but by no means limited to, a derivation of the Euler-Lagrange equation for the minimization/maximization of functionals. The preface re-states the quote from Stephen Hawking’s “A Brief History of Time” that each equation will halve the book’s readership; a quote that no doubt has some truth; but Nahin is firm in his belief that there is a readership out there that really cares about and enjoys this stuff. He’s right of course, and he managed to annoy Brian Clegg (author of math-lite pop-sci books) in the process (http://www.popularscience.co.uk/?p=1789), which was very funny.

To get the most from this book does require some mathematical background, and this is given by Nahin with reference to US measures; to give a UK basis I would say that you certainly need a fair amount of calculus (although not near as much as some of his other books) and a good grasp of algebraic manipulation, and the required level is probably within the grasp of an A-level Further Mathematics or possibly Mathematics student; you definitely don’t need a degree (although it wouldn’t hurt ;-)).

Nahin also considers the notion of “mathematical maturity” in the preface and gives as an example the question: can an irrational number raised to an irrational power ever give a rational number result? The answer is yes, and an example is given in stages: is root(2)^root(2) rational or irrational? Don’t know? Well, what about [root(2)^root(2)]^root(2)? This last is, of course, equal to root(2)^2, which equals 2, a rational number, and that proves that the answer to the original question is “Yes.”
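The staged argument the reviewer sketches can be checked numerically (an illustrative sketch; floating-point arithmetic only approximates the exact identity (√2^√2)^√2 = √2² = 2):

```python
import math

r2 = math.sqrt(2)
step1 = r2 ** r2     # √2^√2: an irrational base raised to an irrational power
step2 = step1 ** r2  # (√2^√2)^√2 = √2^(√2·√2) = √2^2 = 2, a rational number
print(step2)         # very close to 2.0
```

Whichever way the first step turns out (step1 rational or irrational), one of the two expressions is an irrational raised to an irrational power with a rational value, which is why the staged argument is a proof.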
If you can follow this reasoning and understand why this is a proof, then you are definitely thinking in the right way to make the most of this book. I chose this book in particular because I’ve been considering taking a Masters course in “Calculus of Variations” (one of the subjects of this book): I’ve struggled to get past the first chapter of the book of that title by Gelfand and Fomin (Calculus of Variations, Dover Books on Mathematics), but since reading “When Least is Best” I’ve completely (well, almost) grasped the concept and have been able to follow the Gelfand and Fomin book much more easily. And this is not an isolated event; anyone struggling with Fourier analysis could do no better than to pick up Nahin’s “Dr. Euler’s Fabulous Formula: Cures Many Mathematical Ills” for an utterly enjoyable view into the subject area. A gushing review I know, but I really can’t recommend this author’s books highly enough.

⭐ All is simple if you know.

⭐ Private reading. Very good book.

⭐ I own about five of Paul J. Nahin’s more than a dozen math & science books. What shouldn’t again be applied, though, is Sir Roger Penrose’s remark, that every equation in a book reduces the number of potential readers by half. If that were true, I should be the only one left... By contrast, any reader who follows the text (and this means oftentimes using your brain and paper and pencil) will profit from the author’s consummate sabbatical-year-long literature search for prime examples of how to minimize or maximize geometrical, physical, and economical quantities.

Nahin starts the paperback edition (which is reviewed here) with a discussion of some corrections as well as feedback to the hardcover edition, and also offers a new challenge problem, which is solved in the appendix.
Then there is a longer preface not quite on target to the book’s title, but this too is characteristic of the author’s prose, who as an electrical engineer doesn’t like to be fenced in mathematically.

This is why, in Chapter 1, we are presented with a mixed bag of minimization and maximization problems from algebra, physics, and geometry, which are tackled with different methods besides using derivatives: namely basic algebraic reasoning, or the inequality stating that the arithmetic mean is always greater than or equal to the geometric mean, as well as computer solutions.

The famous problem of finding the maximum area surrounded by a perimeter of given length, which not unexpectedly ends up being full or half circular, comprises most of Chapter 2, only to be formally solved much later in the book, though.

Chapter 3 sets the stage for the dawn of calculus with Regiomontanus’ medieval problem of maximizing the viewing angle to a hanging picture, which is to be found in almost every modern calculus text, but can still be solved without using derivatives. For other problems, even old ones, Nahin does not shy away from computer solutions when analytic ones are too cumbersome.

Chapter 4 exposes the little-known fact that Pierre de Fermat, in solving quite a few optimization problems, was thereby already laying the foundations of calculus, the formalization of which was later achieved by Leibniz and Newton. In doing this Fermat outclassed his rival Descartes, especially by stating the principle of least time, which correctly explains Snell’s law of refraction. Although Descartes’ proof of this law was flawed, he still was correctly applying it to spherical raindrops and thereby showed that the rainbow is a caustic due to a minimum of the deflection angle for sun-rays.

Nahin, at the end of Chapter 5, does perform these geometric optics calculations in the modern way by using derivatives for both the primary and secondary rainbow.
He then expands this analysis to the tertiary rainbow, which both Newton and Halley had already predicted. At the time of writing his book, Nahin had no other clue than to accept their predictions that the tertiary rainbow would never be observed in nature. However, this was proved to be wrong in the years 2011-14, when three amateur photographers from Germany and the Netherlands obtained and published images of the third, fourth, and fifth rainbow order produced by sunlight.

For the most part, Chapter 5 explains the birth of the derivative, some of its rules, and its application to a wide range of problems in algebra, geometry, kinematics, and mechanics, the latter with an example that equilibrium is attained when potential energy is at a minimum. Whoever has read this far has finished more than half of the book’s 372 pages.

Chapter 6, with 79 pages the book’s longest, bears the title “Beyond Calculus” and leaves the level of what a typical freshman-year student will master without additional resources. Galileo’s fastest-track problem for a sliding object (the so-called brachistochrone), which was first solved to be the cycloid by Johann Bernoulli, asked for new methods of optimizing whole functions instead of just a certain value. This is now handled systematically by using the calculus of variations, invented by the 18th-century mathematicians Lagrange and Euler, which demands the minimization of a certain integral or functional. Nahin demonstrates this method for both the brachistochrone and the catenary (the hanging-chain equilibrium curve), then continues with showing a proof of the isoperimetric problem at last, and finally exposes the quite advanced problem of minimal-area surfaces, the solutions of which happen to be analogous to soap bubble films.

Chapter 7 heralds “The Modern Age” of optimization by pointing to a small selection of problems from discrete mathematics, like the optimal placement of service points with respect to output facilities, the shortest path through a network of nodes, composing a cost-optimal diet from a given selection of nutrients, or deriving optimal production plans and schedules. For lack of space to build up the linear-algebraic and graph-theoretical foundations of this field, Nahin chooses to give his readers an idea of at least two widely used methods, i.e., linear and dynamic programming, which are linked with the 20th-century mathematicians Dantzig and Bellman.

The book ends with 36 pages of appendices of mathematical proofs and morsels and solutions to challenge problems.

In conclusion, “When Least Is Best,” although meticulously typeset and profusely illustrated with figures and diagrams in black and white, is certainly not a coffee-table book, but rather a highly recommended addition to any applied-math lover’s roster, and a valuable resource of ideas for teachers at the senior high school or college levels. Be prepared, however, to have to re-do many of Nahin’s calculations step by step, even when perusing his book recurrently.

⭐ Very well written. Never boring.

