Introduction to Chaos, Bounds, and Randomness
Have you ever noticed that events seem to occur in bunches? This is more than your imagination. A few decades ago, this phenomenon was analyzed by Benoit Mandelbrot, an engineer and mathematician who worked for International Business Machines (IBM). Mandelbrot noticed that similar patterns are often found in apparently unrelated phenomena, such as the fluctuations of cotton prices and the distribution of personal incomes. His work gave birth to the science of chaos theory.
Was Andrew "Due"?
In the early summer of 1992, south Florida hadn't had a severe hurricane since Betsy in 1965. The area around Miami gets a minimal hurricane once every 7 or 8 years on the average, and an extreme storm once or twice a century. Was Miami ''due'' for a hurricane in the early 1990s? Was it ''about time'' for a big blow? Some people said so. By now you should know enough about probability to realize that 1992 was no more or less special, in that respect, than any other year. In fact, as the hurricane season began in June of that year, the experts predicted a season of below-normal activity.
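The arithmetic behind that point is easy to check. As a rough sketch (assuming, for simplicity, that hurricane strikes are independent from year to year, with an annual chance of about 1 in 7.5 for a minimal hurricane near Miami, as the text suggests), a 27-year gap is uncommon but tells us nothing about the next year:

```python
# Illustrative figures only, not historical data.
p = 1 / 7.5          # assumed annual chance of a minimal hurricane
years = 27           # 1965 (Betsy) through 1992 (Andrew)

# Under independence, the chance of a 27-year gap is (1 - p)^27 ...
p_gap = (1 - p) ** years
print(f"Chance of a {years}-year gap: {p_gap:.3f}")

# ... but the chance of a strike in any given year is still just p,
# whether or not a long gap preceded it. Nobody is ever "due."
print(f"Chance of a strike next year: {p:.3f}")
```

The long gap was a roughly 2% event, yet its occurrence did not raise (or lower) the odds for 1992.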
The so-called ''law of averages'' (which is the basis for a great deal of misinformation and deception) seemed to be vindicated on August 24, 1992. Hurricane Andrew tore across the southern suburbs of Miami and the Everglades like a cosmic weed-whacker, and became the costliest hurricane ever to hit the United States up to that date. Did the severity of Andrew have anything to do with the lack of hurricanes during the previous two and a half decades? No. Did Andrew's passage make a similar event in 1993 or 1994 less likely than it would have been if Andrew had not hit south Florida? No. There could have been another storm like Andrew in 1993, and two more in 1994. Theoretically, there could have been a half dozen more like it later in 1992!
Have you ever heard about a tornado hitting some town, followed three days later by another one in the same region, and four days later by another, and a week later by still another? Have you ever flipped a coin for a few minutes and had it come up ''heads'' 18 times in a row, even though you'd normally have to flip it for days to expect such a thing to happen? Have you witnessed some vivid example of ''event-bunching,'' and wondered if anyone will ever come up with a mathematical theorem that tells us why this sort of thing seems to happen so often?
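A run of 18 heads really is that rare, and a short simulation makes the point concrete. This sketch (purely illustrative; the seed is arbitrary) computes the odds of 18 heads in a row and then finds the longest run of heads in 10,000 simulated flips:

```python
import random

# Chance that 18 specific consecutive flips all land heads:
p_run = 0.5 ** 18
print(f"1 in {round(1 / p_run):,}")  # 1 in 262,144

# Longest run of heads in 10,000 simulated fair-coin flips.
random.seed(1)  # arbitrary seed, for repeatability
longest = run = 0
for _ in range(10_000):
    if random.random() < 0.5:   # heads
        run += 1
        longest = max(longest, run)
    else:                       # tails resets the run
        run = 0
print("Longest run of heads in 10,000 flips:", longest)
```

In a typical session of 10,000 flips the longest run of heads is only around 12 to 14, which is why an 18-head run in a few minutes of flipping feels so uncanny, even though it is entirely possible.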
Slumps and Spurts
Athletes such as competitive swimmers and runners know that improvement characteristically comes in spurts, not smoothly with the passage of time. An example is shown in Fig. 7-7, which plots a hypothetical athlete's best time (in seconds) for the 100-meter (100-m) freestyle swim over the months of a hypothetical year. The horizontal scale shows the month, and the vertical scale shows the swimmer's fastest time in that month.
Fig. 7-7. Monthly best times (in seconds) for a swimmer whose specialty is the 100-m freestyle, plotted by month for a hypothetical year.
Note that the swimmer's performance does not improve for a while, and then suddenly it does. In this example, almost all of the improvement occurs during the summer training season. That's not surprising, but another swimmer might exhibit performance that worsens during the same training season. Does this irregularity mean that all the training done during times of flat performance is time wasted? The coach will say no! Why does improvement take place in sudden bursts, and not gradually with time? Sports experts will tell you they don't know. Similar effects are observed in the growth of plants and children, in the performance of corporate sales departments, and in the frequency with which people get sick. This is ''just the way things are.''
Correlation, Coincidence, or Chaos?
Sometime in the middle of the 20th century, a researcher noticed a strong correlation between the sales of television sets and the incidence of heart attacks in Great Britain. The two curves followed remarkably similar contours. In fact the shapes of the graphs were, peak-for-peak and valley-for-valley, almost identical.
It is tempting to draw hasty conclusions from a correlation such as this. It seems reasonable to suppose that as people bought more television sets, they spent more time sitting and staring at the screens; this caused them to get less exercise; the people's physical condition therefore deteriorated; this rendered them more likely to have heart attacks. But even this argument, if valid, couldn't explain the uncanny exactness with which the two curves followed each other, year after year. There would have been a lag effect if television-watching really did cause poor health, but there was none.
Do television sets emit electromagnetic fields that cause immediate susceptibility to a heart attack? Is the programming so terrible that it causes immediate physical harm to viewers? Both of these notions seem ''far-out.'' Were the curves obtained by the British researcher coincident for some unsuspected reason? Could it be that people who had heart attacks were told by their doctors to avoid physical exertion while recovering, and this caused them to buy television sets to help pass the time? Or was the whole thing a coincidence? Was there no true correlation between television sales and heart attacks, a fact that would have become apparent if the experiment had continued for decades longer or had involved more people?
Now consider this if you dare: Could the correlation between television sales and heart attacks have taken place as a result of some unfathomable cosmic consonance, even in the absence of a cause-and-effect relationship?
Do scientists sometimes search for nonexistent cause-and-effect explanations, getting more and more puzzled and frustrated as the statistical data keeps pouring in, demonstrating the existence of a correlation but giving no clue as to what is responsible for it? Applied to economic and social theory, this sort of correlation-without-causation phenomenon can lead to some scary propositions. Is another world war, economic disaster, or disease pandemic inevitable because that's ''just the way things are''? Chaos theory suggests that the answer to some of these questions is yes!
Benoit Mandelbrot noticed that patterns tend to recur over various time scales. Large-scale and long-range changes take place in patterns similar to those of small-scale and short-term changes. Events occur in bunches; the bunches themselves take place in similar bunches following similar patterns. This effect exists both in the increasing scale and in the decreasing scale.
Have you noticed that high, cirrostratus clouds in the sky resemble the clouds in a room where someone has recently lit up a cigar? Or that these clouds look eerily like the interstellar gas-and-dust clouds that make up diffuse nebulae in space? Patterns in nature often fit inside each other as if they were nested geometric shapes, as if the repetition of patterns over scale takes place because of some principle ingrained in nature itself. This is evident when you look at the so-called Mandelbrot set (Fig. 7-8) using any of the numerous zooming programs available on the Internet. This set arises from a simple mathematical formula, yet it is infinitely complicated. No matter how much it is magnified – that is, however closely we zoom in on it – new patterns appear. There is no end to it! Yet the patterns show similarity at all scales.
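The ''simple mathematical formula'' behind the Mandelbrot set is just one repeated step: starting from z = 0, iterate z → z² + c for a complex number c, and see whether z stays bounded. A minimal sketch of this membership test (the iteration cap of 100 is a conventional choice, not part of the definition):

```python
def escape_count(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z*z + c from z = 0. Return the step at which |z|
    exceeds 2 (c is outside the set), or max_iter if it never does
    (c is, as far as this test can tell, inside the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n          # escaped: c is outside the set
    return max_iter           # bounded: c is (probably) inside

print(escape_count(0))        # stays at 0 forever: inside
print(escape_count(-1))       # settles into a 0, -1 cycle: inside
print(escape_count(1))        # 1, 2, 5, ... blows up: outside
```

Coloring each pixel by its escape count is what produces the familiar zoomable images: the formula is three lines, yet the boundary it traces is infinitely intricate at every scale.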
The images in Fig. 7-8 were generated with a freeware program called Fractint. This program was created by a group of experts called the Stone Soup Team. The program itself is copyrighted, but images created by any user become the property of that user.
The Maximum Unswimmable Time
If our hypothetical swimmer keeps training, how fast will he eventually swim the 100-m freestyle? We already know that he can do it in a little more than 48 seconds. What about 47 seconds? Or 46 seconds? Or 45 seconds? There are obvious lower bounds to the time in which the 100-m freestyle can be swum by a human. It's a good bet that no one will ever do it in 10 seconds. How about 11 seconds? Or 12? Or 13? How about 20 seconds? Or 25? Or 30? If we start at some ridiculous figure such as 10 seconds and keep increasing the number gradually, we will at some point reach a figure – let's suppose for the sake of argument that it is 41 seconds – representing the largest whole number of seconds too fast for anyone to swim the 100-m freestyle.
Once we have two whole numbers, one representing a swimmable time (say 42 seconds) and the next smaller one representing an unswimmable time (say 41 seconds), we can refine the process down to the tenth of a second, and then to the hundredth, and so on indefinitely. There is some time, exact to however small a fraction of a second we care to measure it, that represents the maximum unswimmable time (MUST) that a human being can attain for the 100-m freestyle swim. Figure 7-10 shows an educated estimate (translation: wild guess) for this situation.
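The refinement process the text describes is exactly a bisection: keep a known unswimmable time and a known swimmable time, and repeatedly test the midpoint. This sketch uses a hypothetical oracle in place of real-world evidence (the ''true'' MUST of 41.37 seconds is invented purely to show the mechanics):

```python
TRUE_MUST = 41.37  # hypothetical value, for illustration only

def swimmable(t: float) -> bool:
    """Hypothetical oracle: any time above the MUST is attainable."""
    return t > TRUE_MUST

low, high = 41.0, 42.0    # known unswimmable / known swimmable times
while high - low > 0.01:  # refine down to the hundredth of a second
    mid = (low + high) / 2
    if swimmable(mid):
        high = mid        # midpoint is swimmable: new upper bracket
    else:
        low = mid         # midpoint is unswimmable: new lower bracket

print(f"The MUST lies between {low:.2f} and {high:.2f} seconds")
```

Each test halves the interval, so pinning the MUST down to a hundredth of a second from a one-second bracket takes only seven steps; in principle the loop could continue to any precision we care to name.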
No one knows the exact MUST for the 100-m freestyle, and a good argument can be made for the assertion that we cannot precisely determine it. But such a time nevertheless exists. How do we know that there is a MUST for the 100-m freestyle, or for any other event in any other timed sport? A well-known theorem of mathematics, called the theorem of the greatest lower bound, makes it plain: ''If there exists a lower bound for a set, then there exists a greatest lower bound (GLB) for that set.'' A more technical term for GLB is infimum. In this case, the set in question is the set of ''swimmable times'' for the 100-m freestyle. The lower bounds are the ''unswimmable times.''
What's the probability that a human being will come to within a given number of seconds of the MUST for the 100-m freestyle in, say, the next 10 years, or 20 years, or 50 years? Sports writers may speculate on it; physicians may come up with ideas; swimmers and coaches doubtless have notions too. But anyone who makes a claim in this respect is only guessing. We can't say ''The probability is 50% that someone will swim the 100-m freestyle in so-and-so seconds by the year such-and-such.'' Remember the old probability fallacy from Chapter 3! For any theoretically attainable time, say 43.50 seconds, one of two things will happen: someone will swim the 100-m freestyle that fast someday, or else no one will.
The Butterfly Effect
The tendency for small events to have dramatic long-term and large-scale consequences is called the butterfly effect. It gets its name from a hypothetical question that goes something like this: Can a butterfly taking off in China affect the development, intensity, and course of a hurricane 6 months later in Florida? At first, such a question seems ridiculous. But suppose the butterfly creates a tiny air disturbance that produces a slightly larger one, and so on, and so on, and so on. According to butterfly-effect believers, the insect's momentary behavior could be the trigger that ultimately makes the difference between a tropical wave and a killer cyclone.
We can never know all the consequences of any particular event. History happens once and only once. We can't make repeated trips back in time and let fate unravel itself multiple times, after tweaking this or that little detail. But events can conspire, or have causative effects over time and space, in such a manner as to magnify the significance of tiny events in some circumstances. There are computer models to show it.
Suppose you go out biking in the rain and subsequently catch a cold. The cold develops into pneumonia, and you barely survive. Might things have turned out differently if the temperature had been a little warmer, or if it had rained a little less, or if you had stayed out for a little less time? There is no practical way to tell which of these tiny factors are critical and which are not. But computer models can be set up, and programs run, that in effect ''replay history'' with various parameters adjusted. In some cases, certain variables have threshold points where a tiny change will dramatically affect the distant future.
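A standard toy model of this sensitivity is the logistic map x → rx(1 − x), which for r = 4 is fully chaotic. The sketch below ''replays history'' twice with starting values that differ by one part in a million (the starting value 0.3 is arbitrary) and watches the two runs diverge:

```python
r = 4.0                      # fully chaotic regime of the logistic map
x, y = 0.3, 0.3 + 1e-6       # two "histories" differing by a millionth

max_gap = 0.0
for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

print(f"after 60 steps, x = {x:.4f}, y = {y:.4f}")
print(f"largest gap seen: {max_gap:.3f}")
```

The gap roughly doubles at each step, so within a few dozen iterations the two trajectories bear no resemblance to each other: a one-in-a-million nudge to the starting conditions rewrites the entire future of the model, which is the butterfly effect in miniature.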
In models of chaos, patterns are repeated in large and small sizes for an astonishing variety of phenomena. A good example is the comparison of a spiral galaxy with a hurricane. The galaxy's stars are to the hurricane's water droplets as the galaxy's spiral arms are to the hurricane's rainbands. The eye of the hurricane is calm and has low pressure; everything rushes in towards it. The water droplets, carried by winds, spiral inward more and more rapidly as they approach the edge of the eye. In a spiral galaxy, the stars move faster and faster as they fall inward toward the center. A satellite photograph of a hurricane, compared with a photograph of a spiral galaxy viewed face-on, shows similarities in the appearance of these systems.
Both air pressure and gravitation, operating over time and space on a large scale, can produce the same kind of spiral. Similar spirals can be seen in the Mandelbrot set and in other similar mathematically derived patterns. The Spiral of Archimedes (a standard spiral easily definable in analytic geometry) occurs often in nature, and in widely differing scenarios. It's tempting to believe that these structural parallels are more than coincidences, that there is a cause-and-effect relationship. But what cause-and-effect factor can make a spiral galaxy in outer space look and revolve so much like a hurricane on the surface of the earth?