Grade 11/12 Math Circles
Dynamical Systems and Fractals Part 2
Strange Attractors
Last week we looked at discrete dynamical systems of the form $x_{n+1} = f(x_n)$, where $x_n$ represents the state of some physical
system at time point $n$. We saw that
such systems can have fixed points (when the state of the
system reaches a certain value and stays there indefinitely) and
periodic points (when the state of the system oscillates
between some finite number of values). Dynamical systems (specifically
non-linear dynamical systems) can also exhibit some really interesting
aperiodic behaviour where the state of the system is attracted to a set
of iterates (an orbit) which appears to be completely random
and unpredictable.
One of the most well-known examples of this phenomenon is the
logistic map - which is used as a model for the population
growth and decline of a species. The logistic map is a dynamical system
defined as follows: $$x_{n+1} = r\,x_n(1 - x_n),$$ where $x_n$ is a number between zero and one
which represents the ratio of an existing population to the maximum
possible population which can be supported by the environment. The
constant $r$ represents the growth
rate of the population. Typically we consider growth rates $0 \le r \le 4$, as values outside of this
range lead to negative populations (which doesn’t make much physical
sense!). For these values of $r$, the
system is always attracted to some sort of stable orbit, which we call
the attractor of the system, and the properties of the attractor
depend on the value of $r$.
When $r$ is less than one, the
population rapidly dies off to zero (an attractive fixed point),
regardless of the initial population. Then, as $r$ is increased we begin to see more
interesting behaviour. When $r$ is
between $1$ and $3$, the iterates are attracted towards a
non-zero fixed point, i.e. the population approaches a stable value.
This can be seen on the following figure, a plot of the first one
hundred values of $x_n$ for a value of $r$ in this range.
When $r$ is between $3$ and $1 + \sqrt{6} \approx 3.449$, the iterates are attracted to a stable two-cycle. As
seen below (for a value of $r$ in this range), this
means that for large values of $n$,
the population oscillates between two fixed values.
When $r$ is made even larger, we
see that for values of $r$ between
approximately $3.449$ and $3.544$ a four-cycle emerges, as shown on
the following figure.
If we continued to increase $r$,
we would see the emergence of an attractive $8$-cycle, $16$-cycle, etc. This is referred to as
period-doubling. Eventually, $r$ surpasses a critical value of
approximately $3.56995$ and the
behaviour changes dramatically. For values of $r$ near this critical value most initial
populations lead to orbits which are never periodic and appear random
and unpredictable, like the orbit shown in the next figure.
Although the iterates look like they could be randomly generated, the
process is entirely deterministic (if we start with the same $x_0$ we will always get the exact same set
of points). This type of behaviour is referred to as chaotic,
and the entire field of chaos theory is devoted to the study of such
systems. In this lesson we will focus on one aspect of chaotic systems:
the interesting geometric properties of the sets which they
are attracted to (called strange attractors). For example, if we plot a
large number of iterates $x_n$ on the
real number line we can visualize the space occupied by the attractor of
the logistic map in its chaotic regime (for a value of $r$ beyond the critical value). This is shown in the figure
below.
At first glance, it seems like the orbit fills in most of the space
on the real line. However, if we zoom in on a smaller region of the set,
we see that more small gaps emerge. If we continue to zoom in, we would
continue to see the emergence of more and more gaps. Although the
attractor contains an infinite number of points (since the orbit never
repeats itself), these points are all disconnected from each other (even
if we zoom in REALLY close).
In multiple dimensions, strange attractors can look very visually
striking. An example in two dimensions is the Hénon attractor, shown
below.
Strange attractors have some unique mathematical properties, one
being that they have fractional dimensions and are thus called
fractal sets. Briefly, fractal sets are those which are too
irregular to be described by our usual ideas of Euclidean geometry.
Measuring the Size of Fractal Sets
The precise definition of a fractal set is difficult to pinpoint - in
fact, Benoît Mandelbrot (the mathematician who proposed the term
fractal) is said to have proposed multiple definitions himself. One
thing fractal sets do have in common is the idea of possessing
detail at all scales like we saw in the previous example. This
means that as we zoom in on part of a fractal set more detail continues
to emerge. This strange property of fractal sets makes it difficult to
measure their sizes. As a motivating example, let’s consider the
coastline paradox - which refers to the counterintuitive
observation that the coastline of a landmass does not have a
well-defined length.
Example 1
First, let’s imagine that we want to measure the length of a
well-defined curve. The length of a curve can be approximated by adding
the sum of the straight lines which connect points on the curve, as
shown in the figure below.
If we use only a few straight lines to
approximate the length of the curve, the result is an estimate which is
lower than the true length. As we make the lines increasingly short
(and thus use more of them) our estimate becomes more accurate and the
sum approaches the curve’s true length (you may have seen this in calculus).
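The chord-sum procedure can be tried numerically on a smooth curve. This sketch (the quarter circle is my illustrative choice; its true length is $\pi/2$) shows the estimates increasing towards the true length as the chords shrink:

```python
import math

# Approximate the length of a quarter circle of radius 1 by summing the
# straight-line chords between n + 1 equally spaced points on the arc.
def chord_length_estimate(n_chords):
    pts = [(math.cos(t), math.sin(t))
           for t in [k * (math.pi / 2) / n_chords for k in range(n_chords + 1)]]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

for n in (2, 8, 64):
    print(n, round(chord_length_estimate(n), 6))
print(round(math.pi / 2, 6))  # true length: 1.570796
```

Each estimate is a little below the true length, and the gap shrinks as the chords get shorter, which is what the smooth-curve case in the text predicts.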
Let’s now consider applying the same procedure to measure the length
of a coastline. The figure below shows the result of attempting to
estimate the length of the coastline of Britain with increasingly small
measuring sticks. The shape of the coastline is highly irregular and we
see that as the size of the measuring sticks is decreased, more and more
detail is picked up, increasing the length estimate. It turns out that,
unlike the smooth curve shown above, in the case of coastlines, the
length estimates do not stabilize, instead they continue increasing as
we use smaller and smaller measuring sticks.
What we have seen is that for irregular curves, the higher the
resolution you use to measure their length (i.e. the smaller the
measuring sticks) the more detail you measure, leading to increasingly
large length estimates. Now, in reality there is a limit to how small we
can make our measuring sticks and thus to the level of detail we can
measure when it comes to physical objects like coastlines. On the other
hand, mathematically we can study fractal sets which possess detail at
infinitely fine scales. The simplest type of fractal sets which we can study are
self-similar. This means that when we zoom in on any portion of
the set, it looks similar to the original set. In other words, the set
is made up of a number of scaled-down copies of itself. Let’s take a look
at an example of a self-similar fractal.
Example 2
The von Koch curve can be constructed by a simple iterative geometric
procedure. Consider the following generator $G$, which takes a line segment and removes
the middle third, replacing it with two line segments one third the
length of the original.
We can think of $G$ as a special
type of function - only instead of taking numbers as inputs and outputs,
$G$ takes sets of points as inputs
and transforms them in a particular way. On its own, this is not all
that interesting, but let’s consider what happens when we apply $G$ iteratively.
The curve which results when we let the number of iterations go to infinity is called the von Koch
curve, $K$. We see from the figure above that $K$ is made up of four copies of
itself, each scaled by a factor of one third.
Let’s now imagine that we want to measure the size of the von Koch
curve. Since it is a curve, our first instinct would be to measure its
length. Using the same approach we used for the coastline, we can
approximate the length of the von Koch curve using finite measuring
sticks. If we use measuring sticks of length $\ell$, then our length estimate is
given by $$L = N(\ell)\,\ell,$$ where $N(\ell)$ is the number of measuring
sticks of length $\ell$ which
are required.
Let us assume that the length of the line we started out with in the
construction of the von Koch curve is one (i.e. the initial line segment in Example 2 has length one). Then if we use
measuring sticks of length $\ell = \frac{1}{3}$ we will clearly need $N = 4$ measuring sticks to cover the set.
Setting $\ell = \frac{1}{3}$ we
have $L = 4 \cdot \frac{1}{3} = \frac{4}{3}$. If we
shrink the rod further, letting $\ell = \frac{1}{9}$, then we will require $N = 16$ measuring sticks to cover the
set, resulting in $L = 16 \cdot \frac{1}{9} = \frac{16}{9}$. A pattern is clearly emerging. If we let
$\ell = \left(\frac{1}{3}\right)^n$, then
$$L = 4^n \left(\frac{1}{3}\right)^n = \left(\frac{4}{3}\right)^n.$$
Since $\frac{4}{3} > 1$, if we let $n$ go to infinity (i.e. make the measuring
rods really tiny), the length estimates will get larger and larger,
never settling on a finite value. Just like the coastline example, the
von Koch curve does not have a finite length!
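We can tabulate these length estimates directly; a minimal sketch:

```python
# Length estimates for the von Koch curve: with sticks of length (1/3)^n
# we need 4^n sticks, so the estimate L = (4/3)^n grows without bound.
for n in range(6):
    sticks = 4 ** n
    length = sticks * (1 / 3) ** n
    print(n, sticks, round(length, 4))
```

The third column keeps growing (4/3, 16/9, ...), never settling on a finite value.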
So how can we measure the size of the von Koch curve? We could try
measuring its area instead. We can estimate the area of an object in a
similar way to length, but instead of measuring sticks we cover the set
with $\ell$-tiles, squares of
side length $\ell$, and compute $A = N(\ell)\,\ell^2$. As we
let $\ell$ get very small, we
require approximately the same number of $\ell$-tiles to cover the set as we
needed measuring sticks of length $\ell$. Thus for large $n$ we have
$$A = 4^n \left(\frac{1}{3}\right)^{2n} = \left(\frac{4}{9}\right)^n.$$
Since $\frac{4}{9} < 1$, if we let $n$ go to infinity, our estimates of the
area will get smaller and smaller, approaching zero.
The von Koch curve has infinite length, but zero
area!
In order to measure the size of the von Koch curve, it turns out we
need some other type of measurement instead, a $D$-dimensional
measurement, if you will. Consider the estimate $M_D = N(\ell)\,\ell^D$, where $N(\ell)$ is the number of $\ell$-tiles required to cover the set.
We can also write this as $$M_D = \lim_{\ell \to 0} N(\ell)\,\ell^D.$$ This definition is actually
consistent with our usual ideas of measurement. If we let $D = 1$ then we recover our definition of
length (how we measure one-dimensional objects) and if $D = 2$, we recover the definition of area
(how we measure two-dimensional objects). We know that for the von Koch
curve when $D = 1$ we get an infinite
measurement (length), and yet when $D = 2$ we get a measurement (area) of zero, neither of which is
particularly meaningful. Perhaps there is some fractional
value of $D$ (between one and two)
which yields a finite measurement.
Let’s once again consider covering the von Koch curve with tiles of
side length $\ell = \left(\frac{1}{3}\right)^n$. Now the $D$-dimensional estimate of its size is
$$M_D = 4^n \left(\frac{1}{3}\right)^{nD} = \left(\frac{4}{3^D}\right)^n.$$ As we consider
letting $n$ go to infinity, we see that
if $3^D > 4$, the result
will be zero, and if $3^D < 4$, the result will be infinite. The only way we will have a
finite measurement is if $3^D = 4$. Solving for $D$ we have
$$D = \frac{\log 4}{\log 3} \approx 1.2619,$$ which is a number between one
and two, as expected. This fractional number is called the fractal
dimension of the von Koch curve. For self-similar fractals like the
von Koch curve we can compute this number exactly - in the real world we
use numerical methods to estimate this number.
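The balancing act can be checked numerically; here's a sketch that solves $3^D = 4$ and confirms that the $D$-dimensional estimates stay finite while the length and area estimates blow up and vanish:

```python
import math

# Fractal dimension of the von Koch curve: solve 3**D = 4.
D = math.log(4) / math.log(3)
print(round(D, 4))  # 1.2619

# With ell = (1/3)**n and N = 4**n tiles, compare the estimates
# N * ell**1 (length), N * ell**D, and N * ell**2 (area).
for n in (1, 5, 10):
    N = 4 ** n
    ell = (1 / 3) ** n
    print(n, N * ell ** 1, N * ell ** D, N * ell ** 2)
```

The middle column stays pinned at (essentially) 1, while the length column grows and the area column shrinks towards zero.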
The result above can be written in another way, in terms of the
scaling relationship between the number $N(\ell)$ of $\ell$-tiles (or measuring sticks)
needed to cover the set and the number $N(r\ell)$ of $r\ell$-tiles (or measuring sticks)
needed to cover the set (where $r$ is
a scale factor which adjusts the size of the tiles/measuring
sticks).
The Scaling Relation for Self-Similar Fractal Sets
Given a self-similar fractal set $S$ with dimension $D$, the size of $S$ is given by the $D$-dimensional measurement $$M_D = \lim_{\ell \to 0} N(\ell)\,\ell^D,$$ assuming that this limit
exists. This means that when $\ell$ is very small (close to zero)
the size of $S$ is very closely
approximated by
$$M_D \approx N(\ell)\,\ell^D \approx N(r\ell)\,(r\ell)^D,$$
where $r$ is some scaling
factor. Dividing both sides by $\ell^D$ we are left with the
following important result, $$N(\ell) \approx r^D\, N(r\ell),$$ which we can use
to solve for the dimension, $D$.
Going back to the von Koch curve briefly, it makes sense to let $r = \frac{1}{3}$. Then we see that if
$N(\ell)$ is the number of
measuring sticks of length $\ell$
required to cover the von Koch curve, then $N\!\left(\frac{\ell}{3}\right) = 4\,N(\ell)$. Putting this into the scaling relation we get
$$N(\ell) \approx \left(\frac{1}{3}\right)^D N\!\left(\frac{\ell}{3}\right) = \left(\frac{1}{3}\right)^D \cdot 4\,N(\ell).$$ Dividing
both sides by $N(\ell)$ and
rearranging gives
$$3^D = 4 \quad \Longrightarrow \quad D = \frac{\log 4}{\log 3} \approx 1.2619,$$
which is in agreement with the dimension we found previously. Okay, now
it’s your turn!
Exercise 1
Consider the following generator, $G$,
which acts by removing the middle third of line segments. Repeated
application of $G$ results in the
following fractal set, referred to as the Cantor set.
Determine an appropriate value of $r$ and use the scaling relation to find
the fractal dimension, $D$, of the
Cantor set.
Letting $r = \frac{1}{3}$, we see
that if it takes $N(\ell)$
measuring sticks of length $\ell$
(or $\ell$-tiles if you prefer)
to cover the Cantor set, then it will take $N\!\left(\frac{\ell}{3}\right) = 2\,N(\ell)$ measuring sticks of length $\frac{\ell}{3}$ to cover the Cantor
set.
Putting this into the scaling relation we get $$N(\ell) \approx \left(\frac{1}{3}\right)^D \cdot 2\,N(\ell).$$ Solving
for $D$ yields $$D = \frac{\log 2}{\log 3} \approx 0.6309.$$
The Cantor set is somewhere between zero-dimensional and
one-dimensional.
Iterated Function Systems
As we saw in the previous section, self-similar fractal sets can be
constructed by a recursive procedure starting with a generator which we
have thus far defined graphically. We can make this construction more
precise mathematically using functions. Let’s take another look at the
Cantor set from the previous exercise.
Example 3
Consider the following two functions which map the interval $[0,1]$ onto itself: $f_1(x) = \frac{x}{3}$ and $f_2(x) = \frac{x}{3} + \frac{2}{3}$. We
will now examine the repeated action of $f_1$ and $f_2$ on the interval $[0,1]$. Instead of looking at where
each function sends a single point $x$, we will consider how
each function acts on sets of points in $[0,1]$.
For instance, we see that $f_1$
maps the point $0$ to $0$ and $1$ to $\frac{1}{3}$, and
all other $x \in [0,1]$ somewhere
inside this interval. We can write this as $f_1([0,1]) = \left[0, \frac{1}{3}\right]$. In other
words, $f_1$ shrinks the interval
$[0,1]$ to $\left[0, \frac{1}{3}\right]$. Similarly, we find that
$f_2([0,1]) = \left[\frac{2}{3}, 1\right]$.
Now, instead of considering the action of $f_1$ and $f_2$ separately, let’s consider them
acting in unison as a parallel operator, which we will call $\hat{F}$. We define the action of $\hat{F}$ on any subset $S$ of $[0,1]$ as follows, $$\hat{F}(S) = f_1(S) \cup f_2(S),$$ where $\cup$ denotes taking the union
of two sets, i.e. joining them together. To illustrate:
$$\hat{F}([0,1]) = \left[0, \tfrac{1}{3}\right] \cup \left[\tfrac{2}{3}, 1\right], \qquad \hat{F}^2([0,1]) = \left[0, \tfrac{1}{9}\right] \cup \left[\tfrac{2}{9}, \tfrac{1}{3}\right] \cup \left[\tfrac{2}{3}, \tfrac{7}{9}\right] \cup \left[\tfrac{8}{9}, 1\right].$$
The parallel operator $\hat{F}$ performs
the same middle thirds dissection procedure used to produce the Cantor
set. As a result, if we let the number of iterations go to
infinity, the limiting set will be the Cantor set.
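This interval bookkeeping can be carried out exactly on a computer. The sketch below represents a set as a list of intervals and applies the parallel operator built from $f_1(x) = x/3$ and $f_2(x) = x/3 + 2/3$:

```python
from fractions import Fraction

# Parallel operator F(S) = f1(S) ∪ f2(S), with f1(x) = x/3 and
# f2(x) = x/3 + 2/3, acting on a list of intervals (a, b).
def parallel(intervals):
    out = []
    for a, b in intervals:
        out.append((a / 3, b / 3))                                    # f1
        out.append((a / 3 + Fraction(2, 3), b / 3 + Fraction(2, 3)))  # f2
    return sorted(out)

S = [(Fraction(0), Fraction(1))]
for _ in range(2):
    S = parallel(S)
print(S)  # the 4 intervals [0,1/9], [2/9,1/3], [2/3,7/9], [8/9,1]
```

Using `Fraction` keeps the endpoints exact, so the output matches the middle-thirds dissection precisely.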
What we just saw is an example of an iterated function
system. Before we can properly define an iterated function system,
we first need the definition of a contraction mapping.
Definition (Contraction Mapping in One Dimension)
Let $X \subseteq \mathbb{R}$ and
$f : X \to X$ ($f$ is a function which maps $X$ to $X$). We say that $f$ is a contraction
mapping on $X$ if there
exists a constant $0 \le c < 1$
such that $$|f(x) - f(y)| \le c\,|x - y| \quad \text{for all } x, y \in X.$$ The smallest value of $c$ for which the above inequality holds is
called the contraction factor of $f$.
What this definition means is that a contraction mapping maps any two distinct points $x$ and $y$ in its domain closer to each other.
Example 4
Consider $f_1(x) = \frac{x}{3}$
and $f_2(x) = \frac{x}{3} + \frac{2}{3}$ from the previous example. These are both
contraction mappings on the domain $[0,1]$. To see this, consider $x, y \in [0,1]$. We see that $$|f_1(x) - f_1(y)| = \left|\frac{x}{3} - \frac{y}{3}\right| = \frac{1}{3}|x - y|.$$ So $f_1$ is a contraction mapping with
contraction factor $c = \frac{1}{3}$.
Using the same approach we can show that $|f_2(x) - f_2(y)| = \frac{1}{3}|x - y|$.
Thus, both $f_1$ and $f_2$ are contraction mappings with
contraction factor $c = \frac{1}{3}$.
Linear functions, like $f_1$ and
$f_2$ above (also called affine
transformations), represent familiar geometrical transformations
including scaling, reflection, translation, and rotation (in two
dimensions). In our construction of self-similar fractal sets we will
only consider affine transformations.
Exercise 2
Consider the linear function $f(x) = ax + b$. Show that when $|a| < 1$, $f$ is a contraction
mapping on the domain $\mathbb{R}$.
Determine the contraction factor of $f$.
To show that $f$ is a contraction
mapping, consider $x, y \in \mathbb{R}$. We see that $$|f(x) - f(y)| = |(ax + b) - (ay + b)| = |a|\,|x - y|.$$ Since $|a| < 1$, $f$ is a contraction mapping. The
contraction factor of $f$ is $c = |a|$.
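The equality $|f(x) - f(y)| = |a|\,|x - y|$ can be spot-checked numerically; a small sketch (the values of $a$, $b$, and the sample points are arbitrary illustrative choices):

```python
# Check the contraction property of f(x) = a*x + b on a few point pairs.
a, b = 0.4, 1.0
f = lambda x: a * x + b

pairs = [(-2.0, 3.0), (0.1, 0.2), (-5.0, -1.5)]
# |f(x) - f(y)| should equal |a| * |x - y| (up to float rounding).
print(all(abs(abs(f(x) - f(y)) - abs(a) * abs(x - y)) < 1e-12
          for x, y in pairs))  # True
```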
Now that we are familiar with the idea of contraction mappings we can
define an iterated function system (IFS) properly.
Definition (Iterated Function System (IFS) in One Dimension)
Let $f_1, f_2, \ldots, f_N$ be a
set of $N$ linear contraction
mappings on a closed and bounded subset $X \subset \mathbb{R}$, i.e. for each $i$, $f_i : X \to X$, and there exists a
constant $0 \le c_i < 1$ such that
$$|f_i(x) - f_i(y)| \le c_i\,|x - y| \quad \text{for all } x, y \in X.$$ Associated with this set of contraction
mappings is the parallel operator $\hat{F}$,
defined as $$\hat{F}(S) = f_1(S) \cup f_2(S) \cup \cdots \cup f_N(S) \quad \text{for any subset } S \subseteq X.$$ The set of maps with
parallel operator $\hat{F}$ defines an
$N$-map iterated function
system (IFS) on the set $X$.
The parallel operation of the functions $f_1$ and $f_2$ defined at the beginning of this
section is an IFS which, when iterated indefinitely, generates the Cantor
set.
Theorem (IFS Attractor)
There exists a unique set $A \subseteq X$ which is the fixed point of the parallel IFS operator $\hat{F}$ (as defined above), i.e. $$\hat{F}(A) = A.$$ Moreover, if
you start with any set $S_0 \subseteq X$
(even a single point), and form the iteration sequence $$S_{n+1} = \hat{F}(S_n), \quad n = 0, 1, 2, \ldots,$$ the sequence of sets converges to the fixed-point set
$A$, which is known as the
attractor.
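A practical shortcut that exploits this theorem is the so-called "chaos game": instead of tracking whole sets, apply one randomly chosen map of the IFS to a single point at each step; the orbit is pulled onto the attractor. A sketch using the Cantor-set IFS from this section:

```python
import random

# Chaos game for the Cantor-set IFS: repeatedly apply one of
# f1(x) = x/3 or f2(x) = x/3 + 2/3, chosen at random.
random.seed(0)
x = 0.5          # 0.5 is NOT in the Cantor set, but that doesn't matter
points = []
for _ in range(1000):
    x = x / 3 if random.random() < 0.5 else x / 3 + 2 / 3
    points.append(x)

# After the first step, every point avoids the removed middle
# third (1/3, 2/3), as points of the Cantor set must.
print(all(not (1 / 3 < p < 2 / 3) for p in points[10:]))  # True
```

The same idea works in two dimensions, which is how pictures of fractal attractors are usually generated.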
Practically, this means that given a set of contraction maps, we can
quickly construct the attractor set $A$, allowing us to generate images of many
interesting fractal sets. We have defined iterated function systems in
one dimension; however, we can also generalize the definition to multiple
dimensions to generate more interesting pictures which lie in two- and
three-dimensional space. We won’t formally define this, but we will look
at an example of what this looks like graphically.
Example 5
The Sierpinski triangle is constructed by starting with an
equilateral triangle and applying three contraction maps. The first map
shrinks the triangle by a factor of $\frac{1}{2}$, creating a smaller triangle
which lines up with the leftmost corner of the original. The second map
also shrinks the triangle by $\frac{1}{2}$ and translates it to the
right so that it lines up with the rightmost corner of the original.
Finally, the third map also shrinks the triangle by a factor of $\frac{1}{2}$ and translates it to the
right and upwards so that it lines up with the top corner of the
original. The result of the first few iterations of this process is
shown in the figure below. From the figure, we see that the Sierpinski
triangle is self-similar. It is made up of three copies of itself, each
scaled by a factor of $\frac{1}{2}$.
We could use the scaling relation to determine the dimension of the
Sierpinski triangle, but it would be a little more difficult to work
out a covering of the Sierpinski triangle with measuring sticks or
tiles. Instead, we can compute something called the similarity
dimension, which for a self-similar fractal set gives us the same
dimension we would get from the scaling relation.
Definition (Similarity Dimension)
If we consider a self-similar set which is the union of $N$ non-overlapping copies of itself, such
that each copy is scaled down by a factor of $s$, then the similarity
dimension of the set is $$D = \frac{\log N}{\log s}.$$
Using the above definition, the dimension of the Sierpinski triangle
is $$D = \frac{\log 3}{\log 2} \approx 1.585.$$ This is
between one and two, as we might have expected.
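The similarity dimension formula is a one-liner to compute, and it reproduces every dimension met in this lesson:

```python
import math

# Similarity dimension D = log(N) / log(s) for a self-similar set made
# of N non-overlapping copies, each scaled down by a factor of s.
def similarity_dimension(n_copies, scale):
    return math.log(n_copies) / math.log(scale)

print(round(similarity_dimension(4, 3), 4))  # von Koch curve:      1.2619
print(round(similarity_dimension(2, 3), 4))  # Cantor set:          0.6309
print(round(similarity_dimension(3, 2), 4))  # Sierpinski triangle: 1.585
```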