Part G: Numerical Integration and Visualization

In the following we begin to look at dynamical systems of higher dimension than those we've seen so far: the 1D maps of last time and the 1D pitchfork ODE integrated in the exercises. The first systems are 2D maps (the Baker's, Henon, and coupled logistic maps). Then we move on to continuous-time flows specified by 2D and 3D ordinary differential equations. These require that we generalize the Euler integration method from the 1D version we already have.

The first examples use matplotlib to visualize behaviors in two dimensions. When we get to the 3D flows, however, we will introduce a convenient interactive 3D visualization package for Python.

Visualizing two-dimensional maps

Here are Python programs for exploring two-dimensional maps. They use matplotlib in ways familiar from last time, when we looked at various structures in one-dimensional maps.

These notes introduce the programs and explain a few interesting facts or features, either about the implementation or about the system it simulates. The notes do not exhaust what you can do, so take the following as implicit suggestions to modify the simulation parameters, initial conditions, and so on, and thereby explore the programs' structure and the systems' behaviors. The notes will sometimes make statements without explanation. If something is unclear, consider it an opportunity to explore and to answer your own questions with the simulations. You have the tools in your hands!

Dissipative Baker's Map: BakersMap.py simply iterates the dissipative Baker's map and plots the resulting points, which lie on a self-similar fractal set.
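For concreteness, here is a minimal sketch of the kind of iteration loop BakersMap.py performs, assuming one common form of the dissipative Baker's map (the parameter a, the initial condition, and other details here are illustrative and may differ from the actual program):

import matplotlib.pyplot as plt

a = 0.3            # y-contraction rate; dissipative for a < 1/2
x, y = 0.1, 0.2    # an arbitrary initial condition
xs, ys = [], []
for n in range(20000):
    if x < 0.5:
        y = a * y                  # left half maps to a strip at the bottom
    else:
        y = a * y + 0.5            # right half maps to a strip starting at y = 1/2
    x = (1.9999999 * x) % 1.0      # (almost) the shift map; see the discussion below
    xs.append(x)
    ys.append(y)

plt.plot(xs, ys, ',')              # pixel markers keep many points fast
plt.axis([0.0, 1.0, 0.0, 1.0])     # force the full unit square (also discussed below)
plt.show()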

Use the pan and zoom features of matplotlib's display window to explore the self-similarity. The claim is that the more you zoom in, the more you see the same structure. Is this true?

Another claim is that the self-similarity is uniform across the attractor. Try zooming in, starting at different locations on the attractor. Is the self-similarity uniform?

If you find the pan-zoom action too slow, the cause may be too many iterates. You can reduce their number and try again. Of course, the fewer the points, the harder it is to see detailed self-similar structure.

One important trick to note in how the program implements the Baker's map: its x-coordinate update does not use the 2 xn specified in the map's equations. Instead it uses 1.9999999 xn. Why? Run the simulation with the parameter set to 2.0. What happens?

This is due to the fact that this coordinate of the Baker's map is a one-dimensional map, the so-called shift map. The latter shifts up the bits in the initial condition x0, throwing away the most significant bit at each step. Since the simulation uses double-precision floating-point numbers, whose significands hold about 53 bits, it takes only about n = 53 iterations for the least significant bit to become the most significant. Due to the deterministic round-off specified in the IEEE floating-point arithmetic standard that Python uses, the result is that the iterates become xn = 0.0 at that iteration and ever after. Only if our arithmetic had infinite precision would we see an aperiodic, chaotic orbit.

The trick to get around this is to replace 2.0 xn with the nearby multiplier 1.9999999 xn. The multiplication is then inexact, the bits in the iterates xn get shuffled more thoroughly, and they behave the way the mathematics says they should.
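You can watch this collapse directly with a few lines of Python (a standalone sketch, independent of BakersMap.py):

# The shift map in double precision: every double is a dyadic rational,
# so repeated doubling mod 1 eventually lands exactly on 0.0.
x = 0.1
for n in range(60):
    x = (2.0 * x) % 1.0
print(x)    # prints 0.0: all the bits of x0 have been shifted away

# With the 1.9999999 multiplier, round-off keeps feeding in fresh bits.
x = 0.1
for n in range(60):
    x = (1.9999999 * x) % 1.0
print(x)    # a nonzero, "typical" chaotic iterate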

This serves as a first hint at some of the subtleties of simulating chaotic systems (and non-chaotic systems, for that matter) on finite-precision computers.

A more physically realistic solution than the above trick would be to replace the IEEE standard's deterministic round-off algorithm with one that randomly flips the least significant bit. This, at least, mimics the reality that no real-world system has infinite precision; rather, it interacts with other systems that add noise to the state.
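If you want to experiment with that idea, here is one way to flip the least significant bit of a double at random, using the standard struct module to reach the float's bit pattern (purely illustrative; this is not part of BakersMap.py, and the function name is made up):

import random
import struct

def flip_lsb_randomly(x):
    # Reinterpret the float's 8 bytes as a 64-bit integer, maybe flip bit 0.
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    if random.random() < 0.5:
        bits ^= 1
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

One could then apply flip_lsb_randomly() to the state after each iteration of the map.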

The plotting is rather straightforward. One thing to note, though, is how the plot is set up to put the points in the unit square [0,1] x [0,1]. If this is not done, plot() auto-scales based on the range of the data given to it, so the unit square is not shown; instead you get a rectangle circumscribing the attractor, with the x and y axes scaled non-uniformly.

Henon Map: HenonMap.py is rather similar to the previous program, except that the Baker's map function definition is replaced with one for the Henon map. The latter is a quadratic map, so its stability (and instability) properties depend on where the state is in the plane. One consequence is that the uniformity observed for the Baker's map does not hold for the Henon map attractor.
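For reference, here is the map itself at the classic parameter values, a sketch only (HenonMap.py's variable names may differ):

a, b = 1.4, 0.3

def henon(x, y):
    # x' = 1 - a x^2 + y,  y' = b x
    return 1.0 - a * x * x + y, b * x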

Explore this. In particular, look at the attractor's self-similarity with the display window's pan-zoom feature. What do you see? One thing to notice: unlike the Baker's map, the structure is not exactly the same as you zoom in on different points of the attractor.

Henon Mapping a Line: HenonMapLine.py also explores the Henon map, but rather than iterating a single initial condition (a zero-dimensional set), it iterates a one-dimensional line of initial conditions. It is often very instructive to see how a nonlinear system acts on sets of different dimension and structure: points, lines, collections of closely packed initial conditions, and so on.

In the case of the Henon map, the effect on the line of initial conditions makes the stretching-and-folding mechanism rather clear. Zoom in on various iterates of the line to see how it grows longer, and increasingly more folded, with the number of iterates.
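In outline, mapping a line forward is just a matter of applying the map to a whole array of points along a segment; here is a sketch (HenonMapLine.py's details, such as the segment chosen and the number of iterates, may differ):

import numpy as np
import matplotlib.pyplot as plt

a, b = 1.4, 0.3
x = np.linspace(-1.0, 1.0, 10000)   # a line of initial conditions along y = 0
y = np.zeros_like(x)
for n in range(5):
    x, y = 1.0 - a * x * x + y, b * x   # map every point on the line at once
    plt.plot(x, y, ',')                 # overlay each successive image of the line
plt.show()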

A look ahead and back: In two dimensions the stable and unstable manifolds of hyperbolic fixed points are one-dimensional. Thus, we can modify the program to explore how the stable manifold of the fixed point on the Henon attractor is organized.

Recall that the stable manifold of a fixed point consists of all those states that tend to the point in forward time. It turns out, by happy coincidence, that we can invert the Henon map and so run it in reverse time; under the inverse map the stable manifold becomes unstable, so iterates flow out along it rather than toward the fixed point. We can therefore iterate points placed very near the fixed point in the stable direction, which we get by calculating the eigenvectors of the map's Jacobian there. The result is that we can use HenonMapLine.py to map out the stable manifold of this (and other) fixed points in the Henon map by iterating the “line” of initial conditions just described.
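The inversion itself is just algebra on the standard Henon form x' = 1 - a x^2 + y, y' = b x; a sketch:

a, b = 1.4, 0.3

def henon_inverse(x, y):
    # Given the image (x, y), recover the pre-image (x_prev, y_prev).
    x_prev = y / b
    y_prev = x - 1.0 + a * x_prev * x_prev
    return x_prev, y_prev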

You might want to give this a try ... or you can wait until next week.

Two Coupled Logistic Maps: CoupledLogisticMaps.py shows how to build higher-dimensional nonlinear systems from lower-dimensional ones: it simulates two logistic maps that are coupled together.
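Concretely, here is one common way to couple two logistic maps; this is a sketch only, since CoupledLogisticMaps.py may use a different coupling form or parameter values (r, c, and the names below are illustrative):

r, c = 3.7, 0.1    # logistic parameter and coupling strength (illustrative)

def f(x):
    return r * x * (1.0 - x)

def coupled(x, y):
    # Each map is driven mostly by itself plus a fraction c of the other.
    return (1.0 - c) * f(x) + c * f(y), (1.0 - c) * f(y) + c * f(x)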

Explore what happens as you vary initial conditions and parameters. Note that at some settings the iterates will fall out of the unit square and fly off to infinity. How would you correct for this?

One of the interesting properties of two coupled logistic maps is that there are multiple attractors. This is a bit odd, since each map individually has a single attractor: all points in the interval are in its basin of attraction. So what's going on when two maps are coupled together?

Three attractors (at least): CoupledLogisticMaps.3ICs.py illustrates that, at the parameters used, there are (at least!) three attractors.

Use the display window pan and zoom functions to look at the three attractors. Are they all of the same type or do they differ?

Depending on where the initial condition is, you will see a two-band chaotic attractor, an 8-band chaotic attractor near the diagonal, or a period-4 limit cycle.

There are others! Can you find them? How would you go about mapping which initial condition goes to which attractor? That is, how would you map out the attractors' basins of attraction?
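Here is a minimal sketch of one way to begin: iterate a grid of initial conditions, flag any orbit that escapes the unit square, and save each orbit's final state for later sorting by attractor (the coupling form and parameters are the same illustrative ones as above; classifying the final states is left open):

import numpy as np

def step(x, y, r=3.7, c=0.1):
    fx, fy = r * x * (1.0 - x), r * y * (1.0 - y)
    return (1.0 - c) * fx + c * fy, (1.0 - c) * fy + c * fx

N = 100                        # grid resolution
escaped = np.zeros((N, N), dtype=bool)
finals = np.zeros((N, N, 2))   # final states, to be sorted by attractor
for i, x0 in enumerate(np.linspace(0.0, 1.0, N)):
    for j, y0 in enumerate(np.linspace(0.0, 1.0, N)):
        x, y = x0, y0
        for n in range(500):
            x, y = step(x, y)
            if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
                escaped[i, j] = True
                break
        finals[i, j] = x, y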

Numerically Integrating Ordinary Differential Equations

So far the dynamics tools have simulated discrete-time mappings. The “simulation” method was utterly trivial: simply apply a function to the current state. However, many natural phenomena vary continuously in time and are typically modeled by ordinary differential equations (ODEs) in which the state changes differentially in an infinitesimal amount of time. Simulating ODEs, as a consequence, requires more care and there are a variety of approximation techniques for solving—that is, integrating—them to get solutions.

Here we'll explore three: Euler integration, an improved version of it, and the Runge-Kutta method, which will become our preferred approach.

These are explained in lecture notes available as a PDF.

Euler method for integrating 1D ODEs: Euler1D.py. Using the Euler method, this program integrates the pitchfork-bifurcation ODE from four different initial conditions. (This was an exercise in a previous lab.) Notice the non-smooth behavior: these are errors in the integration method. They can be reduced by setting the time step dt smaller, which, of course, requires more computation. Despite the early errors, the simulation does seem to be working: we know there are stable fixed points at x = 1 and x = -1 and an unstable one at the origin, and this is what we see. But is it really working? Let's compare with an improved version of the Euler method.
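In outline, the integration loop looks like this; a sketch assuming the pitchfork form dx/dt = r x - x^3 with r = 1 (Euler1D.py's details may differ):

r, dt = 1.0, 0.1

def f(x):
    return r * x - x ** 3

x = 0.1                   # one of several initial conditions
for n in range(200):
    x = x + dt * f(x)     # Euler step: follow the tangent line for time dt
print(x)                  # ends up near the stable fixed point at +1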

Improved Euler method for 1D ODEs: ImprovedEuler1D.py. The time step is the same as above, but now notice the smooth solutions. Again they go to the stable fixed points, but the trajectories are different. What is the truth? What is the ODE really doing? Let's do another comparison.
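The improvement is easy to state in code. Reusing f() from the sketch above, the improved Euler (Heun) step averages the slope at the start of the interval with the slope at the Euler-predicted endpoint:

def improved_euler_step(x, dt):
    k1 = f(x)                        # slope at the start
    k2 = f(x + dt * k1)              # slope at the Euler-predicted end
    return x + dt * (k1 + k2) / 2.0  # advance using their average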

Fourth-order Runge-Kutta method for 1D ODEs: RK1D.py. The solutions are very similar to the improved Euler's. So, with two integration methods giving similar answers, we can be more confident that the simulated behavior is how the ODE actually behaves.
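For comparison, the fourth-order Runge-Kutta step (again for dx/dt = f(x)) samples the slope four times across the interval and takes a weighted average:

def rk4_step(x, dt):
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0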

However, is the increased complexity of the algorithm (and code) and increased amount of computing worthwhile? For this simple and low-dimensional ODE, probably not. (But stay tuned.)

But how would we convince ourselves that an integration method is correct?

The primary rule is that all methods should start to agree with each other and with the “real” behavior as the time step dt goes to zero.

Try this with the Euler method. How small must dt be before its solutions appear as well behaved as those of the other two methods?
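One way to check the rule numerically, again with the f() from above: integrate to a fixed final time with a sequence of shrinking time steps and watch the end states settle down:

for dt in (0.5, 0.1, 0.02, 0.004):
    x = 0.1
    t = 0.0
    while t < 20.0:
        x = x + dt * f(x)
        t += dt
    print(dt, x)    # the end states should converge as dt shrinks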

Which method to use?

What the previous programs show is that, at the same time step size, we get much better solutions with the improved Euler and (fourth-order) Runge-Kutta methods.

It turns out, though, that a very good overall trade-off between speed of integration and accuracy comes with using the fourth-order Runge-Kutta method.

There are special cases, though, where even it does badly in its estimates. So we should always be aware that any integration method makes only estimates of the true trajectories.

Higher dimensions

Fourth-order Runge-Kutta method for 2D ODEs: RK2D.py.

This code illustrates the fourth-order Runge-Kutta integrator applied to the van der Pol 2D ODE oscillator, with parameters set so that there is a stable limit cycle.

Though the code for a 2D Runge-Kutta method is more detailed than that for 1D, it's simply the same idea applied to vectors.
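Here is a minimal sketch of that idea, with the state held in a NumPy array so the 1D stepper carries over essentially unchanged (RK2D.py's names and parameter values may differ; the van der Pol field is written as dx/dt = y, dy/dt = mu (1 - x^2) y - x):

import numpy as np

mu = 1.0

def vdp(v):
    x, y = v
    return np.array([y, mu * (1.0 - x * x) * y - x])

def rk4_step(v, dt):
    k1 = vdp(v)
    k2 = vdp(v + 0.5 * dt * k1)
    k3 = vdp(v + 0.5 * dt * k2)
    k4 = vdp(v + dt * k3)
    return v + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

v = np.array([0.1, 0.0])      # start near the unstable fixed point
for n in range(5000):
    v = rk4_step(v, 0.01)     # the orbit spirals out onto the limit cycle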

The Euler and improved Euler integrators do markedly worse on this system.

Visualization in 3D: MayaVi

MayaVi is a relatively accessible way to program dynamic and interactive 3D objects and scenes in Python.

For reference information, see the MayaVi Users Guide documentation.

To use MayaVi from iPython, start iPython with the following command-line flag:
ipython --pylab
The need for this comes from the fact that MayaVi's graphical display window must run in parallel with iPython, each with its own thread of control.

Here is a simple example to show how to visualize simulated behavior in three dimensions using MayaVi. The programming exercises this week ask you to modify the example to make the simulation more accurate. We will need this modified program for building the more complex tools next week.

But first, a simple example: MayaViTest.py
# A quick way to interact with the MayaVi display
#
from enthought.mayavi import mlab

# Bring up the MayaVi window (without the object browser)
#
figure = mlab.figure()

# One of several built-in tests
#
mlab.test_quiver3d()

The program starts by importing the module "mlab", which is a simple Python scripting interface to MayaVi for 3D plotting.

The default is to bring up a basic graphical display window.

The built-in test function test_quiver3d draws arrows to show a simple vector field.

You can zoom in and out, and rotate the display interactively in MayaVi. Exactly how you do this depends on the system you're running; typically, dragging with the left mouse button rotates the scene, dragging with the middle button pans, and the scroll wheel (or a right-button drag) zooms.

Notice the buttons at the top of the MayaVi graphics window.

The buttons labeled X, Y, and Z toggle views along the respective axes.

Another button gives an isometric view.

Another turns the reference axes on and off.

And yet another changes to full screen mode; the ESC key returns from full screen to normal display.

Get comfortable with these ways to interact with MayaVi.

The Lorenz Attractor in 3D: LorenzODE.py

This program simulates the Lorenz ODE, showing the resulting trajectory in its 3D state space.

You can rotate and zoom in and out in the display window. Can you see the geometry of the attractor's branched manifold? Look especially at the region near the z-axis, where the two lobes of the attractor are sutured together. What's the geometry of this suturing?

Note the use of the Euler integration scheme. We know this isn't an accurate method. So, the exercises ask you to make the Euler scheme a function and then to add another integration function that implements the fourth-order Runge-Kutta method.
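As a starting point for that exercise, here is a minimal sketch with the Euler step pulled out into its own function and the trajectory drawn with mlab.plot3d (LorenzODE.py's names and parameter values may differ; sigma = 10, rho = 28, beta = 8/3 are the classic choices):

from enthought.mayavi import mlab
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(v):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def euler_step(v, dt):
    return v + dt * lorenz(v)

v = np.array([1.0, 1.0, 1.0])
points = [v]
for n in range(20000):
    v = euler_step(v, 0.005)
    points.append(v)
points = np.array(points)

mlab.plot3d(points[:, 0], points[:, 1], points[:, 2], tube_radius=None)
mlab.show()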

Play around with this and look at the code.

There are many more things one can do with the MayaVi module.

For example, use iPython to find out about the available functions using the ? operator and tab completion.

As noted, we'll come back to explore more of the features next week. For now you can look at the example programs on the MayaVi website.

