Part E: One-Dimensional Dynamics and Numerical Integration

Plotting (continued)

We continue exploring two-dimensional graphics with matplotlib, illustrating with examples how to program one-dimensional map tools: cobweb diagrams and histograms.

Complete programs will be described and links provided so you can download them for study and modification. Be sure to move them to a project directory so that you can modify and save them.

Simple plot: simple_plot.py. We start with the simplest example of using matplotlib's plot() function, which we introduced last time. Here we illustrate how to label axes and provide a heading for the plot.

Recall that plot() has many, many options. You can easily remind yourself of these within IPython by using the plot? query.
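A minimal sketch of axis labeling and titling (this is not the contents of simple_plot.py itself; the function and label text here are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

# A minimal labeled plot; the function plotted here is just an example.
x = np.linspace(0.0, 2.0 * np.pi, 200)
plt.plot(x, np.sin(x))
plt.xlabel('x')                    # horizontal-axis label
plt.ylabel('sin(x)')               # vertical-axis label
plt.title('A simple labeled plot')
# call plt.show() to bring up the display window
```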

Another plot: simple_plot_2.py shows how to plot a user-defined function and how to plot two different graphs in the same display with subplot(). The program also shows how to plot using various line styles and data tokens.
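The subplot() idea can be sketched as follows; the function, line styles, and panel layout here are illustrative, not necessarily those used in simple_plot_2.py:

```python
import numpy as np
import matplotlib.pyplot as plt

# A user-defined function to plot (illustrative choice)
def f(x):
    return x * np.exp(-x)

x = np.linspace(0.0, 5.0, 100)
plt.subplot(2, 1, 1)             # 2 rows, 1 column, select panel 1
plt.plot(x, f(x), 'r--')         # red dashed line
plt.subplot(2, 1, 2)             # select panel 2
plt.plot(x, np.cos(x), 'bo')     # blue circle markers ("data tokens")
# plt.show() displays both panels in one window
```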

Saving plots to a file

By uncommenting the line near the end you can also save the plot to a PNG bitmap file at a resolution of 600 dots-per-inch (dpi):
savefig('simple_plot_2', dpi=600)

Run the program again and you'll see the screen display, of course. If you look in your working directory, though, you'll see a file simple_plot_2.png.

Generally, you can fire up a browser and open the file to see the saved figure.

The open command is the Macintosh way to view a graphic in its Preview application. So, on a Mac, in IPython you can use a shell escape (!) to see the file in a previewer:
In [12]: !open simple_plot_2.png

If you're on Linux and the ImageMagick package is installed, you can use its display command to show filename.png on a system running X Windows.
In [13]: !display simple_plot_2.png

For such a simple plot, the png graphic is a fairly big file (327K on my system!):
In [14]: ls -lat simple_plot_2.png
-rw-r--r--  1 chaos  admin  327074 Apr 30 17:57 simple_plot_2.png

This is generally true for bitmap graphics, and it is one reason why using a vector-based format, such as PostScript or SVG, is often a good idea. Since only the plotted objects are stored, not every pixel, vector-based formats save a good deal of space. Beyond saving disk space, vector-based formats also have scalable resolution and are editable by (say) Adobe Illustrator, in the case of PostScript.

matplotlib allows for PostScript output. Uncomment the line
#savefig('simple_plot_2.ps', dpi=600)

and re-run the program. The output will be in the specified file:
In [38]: cat simple_plot_2.ps
%!PS-Adobe-3.0
%%Title: simple_plot_2.ps
%%Creator: matplotlib version 0.90.0, http://matplotlib.sourceforge.net/
%%CreationDate: Sun Apr 29 17:58:08 2007
...

Lots of output. Try viewing simple_plot_2.ps in a PostScript viewer, if you have one.

It's markedly smaller:
In [16]: ls -lat simple_plot_2.ps
-rw-r--r--  1 chaos  chaos  99477 Apr 30 18:02 simple_plot_2.ps

Multiple plots (continued)

As we saw, matplotlib's subplot() function makes it easy to compose graphics with many plots. multiplot.py shows you how to put multiple plots under programmatic control. This is very useful when you have many outputs from a simulation (say), but want an integrated view to get the big picture.

multiplot.py as a graphic is way too busy, but you get the idea: Make Python work for you!

One-Dimensional Dynamics

We now have enough Python and matplotlib use under our belts that we can put the basic elements together to look at time series of logistic map iterates.

The following gives a series of programs, increasing in sophistication, and a short description of the purpose of each. In addition to running them and varying them in the ways suggested, you should study their structure to see how the algorithms are designed to achieve the claimed purpose and how Python features are used.

Logistic map times series: LogisticMap.py produces a series of iterates, starting from an initial condition, storing the result in an array and then passing this array to the plot() function.

I set the control parameter to r = 3.83, which is where one finds a periodic oscillation: a period-3 orbit. Notice that there is a complicated transient that eventually settles down to the period-3 orbit. The complicated transient is a hint of the chaos that is nearby in parameter space. That is, the period-3 orbit sits in the midst of the logistic map's chaotic parameter regime.
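The iteration loop at the heart of LogisticMap.py can be sketched as follows (the initial condition and iterate count here are illustrative):

```python
# Iterate the logistic map x_{n+1} = r x_n (1 - x_n) and collect the orbit.
r = 3.83       # period-3 window inside the chaotic regime
x = 0.2        # illustrative initial condition
iterates = [x]
for n in range(100):
    x = r * x * (1.0 - x)
    iterates.append(x)
# pass iterates to plot() to watch the transient settle onto the period-3 orbit
```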

Sensitivity to initial conditions: LogisticMap2ICs.py. It is very instructive to look at the time series from two different, but very close, initial conditions. This time we use a “generic” chaotic parameter value: r = 3.7.

The two initial conditions are only 10^-5 apart, but it takes only twenty or so iterations before they diverge far from each other.
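A sketch of the two-trajectory comparison: run both initial conditions side by side and track their separation. The base initial condition 0.3 is illustrative; only the 10^-5 offset matters.

```python
r = 3.7
x, y = 0.3, 0.3 + 1e-5        # two ICs differing by 10^-5
separation = [abs(x - y)]
for n in range(50):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    separation.append(abs(x - y))
# plotting both orbits, or separation on a log scale, shows the divergence
```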

Plotting the 1D map's function: To begin to understand what's creating the sensitivity to small variations in initial state (and other map properties), we have to go back to the logistic map's function: x_{n+1} = f(x_n). LogisticFunction.py simply plots that function f(x) and its second iterate f^2(x). The identity y = x is also plotted for reference.

Wherever the functions cross the identity there are fixed points: values x such that f(x) = x or f^2(x) = x. How many fixed points do you see for f(x) and for f^2(x)?
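The fixed-point condition can also be checked numerically. A sketch (the parameter value is illustrative; p = 1 - 1/r is the map's nonzero fixed point, included here for reference against the graph):

```python
import numpy as np

r = 3.83
def f(x):
    return r * x * (1.0 - x)

x = np.linspace(0.0, 1.0, 400)
y1 = f(x)          # first iterate f(x)
y2 = f(f(x))       # second iterate f^2(x)
# plot(x, y1), plot(x, y2), and plot(x, x) overlay the two curves and the
# identity; crossings with y = x are the fixed points.
p = 1.0 - 1.0 / r  # nonzero fixed point of f, hence also of f^2
```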

How the 1D map creates iterates: LogisticCobweb.py shows a quick way to visualize how iterates are generated by a 1D map: the so-called cobweb diagram. The iterates are “reflected off” of the identity. This is equivalent to taking the next value x_{n+1} as the new “initial condition” x_n. Pay particular attention to how the lines illustrating the reflection are created. Trace through the code so that you understand how this is done. Change nIterates from very small values to larger values.
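The reflection trick amounts to alternating vertical segments (up to the curve) and horizontal segments (over to the identity). A sketch of how such a cobweb polyline can be built (not LogisticCobweb.py's exact code; parameter, IC, and iterate count are illustrative):

```python
r = 3.83
def f(x):
    return r * x * (1.0 - x)

x = 0.1
xs, ys = [x], [0.0]        # start on the horizontal axis
for n in range(20):
    fx = f(x)
    xs += [x, fx]          # vertical segment up to the curve,
    ys += [fx, fx]         # then horizontal over to the identity line
    x = fx                 # the reflected value becomes the next iterate
# plot(xs, ys) draws the cobweb; overlay plots of f and of y = x
```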

The probability distribution of iterates: LogisticHistogram.py. We can look at the variation in probability density by estimating how often iterates fall in small bins on the interval. We do this by building a histogram of the map's iterates, which directly counts the number of visits to a pre-selected set of bins. Rather than use some package's built-in histogram function, the program builds its own; this gives us more control, speed, and storage efficiency.
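A hand-built histogram can be sketched as follows; the parameter value, bin count, iterate count, and initial condition here are all illustrative:

```python
r = 3.7
nBins = 100
nIterates = 10000
counts = [0] * nBins
x = 0.4
for n in range(nIterates):
    x = r * x * (1.0 - x)
    counts[min(int(x * nBins), nBins - 1)] += 1  # which bin did x land in?
# normalize the counts to a probability density on [0, 1]
density = [c * nBins / nIterates for c in counts]
```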

Numerically Integrating Ordinary Differential Equations

So far our Python code has simulated discrete-time mappings. The “simulation” method was utterly trivial: simply apply a function to the current state. However, many natural phenomena vary continuously in time and are typically modeled by ordinary differential equations (ODEs) in which the state changes differentially in an infinitesimal amount of time. Simulating ODEs, as a consequence, requires more care and there are a variety of approximation techniques for solving—that is, integrating—them to get solutions.

Here we'll explore three: Euler integration, an improved version, and then the Runge-Kutta method, which is often the preferred method.

These methods are explained in lecture notes available as a PDF.

Euler method for integrating 1D ODEs: Euler1D.py. Using the Euler method, this program integrates the so-called pitchfork-bifurcation ODE from four different ICs. Notice the nonsmooth behavior: these are errors in the integration method. They can be reduced if the time step dt is made smaller. This, of course, requires more computation. Despite the errors early on, the simulation does seem to be working: we know there are stable fixed points at x = 1 and x = -1 and an unstable one at the origin. And this is what we see. But is it really working? Let's compare with an improved version of the Euler method.
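The Euler step for dx/dt = F(x) is simply x -> x + dt F(x). A sketch for the pitchfork ODE dx/dt = x - x^3 (the step size, number of steps, and initial condition here are illustrative, not necessarily those in Euler1D.py):

```python
# Euler integration of the pitchfork ODE dx/dt = x - x^3
def F(x):
    return x - x**3            # the vector field

dt = 0.1                       # integration time step
x = 0.1                        # one illustrative initial condition
for step in range(200):
    x = x + dt * F(x)          # Euler step: follow the local slope
# x has flowed to the stable fixed point at x = 1
```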

Improved Euler method for 1D ODEs: ImprovedEuler1D.py. The time step size is the same as above, but now notice the smooth solutions. Again they go to the stable fixed points. But the trajectories are different. What is the truth? What is the ODE really doing? Let's do another comparison.
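The improved Euler (Heun) step averages the slope at the current point with the slope at the Euler-predicted point. A sketch, again for the pitchfork ODE with illustrative step size and IC:

```python
def F(x):
    return x - x**3

dt = 0.1
x = 0.1
for step in range(200):
    k1 = F(x)                      # slope at the current point
    k2 = F(x + dt * k1)            # slope at the Euler predictor
    x = x + 0.5 * dt * (k1 + k2)   # trapezoidal average of the two
```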

Fourth-order Runge-Kutta method for 1D ODEs: RK1D.py. The solutions are very similar to the improved Euler's. So we have two integration methods that give similar answers, and we can be more confident that the simulated behavior is how the ODE actually behaves.
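The fourth-order Runge-Kutta step combines four slope estimates per step. A sketch for the same pitchfork ODE (step size and IC again illustrative):

```python
def F(x):
    return x - x**3

def rk4_step(x, dt):
    k1 = F(x)                      # slope at the start
    k2 = F(x + 0.5 * dt * k1)      # slope at the first midpoint estimate
    k3 = F(x + 0.5 * dt * k2)      # slope at the second midpoint estimate
    k4 = F(x + dt * k3)            # slope at the endpoint estimate
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt = 0.1
x = 0.1
for step in range(200):
    x = rk4_step(x, dt)
```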

However, is the increased complexity of the algorithm (and code) and increased amount of computing worthwhile? For this simple and low-dimensional ODE, probably not.

But how would we convince ourselves that an integration method is correct?

The primary rule is that all methods should start to agree with each other and with the “real” behavior as the time step dt goes to zero.

Try this with the Euler method. To what size of dt do you have to go so that its solutions appear as well behaved as with the other two methods?

Which method to use?

What the previous programs show is that, at the same time step size, we get better (much!) solutions with the improved Euler and (fourth-order) Runge-Kutta methods.

It turns out, though, that a very good overall trade-off between speed of integration and accuracy comes with using the fourth-order Runge-Kutta method.

There are special cases, though, where even it does badly in its estimates. So we should always be aware that any integration method makes only estimates of the true trajectories.

Higher dimensions

Fourth-order Runge-Kutta method for 2D ODEs: RK2D.py.

This code illustrates the fourth-order Runge-Kutta integrator applied to the van der Pol 2D ODE oscillator, with parameters set so that there is a stable oscillation.

Though the code for a 2D Runge-Kutta method is more detailed than that for 1D, it's simply the same idea applied to vectors.
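A sketch of a 2D Runge-Kutta step applied to the van der Pol oscillator written as a first-order system, dx/dt = y and dy/dt = mu (1 - x^2) y - x. The parameter value, step size, and initial condition here are illustrative, not necessarily those in RK2D.py:

```python
# RK4 for a 2D vector field: same idea as in 1D, applied componentwise.
mu = 2.0                       # illustrative nonlinearity parameter

def F(x, y):
    return y, mu * (1.0 - x * x) * y - x

def rk4_step(x, y, dt):
    k1x, k1y = F(x, y)
    k2x, k2y = F(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
    k3x, k3y = F(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
    k4x, k4y = F(x + dt * k3x, y + dt * k3y)
    x += (dt / 6.0) * (k1x + 2.0 * k2x + 2.0 * k3x + k4x)
    y += (dt / 6.0) * (k1y + 2.0 * k2y + 2.0 * k3y + k4y)
    return x, y

dt = 0.01
x, y = 0.1, 0.0
for step in range(5000):       # integrate to t = 50, onto the limit cycle
    x, y = rk4_step(x, y, dt)
# plotting the (x, y) trajectory shows the stable oscillation
```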

The Euler and improved Euler integrators do markedly worse on this system.

