Installing Numerical Python: To use Numerical Python, you'll need to install it first. For the details, see our handy guide with instructions and download locations.
On its own, Python is a potent tool for mathematics. When extended with two modules, Numerical Python and the DISLIN data visualization tool, Python becomes a powerhouse for numeric computing and problem solving. Numerical Python (NumPy) extends the Python language to handle matrices and supports the associated mathematics of linear algebra. Like Python, NumPy is a collaborative effort. The principal code writer was Jim Hugunin, a student at MIT. Paul Dubois at Lawrence Livermore National Laboratory, along with other major contributors such as Konrad Hinsen and Travis Oliphant, eventually took over the project.
While NumPy and Python make a good combination, Python cannot fully serve the scientific community without a data visualization package. An excellent choice is the DISLIN package, freely provided to the Python community by Helmut Michels of the Max Planck Institute. DISLIN is an extensive cross-platform (Win32/Linux/Unix) library that can be accessed from many languages, including Python. With its support for 2D and 3D plotting and basic drawing commands, you can create informative and eye-catching graphics.
Over the next five months I will show you how to use both NumPy and DISLIN in real applications, starting this week with a review of the useful and interesting world of matrices and linear algebra. You may feel like you are having a flashback to high-school math, but stick with me. It gets interesting.
A matrix is a homogeneous collection of numbers with a particular shape, like a table of numbers of the same type. The most common matrices are one- or two-dimensional. One-dimensional matrices are also called vectors. The matrices a, b, and C discussed below are examples; a and b could also be called vectors.

Although a and b contain the same elements, they differ in shape. The shape of a matrix is described by its number of rows and columns: a can be described as either a column vector or a 3-by-1 matrix, and b as a row vector or a 1-by-3 matrix. The C matrix is an example of a 3-by-3 matrix.
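The article's original figure is not reproduced here; an illustrative set of matrices (values of my own, chosen so that a and b contain the same elements) looks like this:

a = [ 1 ]
    [ 2 ]
    [ 3 ]

b = [ 1 2 3 ]

C = [ 1 2 3 ]
    [ 4 5 6 ]
    [ 7 8 9 ]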
You add and subtract matrices, as you might expect, element by element, with only a slight twist: the matrices must have the same shape. Adding one 3-by-1 matrix to another 3-by-1 matrix is a valid operation, for example, but adding a 3-by-1 matrix to a 1-by-3 matrix is not.
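For instance, with two 2-by-2 matrices (the values are just for illustration), each element of the sum is the sum of the corresponding elements:

[ 1 2 ]   [ 5 6 ]   [  6  8 ]
[ 3 4 ] + [ 7 8 ] = [ 10 12 ]

Subtraction works the same way, element by element.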
Matrix multiplication is a more complicated operation. For addition the matrices have to be the same shape, but for multiplication only the inner dimensions have to match: the number of columns in the first matrix must equal the number of rows in the second. The resulting matrix has the size of the remaining outer dimensions. As an example, multiplying a 1-by-3 matrix by a 3-by-1 matrix results in a 1-by-1 matrix (3 being the 'inner' dimension). If a 5-by-2 matrix is multiplied by a 2-by-5 matrix, a 5-by-5 matrix results.
c(i,j) = Σ (k = 1 to n) a(i,k) * b(k,j)
This formula describes how the rows of matrix a and the columns of matrix b are combined to generate matrix c. The indices in c(i,j) specify the element located at the ith row and jth column of the new c matrix. (The same convention holds for the a and b matrices.) The Greek letter Sigma stands for summation: think of it as a kind of for loop from k=1 to n, where n is the size of the inner dimension, adding up the value produced on each trip through the loop.
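If the Sigma-as-a-for-loop analogy helps, here is the formula written out as nested loops in plain Python. This sketch is mine, not part of the article or of the Numeric package, and the sample matrices are arbitrary.

def matmul(a, b):
    # a is m-by-n and b is n-by-p, each given as a list of rows;
    # the result c is m-by-p
    m, n, p = len(a), len(b), len(b[0])
    c = [[0] * p for i in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):                  # the Sigma: loop over the inner dimension
                c[i][j] = c[i][j] + a[i][k] * b[k][j]
    return c

print matmul([[1, 2, 3]], [[4], [5], [6]])      # 1-by-3 times 3-by-1: [[32]]
print matmul([[4], [5], [6]], [[1, 2, 3]])      # 3-by-1 times 1-by-3: a 3-by-3 result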
The following shows the difference between matrices with the same values but different shapes. The first example is a 2-by-1 matrix multiplied by a 1-by-2 matrix resulting in a 2-by-2 matrix. The second example is a 1-by-2 matrix multiplied by a 2-by-1 matrix resulting in a 1-by-1 matrix (also known as a scalar).
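The original figures are not reproduced here; a worked version with illustrative values of my own (a column vector with elements 1 and 2, a row vector with elements 3 and 4):

[ 1 ] [ 3 4 ]  =  [ 1*3  1*4 ]  =  [ 3 4 ]
[ 2 ]             [ 2*3  2*4 ]     [ 6 8 ]

[ 3 4 ] [ 1 ]  =  [ 3*1 + 4*2 ]  =  [ 11 ]
        [ 2 ]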
Matrix division is undefined; the concept is replaced by the matrix inverse. The inverse of a matrix A is defined by the following equation:
A A^-1 = A^-1 A = I
When a matrix and its inverse are multiplied together, the result is the identity matrix, I. The identity matrix is a square matrix in which all elements are 0 except those along the main diagonal (going from upper left to lower right), which are 1. A 3-by-3 identity matrix is shown below:

[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]
While the matrix inverse is a bit different from traditional division, the concept follows directly from scalar mathematics where:
a * (1/a) = (1/a) * a = 1
There are a couple of limitations: the matrix inverse is only defined for square matrices (same number of rows as columns), and even then it may not exist for certain sets of elements. This is similar to how dividing a scalar by zero is undefined; you just can't work out an inverse for every matrix. Where the inverse does exist, it is very useful for solving difficult equations, as we will see later in this tutorial.
Enough math review -- let's see how it looks in Numerical Python. The NumPy distribution brings to Python the concept of the multi-array: the ability to store sets of homogeneous numbers in shaped containers. Sounds like a matrix, right? The multi-array (or array) can hold numeric values of various types (integers, floating-point numbers, and complex values). The Numeric package also extends the mathematical functions to accept array arguments as well as scalars. In addition, several other modules are provided (LinearAlgebra, FFT, RanLib, and Matrix, to name a few). We will use the LinearAlgebra module later to create an inverse.
NumPy provides operations that extend beyond those defined in the traditional matrix mathematics we just reviewed. Although these operations are nonstandard, they make for more efficient computation in certain circumstances.
It is important to know that NumPy defines the binary operators '*' (multiplication) and '/' (division) to operate element-wise, just as addition and subtraction do (shape still matters, however). To perform traditional matrix multiplication or to create an inverse, you need the explicit functions matrixmultiply() and inverse().
To use the features of NumPy you must import it:
>>> from Numeric import *
Once the module has been imported, you generate an array with the array() function, which takes two arguments: a tuple or list of values and an optional type code, e.g., Float or Int. The list may be nested to create a multi-dimensional array. The array() function creates an array with the values and type specified; if no type is specified, the type of the array depends on the type of the elements.
>>> a=array((1,2))
>>> print a
[1 2]
This creates an integer array, since the values in the tuple are integers.
>>> b=array((1,2),Float)
>>> print b
[ 1. 2.]
This creates a floating-point array from integer arguments; the floating-point type code overrides the integer values.
>>> c=array(((1,2),(3,4)))
>>> print c
[[1 2]
[3 4]]
This creates a multi-dimensional array from a nested tuple.
The shape attribute of an array is expressed as a tuple. To see the shape, simply type:
>>> a=array((1,2))
>>> a.shape
(2,)
>>> c=array(((1,2),(3,4)))
>>> c.shape
(2, 2)
Note that a has only a single dimension (its shape tuple has only a single value) and that c is multidimensional (two values in the shape tuple). By default, single-dimensional arrays have only a length; the concept of rows and columns is not embodied in their structure. The shape of an array can be altered with the reshape() function, which takes two arguments: an array and a shape tuple.
>>> a=array((1,2))
>>> a.shape
(2,)
>>> print a
[1 2]
>>> ma=reshape(a,(1,2))
>>> ma.shape
(1, 2)
>>> print ma
[ [1 2]]
>>> mb=reshape(a,(2,1))
>>> mb.shape
(2, 1)
>>> print mb
[[1]
[2]]
The printout changes when you use reshape to give a single-dimensioned array a shape more like a traditional matrix. In the case of the 1-by-2 matrix ma, the extra space shows the multidimensional nature of the matrix. Contrast the printing of a with ma. See the difference between the original and the reshaped form? The elements are the same, but the shape is different.
Elements are taken row-wise from the source array when we reshape this 2-by-2 array:
>>> c=array(((1,2),(3,4)))
>>> c.shape
(2, 2)
>>> print c
[[1 2]
[3 4]]
>>> d=reshape(c,(1,4))
>>> d.shape
(1, 4)
>>> print d
[ [1 2 3 4]]
Now consider multiplication. Recall that the '*' operator works element-wise; only by using matrixmultiply() do we get the results we expect from traditional matrix multiplication.
>>> ma
array([ [1, 2]])
>>> mb
array([[1],
[2]])
>>> ma*mb
array([[1, 2],
[2, 4]])
>>> mb*ma
array([[1, 2],
[2, 4]])
>>> matrixmultiply(ma,mb)
array([ [5]])
>>> matrixmultiply(mb,ma)
array([[1, 2],
[2, 4]])
This behavior of the multiplication operator is governed by a mechanism called 'pseudo indices.' They really are useful, and I will explain them in a later article. For now, stick with the matrixmultiply() function to get the expected results.
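For arrays of identical shape, by contrast, '*' simply multiplies corresponding elements, just as '+' adds them. A small illustration of my own (output lines omitted, since formatting varies slightly between Numeric versions):

>>> u = array((1, 2, 3))
>>> v = array((4, 5, 6))
>>> print u * v          # element-wise product: 4, 10, 18
>>> print u + v          # element-wise sum: 5, 7, 9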
To generate a matrix inverse, you simply use the inverse() function found in the LinearAlgebra module supplied with the basic NumPy distribution. To gain access to the function, first perform an import:
>>> from LinearAlgebra import *
A simple example shows how to use the function:
>>> a=array(((3,2),(2,4)),Float)
>>> print a
[[ 3. 2.]
[ 2. 4.]]
>>> a_inv = inverse(a)
>>> print a_inv
[[ 0.5 -0.25 ]
[-0.25 0.375]]
To check the result, multiply a_inv by a (which should give the identity matrix).
>>> matrixmultiply(a_inv,a)
array([[ 1.00000000e+000, 1.11022302e-016],
[ 0.00000000e+000, 1.00000000e+000]])
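The tiny off-diagonal value (about 1e-16) is floating-point roundoff; for all practical purposes the product is the identity matrix. Remember, though, that not every square matrix has an inverse. Here is a sketch of my own showing the singular case (the exact error message depends on your Numeric version):

>>> s = array(((1, 2), (2, 4)), Float)   # the second row is twice the first, so the determinant is 0
>>> inverse(s)                           # raises an exception: a singular matrix has no inverse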
With these abilities in hand, what can you do? In another flashback to math class, the topic of simultaneous linear equations (also known as solving multiple equations with multiple unknowns) comes to mind. In the following example, the goal is to find the values x and y that make both equations true (at the same time, of course).
3x + 4y = 11
5x + 2y = 9
This is the classic problem of finding the place where two lines intersect in the plane. The lines may represent different phenomena, perhaps profit vs. cost in economics or growth vs. decay in biology. In any case, solving for the intersection can be tedious. Add a few more variables, and it can be very difficult.
You can generate the answer by hand or with a computer. The trick to answering the problem with matrices is to represent the equations as two 2-by-1 matrices that are equal. Note how we use matrix shape, multiplication, and inverses to find our answer:
[ 3x + 4y ]   [ 11 ]
[ 5x + 2y ] = [  9 ]
This can be rewritten as the product of a 2-by-2 coefficient matrix A and a 2-by-1 matrix of unknowns z being equal to a 2-by-1 matrix b, that is, Az = b:

[ 3 4 ] [ x ]   [ 11 ]
[ 5 2 ] [ y ] = [  9 ]
Matrix mathematics has a few more rules than traditional algebra does. One of those is seen in the equation below. In order to solve for the unknowns, z needs to be isolated on the left. Thinking algebraically, your first instinct might be to divide through by A. But matrix division is not defined! Fortunately, we can use the inverse. Multiply both sides by the inverse (just like you might multiply both sides by 1/A).
A^-1 A z = A^-1 b

I z = A^-1 b,  so  z = A^-1 b
On the left, the inverse cancels out matrix A, and the result is the identity matrix multiplied by z (which is just z). On the right we are left with the product A^-1 b. Since all values on the right are known and only the unknowns remain on the left, a solution is at hand: these equations are solved with just an inverse and a multiply.
Want to calculate that inverse and multiply by hand? I don't. Here is how you do it with NumPy:
>>> A=array(((3,4),(5,2)))
>>> b=reshape(array((11,9)),(2,1))
Notice how we generate b in one fell swoop using a combination of the array() and reshape() functions.
>>> Ainv = inverse(A)
>>> z=matrixmultiply(Ainv,b)
>>> z
array([[ 1.],
[ 2.]])
which is the answer. Checking the result against the original matrix A:
>>> matrixmultiply(A,z)
array([[ 11.],
[ 9.]])
which shows the right answer! Hopefully the exercise was less painful than solving things by hand. This simple problem only hints at the power of Python/NumPy; the true power is in solving larger problems. If the number of unknowns and equations were 10, it would be very difficult to solve the problem by hand.
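If all you want is the solution, the LinearAlgebra module also provides solve_linear_equations(), which solves Az = b directly without forming an explicit inverse. A quick sketch reusing A and b from above (check the Numeric manual for the details of your version):

>>> z2 = solve_linear_equations(A, b)    # same answer as before: x = 1 and y = 2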
I haven't even touched on DISLIN. I am saving that for the next article. Visualization can be very useful. Remember all that graphing of equations? DISLIN can do it for you. In the following plot, the green and red lines are those of the equations above. They intersect at the point x=1 and y=2, which is exactly the answer we got.

[Figure: 2D DISLIN plot of the two lines, intersecting at x=1, y=2]
In upcoming articles we will explore more useful matrix manipulations with NumPy. We will also graph the results with DISLIN to help us see what we are doing. You may find the 2D graph above uninspiring, but check out this graph of 3D information: the response of a control system (perhaps the anti-lock braking system in a car) at points of instability. The peaks actually rise to infinity and were clipped for plotting purposes.

[Figure: 3D DISLIN surface plot of the control-system response, with the infinite peaks clipped]
Want to launch ahead with Numerical Python? Are you struggling to remember that math? Hungry for more? While you are waiting for next month's article, here is where you can find out more. Read the Numerical Python Manual. It was written as a tutorial as well as a reference.
Want to buy some books on linear algebra and matrices? The book Applied Linear Algebra by Ben Noble and James W. Daniel is an excellent read. If your interests lie more in the computational aspect of matrices, the ever popular Matrix Computations by Gene H. Golub and Charles F. Van Loan should also be on your reading list.
Eric Hagemann specializes in fast algorithms for crunching numbers on all varieties of computers from embedded to mainframe.